| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/19582
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19582/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19582/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19582/events
|
https://github.com/huggingface/transformers/pull/19582
| 1,408,023,362
|
PR_kwDOCUB6oc5Av29-
| 19,582
|
[Doctest] Add configuration_time_series_transformer.py
|
{
"login": "SD-13",
"id": 89520981,
"node_id": "MDQ6VXNlcjg5NTIwOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/89520981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SD-13",
"html_url": "https://github.com/SD-13",
"followers_url": "https://api.github.com/users/SD-13/followers",
"following_url": "https://api.github.com/users/SD-13/following{/other_user}",
"gists_url": "https://api.github.com/users/SD-13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SD-13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SD-13/subscriptions",
"organizations_url": "https://api.github.com/users/SD-13/orgs",
"repos_url": "https://api.github.com/users/SD-13/repos",
"events_url": "https://api.github.com/users/SD-13/events{/privacy}",
"received_events_url": "https://api.github.com/users/SD-13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @ydshieh, I just had a doubt\r\n\r\n> Change the import order of the model and configuration classes\r\n\r\nhere, which order do you mean, the ascending or descending? because in `configuration_time_series_transformer.py`, it was already in ascending order.",
"@ydshieh, By mistake, I pushed some merged changes, I will revert them soon.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh PTAL. Thanks,",
"> A time series of pull request and pull accept! Thanks!\n\nThat sounds AWESOME!!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes part of issue https://github.com/huggingface/transformers/issues/19487.
Adds `configuration_time_series_transformer.py` to `Doc tests`.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19582/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19582",
"html_url": "https://github.com/huggingface/transformers/pull/19582",
"diff_url": "https://github.com/huggingface/transformers/pull/19582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19582.patch",
"merged_at": 1665769196000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19581
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19581/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19581/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19581/events
|
https://github.com/huggingface/transformers/issues/19581
| 1,408,016,350
|
I_kwDOCUB6oc5T7J_e
| 19,581
|
Inconsistent padding behavior for decoder_input_ids for Seq2Seq models
|
{
"login": "rajcscw",
"id": 7319647,
"node_id": "MDQ6VXNlcjczMTk2NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7319647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajcscw",
"html_url": "https://github.com/rajcscw",
"followers_url": "https://api.github.com/users/rajcscw/followers",
"following_url": "https://api.github.com/users/rajcscw/following{/other_user}",
"gists_url": "https://api.github.com/users/rajcscw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajcscw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajcscw/subscriptions",
"organizations_url": "https://api.github.com/users/rajcscw/orgs",
"repos_url": "https://api.github.com/users/rajcscw/repos",
"events_url": "https://api.github.com/users/rajcscw/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajcscw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"@ArthurZucker let me know if you need help with this",
"@ArthurZucker I can have a look at this if it is not being looked at.",
"Hey! 🙌 it's on my to do list, but can't look at it right now so feel free to do so 😀🤗",
"@patrickvonplaten, I've had a look at this and stepped through BART.\r\n\r\nI think it's solely to do with positional embeddings. For T5, MT5 there are relational embeddings, so it doesn't occur.\r\n\r\nFor certain types of models like the original Transformer where the positional embeddings are directly summed to the input embeddings. Any time there is left padding to the input, the positional encodings are not shifted. This happens for both the encoder and decoder forward pass with left side padding. So the left padding above actually affects the encoder output as well. \r\nWhen I shift the positional embeddings according to the mask the results are correct/same to unpadded case.\r\n\r\nIt is not usually a good idea to pad on the left side. I'm not sure if there is an efficient way to resolve this, as the input attention mask could be variable after left padding. \r\n\r\ne.g.\r\n```\r\ntokenizer.padding_side = \"left\"\r\nencodings = tokenizer.batch_encode_plus(['sample_sentence',\r\n 'A much much much longer sentence.'],\r\n padding=\"max_length\",\r\n max_length=10,\r\n return_tensors=\"pt\",\r\n return_attention_mask=True,\r\n truncation=True)\r\n```\r\nSo can't use a batch fold operation.\r\n\r\nLet me know if you think there should be a PR, as I would like to be involved as took me a while to work this out 😅\r\n\r\n ",
"Gently ping @ArthurZucker :-) Let me know if you'd like me to take over the issue if you have too much on your plate",
"Sure. I've found the root cause (positional embeddings aren't shifted along with the left padding) and I don't think it is necessarily an issue/resolvable. So only occurs with models that use non-relative positional embeddings e.g. BART\r\n\r\n@ArthurZucker I'm happy to help out more if you think there is a resolution. Perhaps a PR with a warning?\r\n\r\n",
"The same problem happens when trying to left pad BERT or any model with absolute position embeddings. I notice BERT has a warning in the docs under tips. \r\n\r\nI think this issue can be closed. I can draft a PR for adding to docs of other models with similar tip to BERT.",
"Hey! Really sorry for the late reply! Awesome work and debugging! 🤗\nI totally get the gist of it 😅 \n\nFeel free to open a PR to either : \n- Add a Warning when padding is left that outputs might be incorrect (similar to BERT?) \n- Actually shift the positional embeddings when the padding is left. This might be a bit tricky \n\nEven if it is not really recommended, if people actually use left padding (either unconsciously or for a particular application) it makes sense to shift the input! ",
"@jordiclive @ArthurZucker Thanks for looking into this. Is left padding not recommended only due to position embeddings? In general, for batch next tokens prediction, it is easier for users to get the logits from the last token for the entire batch with left padding. (I remember GPT-2 had a similar issue and the left padding support was added at some point which made batch generation easier)\r\n\r\nAlso from the perspective of providing consistent behavior across many seq2seq models (through AutoModelForSeq2Seq API), shifting the positional embeddings in case of left padding is desired IMO.\r\n\r\n ",
"@rajcscw. Yes, it is just because of the old-style positional embeddings.\r\nFor gpt-2 and BERT, there is an optional kwarg for position_ids. This would be the only way to do it, the user would have to provide the position_ids as it could be variable for each input in the batch and then the positional embeddings can be shifted.\r\n\r\nI am not sure about your exact use case for seq2seq models.\r\n\r\nAbove you have left padding with the tokenizer for the encoder input and then the manual left pad of decoder input ids. This would require two position_ids kwargs (encoder and decoder) for the model as they would likely be offset differently.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,670
| 1,670
|
NONE
| null |
### System Info
- transformers: 4.18.0
- torch: 1.12.0
- Python: 3.7.13
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
models = [
"t5-small",
"google/mt5-small",
"facebook/m2m100_418M",
"facebook/wmt19-ru-en",
"facebook/bart-base",
"facebook/blenderbot-400M-distill",
"google/bigbird-pegasus-large-arxiv",
"allenai/led-base-16384",
"microsoft/prophetnet-large-uncased"
]
for model_name in models:
# load the seq2seq model
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.padding_side = "left"
# sample sentence
sample_sentence = "generate some numbers"
encodings = tokenizer(sample_sentence,
padding="max_length",
max_length=5,
return_tensors="pt",
return_attention_mask=True,
truncation=True)
# decoder input ids (with a default start token for the model)
decoder_input_ids = torch.ones(1,1, dtype=torch.int32) * model.config.decoder_start_token_id
# model's forward without any padding for decoder_input_ids (hence without decoder_attn mask)
outputs = model.forward(input_ids=encodings.input_ids,
attention_mask=encodings.attention_mask,
decoder_input_ids=decoder_input_ids,
return_dict=True)
next_token_logits = outputs["logits"][:,-1, :]
# same decoder input ids but padded + decoder attention mask
decoder_input_ids_with_padding = torch.ones(1,3, dtype=torch.int32) * tokenizer.pad_token_id
decoder_input_ids_with_padding[:,-1] = model.config.decoder_start_token_id
decoder_attn_mask = torch.zeros(1,3)
decoder_attn_mask[:,-1] = 1
# model's forward with padding for decoder_input_ids (hence with decoder_attn mask)
outputs_with_padding = model.forward(input_ids=encodings.input_ids,
attention_mask=encodings.attention_mask,
decoder_input_ids=decoder_input_ids_with_padding,
decoder_attention_mask=decoder_attn_mask,
return_dict=True)
next_token_logits_with_padding = outputs_with_padding["logits"][:,-1,:]
# check if padding affects the logits
if torch.allclose(next_token_logits, next_token_logits_with_padding, atol=1e-3):
print(f"No issues with model: {model_name}")
else:
print(f"Issues with model: {model_name}")
```
### Expected behavior
This issue is regarding seq2seq models for conditional text generation.
The output logits differ when padding is applied to decoder_input_ids (together with a decoder_attention_mask). The issue affects only some models (e.g. BART, BlenderBot, Pegasus), while others show no differences (e.g. T5, MT5). Hence the behavior is inconsistent across different seq2seq models.
To reproduce these differences, run the provided script which does the following:
- Do one forward pass for a sample prompt (input_ids, attention_mask), additionally passing the default start token for the decoder.
- Do another forward pass for the prompt (same input_ids and attention_mask). But this time, decoder_input_ids is left padded to a seq length of 3 with the same default start token as the last token. Additionally, decoder_attention_mask is passed to avoid attending to padded tokens.
- Last token logits from these two forward passes are compared for equivalence (with a tolerance of 1e-3)
And this is done for several seq2seq models to see which models have these differences.
Ideally, we would expect padding not to cause any such differences.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19581/timeline
|
completed
| null | null |
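The comments on the issue above trace the discrepancy to absolute positional embeddings that are not shifted under left padding, and note that GPT-2/BERT-style models accept an explicit `position_ids` kwarg as a workaround. A minimal sketch of deriving such position ids from an attention mask; the helper name is hypothetical, not a transformers API:

```python
import torch

def left_pad_position_ids(attention_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: derive position_ids so absolute positional
    embeddings line up with the real (non-pad) tokens under left padding."""
    # cumsum over the mask assigns 1..n to real tokens; subtract 1 for 0-based positions
    position_ids = attention_mask.long().cumsum(-1) - 1
    # pad positions are masked out anyway; clamp them to 0 to keep valid indices
    position_ids.masked_fill_(attention_mask == 0, 0)
    return position_ids

mask = torch.tensor([[0, 0, 1, 1, 1],   # left-padded sequence
                     [1, 1, 1, 1, 1]])  # unpadded sequence
print(left_pad_position_ids(mask))
# first row: pads map to 0, real tokens to 0, 1, 2
```

Passing the result as `position_ids` (where a model supports it) makes the embeddings of the real tokens identical to the unpadded case, which is the fix the thread discusses for the encoder and decoder inputs separately.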
https://api.github.com/repos/huggingface/transformers/issues/19580
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19580/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19580/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19580/events
|
https://github.com/huggingface/transformers/pull/19580
| 1,407,933,119
|
PR_kwDOCUB6oc5AvjD4
| 19,580
|
[Doctest] Add configuration_vision_text_dual_encoder.py
|
{
"login": "SD-13",
"id": 89520981,
"node_id": "MDQ6VXNlcjg5NTIwOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/89520981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SD-13",
"html_url": "https://github.com/SD-13",
"followers_url": "https://api.github.com/users/SD-13/followers",
"following_url": "https://api.github.com/users/SD-13/following{/other_user}",
"gists_url": "https://api.github.com/users/SD-13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SD-13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SD-13/subscriptions",
"organizations_url": "https://api.github.com/users/SD-13/orgs",
"repos_url": "https://api.github.com/users/SD-13/repos",
"events_url": "https://api.github.com/users/SD-13/events{/privacy}",
"received_events_url": "https://api.github.com/users/SD-13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @ydshieh , Please take a pass. Thanks,",
"Hi @SD-13 I left a comment. Also, does the doctest pass now (as you asked a question in another thread)\r\n\r\nThe checks on this PR page doesn't run doctest. So it's important for the contributors to run it 🙏 please.",
"> Hi @SD-13 I left a comment. Also, does the doctest pass now (as you asked a question in another thread)\r\n> \r\n> The checks on this PR page doesn't run doctest. So it's important for the contributors to run it pray please.\r\n\r\nHey @ydshieh, That totally makes sense. I am still getting the error and I am giving the whole error log here. please help me to debug this. Thanks,\r\n\r\n\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.10.6, pytest-7.1.3, pluggy-1.0.0 -- /home/pirate/Downloads/huggingFace/transformers/transformers/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/pirate/Downloads/huggingFace/transformers, configfile: setup.cfg\r\ncollected 1 item \r\n\r\nsrc/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py::transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig FAILED\r\n\r\n=================================== FAILURES ===================================\r\n_ [doctest] transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig _\r\n062 \r\n063 >>> # Accessing the model configuration\r\n064 >>> config_vision = model.config.vision_config\r\n065 >>> config_text = model.config.text_config\r\n066 \r\n067 >>> # Saving the model, including its configuration\r\n068 >>> model.save_pretrained(\"my-model\")\r\n069 \r\n070 >>> # loading model and config from pretrained folder\r\n071 >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained(\"vit-bert\")\r\nUNEXPECTED EXCEPTION: OSError(\"vit-bert is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.\")\r\nTraceback (most recent call last):\r\n File 
\"/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 213, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/vit-bert/resolve/main/config.json\r\nThe above exception was the direct cause of the following exception:\r\nTraceback (most recent call last):\r\n File \"/home/pirate/Downloads/huggingFace/transformers/src/transformers/utils/hub.py\", line 409, in cached_file\r\n resolved_file = hf_hub_download(\r\n File \"/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/file_download.py\", line 1053, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n File \"/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/file_download.py\", line 1359, in get_hf_file_metadata\r\n hf_raise_for_status(r)\r\n File \"/home/pirate/Downloads/huggingFace/transformers/transformers/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 242, in hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. 
(Request ID: WiAMp3MzQkYIQuEIq-5Wj)\r\n\r\nRepository Not Found for url: https://huggingface.co/vit-bert/resolve/main/config.json.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf the repo is private, make sure you are authenticated.\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/doctest.py\", line 1350, in __run\r\n exec(compile(example.source, filename, \"single\",\r\n File \"<doctest transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig[8]>\", line 1, in <module>\r\n File \"/home/pirate/Downloads/huggingFace/transformers/src/transformers/configuration_utils.py\", line 531, in from_pretrained\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/pirate/Downloads/huggingFace/transformers/src/transformers/configuration_utils.py\", line 558, in get_config_dict\r\n config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/pirate/Downloads/huggingFace/transformers/src/transformers/configuration_utils.py\", line 613, in _get_config_dict\r\n resolved_config_file = cached_file(\r\n File \"/home/pirate/Downloads/huggingFace/transformers/src/transformers/utils/hub.py\", line 424, in cached_file\r\n raise EnvironmentError(\r\nOSError: vit-bert is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n/home/pirate/Downloads/huggingFace/transformers/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py:71: UnexpectedException\r\n063 >>> # Accessing the model configuration\r\n064 >>> config_vision = model.config.vision_config\r\n065 >>> 
config_text = model.config.text_config\r\n066 \r\n067 >>> # Saving the model, including its configuration\r\n068 >>> model.save_pretrained(\"my-model\")\r\n069 \r\n070 >>> # loading model and config from pretrained folder\r\n071 >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained(\"vit-bert\")\r\n072 >>> model = VisionTextDualEncoderModel.from_pretrained(\"vit-bert\", config=vision_text_config)\r\nUNEXPECTED EXCEPTION: NameError(\"name 'vision_text_config' is not defined\")\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/doctest.py\", line 1350, in __run\r\n exec(compile(example.source, filename, \"single\",\r\n File \"<doctest transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig[9]>\", line 1, in <module>\r\nNameError: name 'vision_text_config' is not defined\r\n/home/pirate/Downloads/huggingFace/transformers/src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py:72: UnexpectedException\r\n=========================== short test summary info ============================\r\nFAILED src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py::transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig\r\n============================== 1 failed in 5.48s ===============================\r\n",
"Hi! From the error message\r\n```\r\n071 >>> vision_text_config = VisionTextDualEncoderConfig.from_pretrained(\"vit-bert\")\r\nUNEXPECTED EXCEPTION: OSError(\"vit-bert is not a local folder and is not a valid model identifier listed on '[https://huggingface.co/models'\\nIf](https://huggingface.co/models'%5CnIf) this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True.\")\r\n```\r\nIt tells that \"vit-bert\" doesn't exist. And if you read the code a few lines above this line, you see\r\n```\r\nmodel.save_pretrained(\"my-model\")\r\n```\r\nSo the code save model in some name but try to load it with another name. Change it to\r\n```\r\nmodel.save_pretrained(\"vit-bert\")\r\n```\r\nwill work :-)",
"Yep it worked!!\r\n\r\n\r\n=========================================================== test session starts ============================================================\r\nplatform linux -- Python 3.10.6, pytest-7.1.3, pluggy-1.0.0 -- /home/pirate/Downloads/huggingFace/transformers/transformers/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/pirate/Downloads/huggingFace/transformers, configfile: setup.cfg\r\ncollected 1 item \r\n\r\nsrc/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py::transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig PASSED\r\n\r\n============================================================ 1 passed in 10.89s ============================================================\r\n\r\n\r\nThanks for the explanation, I got your point.\r\n",
"Well, I need you push the necessary change in order to merge :-)"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes part of issue https://github.com/huggingface/transformers/issues/19487.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19580/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19580",
"html_url": "https://github.com/huggingface/transformers/pull/19580",
"diff_url": "https://github.com/huggingface/transformers/pull/19580.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19580.patch",
"merged_at": 1665765916000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19579
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19579/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19579/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19579/events
|
https://github.com/huggingface/transformers/issues/19579
| 1,407,788,703
|
I_kwDOCUB6oc5T6Saf
| 19,579
|
Multi-target classification
|
{
"login": "baniasbaabe",
"id": 72874670,
"node_id": "MDQ6VXNlcjcyODc0Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/72874670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baniasbaabe",
"html_url": "https://github.com/baniasbaabe",
"followers_url": "https://api.github.com/users/baniasbaabe/followers",
"following_url": "https://api.github.com/users/baniasbaabe/following{/other_user}",
"gists_url": "https://api.github.com/users/baniasbaabe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baniasbaabe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baniasbaabe/subscriptions",
"organizations_url": "https://api.github.com/users/baniasbaabe/orgs",
"repos_url": "https://api.github.com/users/baniasbaabe/repos",
"events_url": "https://api.github.com/users/baniasbaabe/events{/privacy}",
"received_events_url": "https://api.github.com/users/baniasbaabe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### Feature request
Is there a way to do multi-target classification, e.g. for text classification?
for example:
Input: text
Output 1: Male/Female
Output 2: Happy/Angry
### Motivation
It's annoying to embed the outputs of the model into a custom model
### Your contribution
Unfortunately not
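There is no built-in multi-target head for this in the library; the usual workaround is a shared encoder with one classification head per target. A minimal pure-Python sketch of the idea (all names, sizes, and weights are illustrative, not a transformers API):

```python
# A shared encoder feeds several independent classification heads, one per
# target. The encode() stand-in and the weights below are illustrative only.

def encode(text):
    # Stand-in for a transformer encoder's pooled output (fixed-size features).
    return [float(len(text)), float(text.count(" ")), 1.0]

def linear(features, weights):
    # One row of weights per output class; returns one logit per class.
    return [sum(f * w for f, w in zip(features, row)) for row in weights]

heads = {
    "gender": [[0.1, 0.2, 0.0], [-0.1, 0.3, 0.5]],    # Male / Female
    "emotion": [[0.2, -0.4, 0.1], [0.0, 0.6, -0.2]],  # Happy / Angry
}

def predict(text):
    features = encode(text)  # shared features, computed once
    return {name: linear(features, w) for name, w in heads.items()}

outputs = predict("what a lovely day")
print(outputs)
```

In practice each head would be a trained linear layer on top of the same pooled transformer features, with one loss term per target summed during training.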
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19579/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19578
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19578/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19578/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19578/events
|
https://github.com/huggingface/transformers/pull/19578
| 1,407,787,577
|
PR_kwDOCUB6oc5AvDiV
| 19,578
|
Implement BigBird in TensorFlow
|
{
"login": "E-Aho",
"id": 46936677,
"node_id": "MDQ6VXNlcjQ2OTM2Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/46936677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/E-Aho",
"html_url": "https://github.com/E-Aho",
"followers_url": "https://api.github.com/users/E-Aho/followers",
"following_url": "https://api.github.com/users/E-Aho/following{/other_user}",
"gists_url": "https://api.github.com/users/E-Aho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/E-Aho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/E-Aho/subscriptions",
"organizations_url": "https://api.github.com/users/E-Aho/orgs",
"repos_url": "https://api.github.com/users/E-Aho/repos",
"events_url": "https://api.github.com/users/E-Aho/events{/privacy}",
"received_events_url": "https://api.github.com/users/E-Aho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19430 by implementing BigBird in TF
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
(WRITING TESTS IN PROGRESS)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
PR implements BigBird based on implementations of:
* [Original BigBird implementation](https://github.com/google-research/bigbird/blob/master/bigbird/core/attention.py)
* [BigBird implementation in PyTorch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py)
* [TF version of Bert](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py)
Raising this as a draft PR while I work on tests and ironing out issues I run into while testing, but thought it might be useful to let others have visibility of this while working on it!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19578/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19578/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19578",
"html_url": "https://github.com/huggingface/transformers/pull/19578",
"diff_url": "https://github.com/huggingface/transformers/pull/19578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19578.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19577
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19577/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19577/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19577/events
|
https://github.com/huggingface/transformers/pull/19577
| 1,407,778,927
|
PR_kwDOCUB6oc5AvBsQ
| 19,577
|
[Doctests] add `configuration_blenderbot.py`
|
{
"login": "grgkaran03",
"id": 95518516,
"node_id": "U_kgDOBbF_NA",
"avatar_url": "https://avatars.githubusercontent.com/u/95518516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grgkaran03",
"html_url": "https://github.com/grgkaran03",
"followers_url": "https://api.github.com/users/grgkaran03/followers",
"following_url": "https://api.github.com/users/grgkaran03/following{/other_user}",
"gists_url": "https://api.github.com/users/grgkaran03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grgkaran03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grgkaran03/subscriptions",
"organizations_url": "https://api.github.com/users/grgkaran03/orgs",
"repos_url": "https://api.github.com/users/grgkaran03/repos",
"events_url": "https://api.github.com/users/grgkaran03/events{/privacy}",
"received_events_url": "https://api.github.com/users/grgkaran03/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
`configuration_blenderbot.py` for doctests, addressing issue #19487. Please review it @ydshieh.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19577/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19577",
"html_url": "https://github.com/huggingface/transformers/pull/19577",
"diff_url": "https://github.com/huggingface/transformers/pull/19577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19577.patch",
"merged_at": 1665679573000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19576
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19576/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19576/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19576/events
|
https://github.com/huggingface/transformers/pull/19576
| 1,407,763,995
|
PR_kwDOCUB6oc5Au-bf
| 19,576
|
[Doctests] Add `configuration_blenderbot.py`
|
{
"login": "grgkaran03",
"id": 95518516,
"node_id": "U_kgDOBbF_NA",
"avatar_url": "https://avatars.githubusercontent.com/u/95518516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grgkaran03",
"html_url": "https://github.com/grgkaran03",
"followers_url": "https://api.github.com/users/grgkaran03/followers",
"following_url": "https://api.github.com/users/grgkaran03/following{/other_user}",
"gists_url": "https://api.github.com/users/grgkaran03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grgkaran03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grgkaran03/subscriptions",
"organizations_url": "https://api.github.com/users/grgkaran03/orgs",
"repos_url": "https://api.github.com/users/grgkaran03/repos",
"events_url": "https://api.github.com/users/grgkaran03/events{/privacy}",
"received_events_url": "https://api.github.com/users/grgkaran03/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19576). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Hi!
This is for the Blenderbot config, for issue #19487.
Please review this as well @ydshieh
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19576/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19576",
"html_url": "https://github.com/huggingface/transformers/pull/19576",
"diff_url": "https://github.com/huggingface/transformers/pull/19576.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19576.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19575
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19575/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19575/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19575/events
|
https://github.com/huggingface/transformers/pull/19575
| 1,407,665,848
|
PR_kwDOCUB6oc5AupMI
| 19,575
|
[Doctest] Add configuration_canine.py
|
{
"login": "IzicTemi",
"id": 19413520,
"node_id": "MDQ6VXNlcjE5NDEzNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/19413520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IzicTemi",
"html_url": "https://github.com/IzicTemi",
"followers_url": "https://api.github.com/users/IzicTemi/followers",
"following_url": "https://api.github.com/users/IzicTemi/following{/other_user}",
"gists_url": "https://api.github.com/users/IzicTemi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IzicTemi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IzicTemi/subscriptions",
"organizations_url": "https://api.github.com/users/IzicTemi/orgs",
"repos_url": "https://api.github.com/users/IzicTemi/repos",
"events_url": "https://api.github.com/users/IzicTemi/events{/privacy}",
"received_events_url": "https://api.github.com/users/IzicTemi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add configuration_canine.py to utils/documentation_tests.txt for doctest.
Based on issue [#19487](https://github.com/huggingface/transformers/issues/19487)
@ydshieh could you take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19575/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19575",
"html_url": "https://github.com/huggingface/transformers/pull/19575",
"diff_url": "https://github.com/huggingface/transformers/pull/19575.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19575.patch",
"merged_at": 1665663169000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19574
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19574/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19574/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19574/events
|
https://github.com/huggingface/transformers/pull/19574
| 1,407,587,220
|
PR_kwDOCUB6oc5AuYGf
| 19,574
|
[Doctest] Add `configuration_ctrl.py`
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @ydshieh! Another one is ready for review :)"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hi!
This is the CTRL config update
Based on the issue https://github.com/huggingface/transformers/issues/19487
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19574/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19574",
"html_url": "https://github.com/huggingface/transformers/pull/19574",
"diff_url": "https://github.com/huggingface/transformers/pull/19574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19574.patch",
"merged_at": 1665663005000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19573
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19573/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19573/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19573/events
|
https://github.com/huggingface/transformers/pull/19573
| 1,407,543,431
|
PR_kwDOCUB6oc5AuOqr
| 19,573
|
fix BLOOM ONNX config
|
{
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
Fixes the dynamic axes for `BloomOnnxConfig`. After PR https://github.com/huggingface/transformers/pull/18344, if `use_past` is used:
* past/present keys should have the dynamic axes `{0: 'batch', 1: 'past_sequence + sequence'}`
* past/present values should have the dynamic axes `{0: 'batch', 2: 'past_sequence + sequence'}`
Should also fix failing tests for BLOOM's ONNX export.
(tested using `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "bloom" -s -x`)
cc @lewtun @ydshieh
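The axes above can be sketched as a plain mapping, the way an ONNX config's dynamic-axes dictionary is typically assembled (a hypothetical sketch mirroring the PR description, not the actual `BloomOnnxConfig` code):

```python
# Hypothetical sketch of the dynamic-axes mapping described above; not the
# actual BloomOnnxConfig code. With use_past, keys and values carry their
# growing sequence dimension at different positions, so the axes dicts differ.
def past_key_values_axes(num_layers):
    axes = {}
    for i in range(num_layers):
        # keys: sequence dimension at index 1
        axes[f"past_key_values.{i}.key"] = {0: "batch", 1: "past_sequence + sequence"}
        # values: sequence dimension at index 2
        axes[f"past_key_values.{i}.value"] = {0: "batch", 2: "past_sequence + sequence"}
    return axes

dynamic_axes = past_key_values_axes(2)
print(dynamic_axes["past_key_values.0.key"])
```

Getting these indices wrong makes the exported graph reject any sequence length other than the one used at export time, which is what the failing slow tests caught.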
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19573/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19573",
"html_url": "https://github.com/huggingface/transformers/pull/19573",
"diff_url": "https://github.com/huggingface/transformers/pull/19573.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19573.patch",
"merged_at": 1665756290000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19572
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19572/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19572/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19572/events
|
https://github.com/huggingface/transformers/pull/19572
| 1,407,534,629
|
PR_kwDOCUB6oc5AuMvw
| 19,572
|
Fix fx symbolic tracing for deberta
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@michaelbenayoun With no response from @BigBird01 I think we can merge this. Can you just fix the conflict?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,671
| 1,671
|
MEMBER
| null |
# What does this PR do?
Deberta cannot be traced when using relative attention. This fixes the issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19572/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19572",
"html_url": "https://github.com/huggingface/transformers/pull/19572",
"diff_url": "https://github.com/huggingface/transformers/pull/19572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19572.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19571
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19571/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19571/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19571/events
|
https://github.com/huggingface/transformers/pull/19571
| 1,407,516,608
|
PR_kwDOCUB6oc5AuI12
| 19,571
|
Proposal: Remove the weird `inspect` in the ASR pipeline and make WhisperEncoder just nice to use.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19571). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19571). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
It seems that accepting `attention_mask` is kind of an invariant of our
models. For Seq2Seq ASR models, we had a special comment on how it
actually was important to send it.
Using `inspect` seems like a pretty brittle way to handle this case.
My suggestion is to simply add it as a kwarg and just ignore it,
with the docstring explaining why it's ignored.
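The proposal can be sketched as follows (illustrative only, not the actual `WhisperEncoder` code; the function name is hypothetical):

```python
# Illustrative sketch of the proposal; not the actual WhisperEncoder code.
# The encoder accepts `attention_mask` in its signature but deliberately
# ignores it, so the ASR pipeline can pass it unconditionally instead of
# probing each model's signature with `inspect`.
def encoder_forward(input_features, attention_mask=None, **kwargs):
    """Encode audio features.

    `attention_mask` is accepted but ignored: this encoder does not use one,
    and the parameter exists only so every model shares the same call shape.
    """
    # Real encoding would happen here; echo the inputs for the demo.
    return input_features

# Callers no longer need inspect.signature(...) checks before passing a mask.
out = encoder_forward([0.1, 0.2, 0.3], attention_mask=[1, 1, 1])
print(out)
```

The cost is one unused parameter; the benefit is that the pipeline treats every encoder uniformly.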
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19571/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19571",
"html_url": "https://github.com/huggingface/transformers/pull/19571",
"diff_url": "https://github.com/huggingface/transformers/pull/19571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19571.patch",
"merged_at": 1668414870000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19570
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19570/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19570/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19570/events
|
https://github.com/huggingface/transformers/pull/19570
| 1,407,496,155
|
PR_kwDOCUB6oc5AuEbU
| 19,570
|
Improve error messaging for ASR pipeline.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
- ~~Raise the error early (in `_sanitize`) so users don't waste time trying to
run queries with invalid params.~~ This is unfortunately not easy, because the order in which objects are resolved is tricky.
- Fix the error check: it ran after `config.inputs_to_logits_ratio` was used, so it
was masked by the failure when that property does not exist.
- Added a manual check on s2t for the error message.
No non-CTC model seems to be used by the default runner (they are all
skipped).
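A hedged sketch of the kind of early, explicit error message this aims for (the function name, model-type strings, and parameter names below are illustrative, not the pipeline's actual API):

```python
def check_ctc_params(model_type, decoder_kwargs):
    """Hypothetical validator: reject CTC-only arguments early and loudly
    when the loaded model is not CTC-based, instead of failing later on a
    missing config property."""
    if model_type != "ctc" and decoder_kwargs:
        raise ValueError(
            f"Parameters {sorted(decoder_kwargs)} are only supported for CTC "
            f"models, but the loaded model is of type {model_type!r}."
        )


# A seq2seq model passed CTC-only kwargs should fail with a clear message:
try:
    check_ctc_params("seq2seq_whisper", {"stride_length_s": 1})
    message = ""
except ValueError as e:
    message = str(e)
```

Validating up front like this keeps the traceback pointed at the user's mistake rather than at an unrelated internal attribute error.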
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19570/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19570/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19570",
"html_url": "https://github.com/huggingface/transformers/pull/19570",
"diff_url": "https://github.com/huggingface/transformers/pull/19570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19570.patch",
"merged_at": 1665760342000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19569
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19569/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19569/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19569/events
|
https://github.com/huggingface/transformers/issues/19569
| 1,407,495,839
|
I_kwDOCUB6oc5T5K6f
| 19,569
|
DeprecationWarning from Pillow (with Pillow ≥ 9.1.0)
|
{
"login": "tasercake",
"id": 13855549,
"node_id": "MDQ6VXNlcjEzODU1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13855549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tasercake",
"html_url": "https://github.com/tasercake",
"followers_url": "https://api.github.com/users/tasercake/followers",
"following_url": "https://api.github.com/users/tasercake/following{/other_user}",
"gists_url": "https://api.github.com/users/tasercake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tasercake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tasercake/subscriptions",
"organizations_url": "https://api.github.com/users/tasercake/orgs",
"repos_url": "https://api.github.com/users/tasercake/repos",
"events_url": "https://api.github.com/users/tasercake/events{/privacy}",
"received_events_url": "https://api.github.com/users/tasercake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @amyeroberts @alaradirik ",
"Closing as this has been resolved in #19654 "
] | 1,665
| 1,666
| 1,666
|
NONE
| null |
### System Info
`transformers`: 4.22.2
`pillow`: 9.2.0
Python 3.9.9
### Who can help?
@NielsRogge @amyeroberts (tagged based on changes to `image_utils.py` in #18520, but the issue seems to span most of the repo)
## Reproduction
[Pillow 9.1.0 deprecated a bunch of constants](https://pillow.readthedocs.io/en/stable/releasenotes/9.1.0.html#deprecations) such as `PIL.Image.BILINEAR`, leading to the following warning when importing the CLIP model
Note: I ran python with warnings enabled (`python -W always`)
```
>>> from transformers import CLIPFeatureExtractor
.../transformers/image_utils.py:239: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None):
```
Caused by [this line in `image_utils.py`](https://github.com/huggingface/transformers/blob/bbd150e92f84db72e7507d0c3ce69474b2948839/src/transformers/image_utils.py#L364) (though there are other instances where deprecated PIL constants are used)
These constants are pending removal in Pillow 10 (July 2023).
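One possible migration pattern (a sketch; the helper name is made up and this is not necessarily the fix the maintainers chose) is to prefer the `Resampling` enum introduced in Pillow 9.1 and fall back to the old module-level constants. It is demonstrated here with stand-in namespaces so it runs without Pillow installed:

```python
from types import SimpleNamespace


def compat_filter(image_module, name):
    """Fetch a resampling filter by name, preferring the `Resampling` enum
    (Pillow >= 9.1) over the deprecated module-level constants."""
    namespace = getattr(image_module, "Resampling", image_module)
    return getattr(namespace, name)


# Stand-ins for old- and new-style PIL.Image modules (no Pillow required):
old_style = SimpleNamespace(BILINEAR=2)
new_style = SimpleNamespace(Resampling=SimpleNamespace(BILINEAR=2))

assert compat_filter(old_style, "BILINEAR") == 2
assert compat_filter(new_style, "BILINEAR") == 2
```

With Pillow installed, `compat_filter(PIL.Image, "BILINEAR")` would resolve cleanly on both sides of the 9.1 boundary without emitting the `DeprecationWarning`.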
## Action required
Noticed that transformers doesn't currently enforce a Pillow version constraint in [setup.py](https://github.com/huggingface/transformers/blob/main/setup.py), so I've opened this issue to check if any action is required – **either enforce Pillow < 10, or migrate to using the new Pillow constants**
---
Additional info: discovered this warning when importing https://github.com/huggingface/diffusers – simply running `import diffusers` on a fresh install (version 0.4.1) triggers this warning for me.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19569/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19568
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19568/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19568/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19568/events
|
https://github.com/huggingface/transformers/issues/19568
| 1,407,407,627
|
I_kwDOCUB6oc5T41YL
| 19,568
|
Add Swin2SR
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"If there is consensus for this, can I work on it?",
"Sure!",
"cool, I will start with this.",
"Check out some tips on contributing a model here:\r\n\r\n* https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command\r\n* https://huggingface.co/docs/transformers/contributing\r\n* https://huggingface.co/docs/transformers/add_new_model\r\n",
"Am I supposed to add the model here:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/swinv2/modeling_swinv2.py\r\n\r\nAlso, is there any Super resolution models already present in Transformers?",
"Hi,\r\n\r\nNo each model in the library has its own folder and implementation files. We duplicate a lot of code in favor of easily readable code.\r\n\r\nThere's no super resolution model already available in Transformers, it would be the first one.",
"Thanks.\r\nCan I reuse the code from https://github.com/mv-lab/swin2sr repo in a new folder or build on top of the model from here `modeling_swinv2.py` ?",
"You can start from modeling_swinv2.py, copy it over and tweak it for the new model."
] | 1,665
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
### Model description
Swin2SR is a Swinv2-based model for image super resolution and compression.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/mv-lab/swin2sr
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19568/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19567
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19567/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19567/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19567/events
|
https://github.com/huggingface/transformers/pull/19567
| 1,407,393,177
|
PR_kwDOCUB6oc5AtuRf
| 19,567
|
[Doctests] Add `configuration_vit_mae.py` and `configuration_yoso.py`
|
{
"login": "grgkaran03",
"id": 95518516,
"node_id": "U_kgDOBbF_NA",
"avatar_url": "https://avatars.githubusercontent.com/u/95518516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grgkaran03",
"html_url": "https://github.com/grgkaran03",
"followers_url": "https://api.github.com/users/grgkaran03/followers",
"following_url": "https://api.github.com/users/grgkaran03/following{/other_user}",
"gists_url": "https://api.github.com/users/grgkaran03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grgkaran03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grgkaran03/subscriptions",
"organizations_url": "https://api.github.com/users/grgkaran03/orgs",
"repos_url": "https://api.github.com/users/grgkaran03/repos",
"events_url": "https://api.github.com/users/grgkaran03/events{/privacy}",
"received_events_url": "https://api.github.com/users/grgkaran03/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @grgkaran03 Do you intend to work on yoso in this PR? I see you reverted the change in a commit, then added it back in the last commit.",
"Hi! got a little confused. I worked in yoso and vit_mae in this pr, if it's fine... @ydshieh "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Add configuration_vit_mae.py and configuration_yoso.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
@ydshieh could you please review it?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19567/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19567",
"html_url": "https://github.com/huggingface/transformers/pull/19567",
"diff_url": "https://github.com/huggingface/transformers/pull/19567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19567.patch",
"merged_at": 1665666302000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19566
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19566/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19566/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19566/events
|
https://github.com/huggingface/transformers/pull/19566
| 1,407,370,473
|
PR_kwDOCUB6oc5AtpbJ
| 19,566
|
[Doctest] Add `configuration bloom.py`
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @ydshieh! It is ready for review :) "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Bloom config update
Based on issue #19487
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19566/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19566",
"html_url": "https://github.com/huggingface/transformers/pull/19566",
"diff_url": "https://github.com/huggingface/transformers/pull/19566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19566.patch",
"merged_at": 1665656078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19565
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19565/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19565/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19565/events
|
https://github.com/huggingface/transformers/pull/19565
| 1,407,298,127
|
PR_kwDOCUB6oc5Atac4
| 19,565
|
Fix `ImageToTextPipelineTests.test_small_model_tf`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
`ImageToTextPipelineTests::test_[small/large]_model_[pt/tf]` were all skipped before. I believe they were enabled after #19366 (or one of its child commits; BTW, thank you @sgugger!).
We have to update the incorrect expected values.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19565/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19565",
"html_url": "https://github.com/huggingface/transformers/pull/19565",
"diff_url": "https://github.com/huggingface/transformers/pull/19565.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19565.patch",
"merged_at": 1665757795000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19564
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19564/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19564/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19564/events
|
https://github.com/huggingface/transformers/pull/19564
| 1,407,253,176
|
PR_kwDOCUB6oc5AtRIz
| 19,564
|
[Doctest] Add `configuration_yoso.py`
|
{
"login": "grgkaran03",
"id": 95518516,
"node_id": "U_kgDOBbF_NA",
"avatar_url": "https://avatars.githubusercontent.com/u/95518516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grgkaran03",
"html_url": "https://github.com/grgkaran03",
"followers_url": "https://api.github.com/users/grgkaran03/followers",
"following_url": "https://api.github.com/users/grgkaran03/following{/other_user}",
"gists_url": "https://api.github.com/users/grgkaran03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grgkaran03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grgkaran03/subscriptions",
"organizations_url": "https://api.github.com/users/grgkaran03/orgs",
"repos_url": "https://api.github.com/users/grgkaran03/repos",
"events_url": "https://api.github.com/users/grgkaran03/events{/privacy}",
"received_events_url": "https://api.github.com/users/grgkaran03/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@grgkaran03 You mixed 2 PRs together. ViTMAE and YOSO, and there is another PR #19567 doing the same changes."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Add configuration_yoso.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
@sgugger could you please check it?
Thanks :)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19564/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19564",
"html_url": "https://github.com/huggingface/transformers/pull/19564",
"diff_url": "https://github.com/huggingface/transformers/pull/19564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19564.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19563
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19563/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19563/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19563/events
|
https://github.com/huggingface/transformers/pull/19563
| 1,407,249,135
|
PR_kwDOCUB6oc5AtQSm
| 19,563
|
[Doctest] Add `configuration_roberta.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"haha, no worries 😄 "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_roberta.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19563/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19563",
"html_url": "https://github.com/huggingface/transformers/pull/19563",
"diff_url": "https://github.com/huggingface/transformers/pull/19563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19563.patch",
"merged_at": 1665655578000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19562
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19562/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19562/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19562/events
|
https://github.com/huggingface/transformers/pull/19562
| 1,407,248,774
|
PR_kwDOCUB6oc5AtQN0
| 19,562
|
[Doctest] Add `configuration_reformer.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_reformer.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you check it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19562/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19562",
"html_url": "https://github.com/huggingface/transformers/pull/19562",
"diff_url": "https://github.com/huggingface/transformers/pull/19562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19562.patch",
"merged_at": 1665655396000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19561
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19561/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19561/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19561/events
|
https://github.com/huggingface/transformers/pull/19561
| 1,407,248,523
|
PR_kwDOCUB6oc5AtQKi
| 19,561
|
[Doctest] Add `configuration_vit.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_vit.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19561/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19561",
"html_url": "https://github.com/huggingface/transformers/pull/19561",
"diff_url": "https://github.com/huggingface/transformers/pull/19561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19561.patch",
"merged_at": 1665655634000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19560
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19560/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19560/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19560/events
|
https://github.com/huggingface/transformers/pull/19560
| 1,407,248,395
|
PR_kwDOCUB6oc5AtQI4
| 19,560
|
[Doctest] Add `configuration_deit.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_deit.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you please check it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19560/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19560",
"html_url": "https://github.com/huggingface/transformers/pull/19560",
"diff_url": "https://github.com/huggingface/transformers/pull/19560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19560.patch",
"merged_at": 1665655361000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19559
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19559/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19559/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19559/events
|
https://github.com/huggingface/transformers/pull/19559
| 1,407,194,636
|
PR_kwDOCUB6oc5AtEuc
| 19,559
|
Fix `test_tf_encode_plus_sent_to_model` for `TAPAS`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
`TapasTokenizer.encode_plus` requires a `table` argument. Currently, this test falls back to `TokenizerTesterMixin.test_tf_encode_plus_sent_to_model`, which doesn't provide this argument, so the test fails.
This PR overrides this test entirely in `TapasTokenizationTest`, mirroring the PyTorch counterpart `test_torch_encode_plus_sent_to_model`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19559/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19559",
"html_url": "https://github.com/huggingface/transformers/pull/19559",
"diff_url": "https://github.com/huggingface/transformers/pull/19559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19559.patch",
"merged_at": 1665756636000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19558
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19558/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19558/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19558/events
|
https://github.com/huggingface/transformers/pull/19558
| 1,407,146,187
|
PR_kwDOCUB6oc5As6WG
| 19,558
|
[Doctest] - Fixing doctest bert_generation configuration
|
{
"login": "Threepointone4",
"id": 22583613,
"node_id": "MDQ6VXNlcjIyNTgzNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22583613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Threepointone4",
"html_url": "https://github.com/Threepointone4",
"followers_url": "https://api.github.com/users/Threepointone4/followers",
"following_url": "https://api.github.com/users/Threepointone4/following{/other_user}",
"gists_url": "https://api.github.com/users/Threepointone4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Threepointone4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Threepointone4/subscriptions",
"organizations_url": "https://api.github.com/users/Threepointone4/orgs",
"repos_url": "https://api.github.com/users/Threepointone4/repos",
"events_url": "https://api.github.com/users/Threepointone4/events{/privacy}",
"received_events_url": "https://api.github.com/users/Threepointone4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixing doctest in bert_generation configuration. Issue : #19487
## Fixes # (issue)
Added `(with random weights)` in `modeling_bert_generation.py`.
Added `modeling_bert_generation.py` in `documentation_tests.txt`
## Who can review?
@ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19558/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19558",
"html_url": "https://github.com/huggingface/transformers/pull/19558",
"diff_url": "https://github.com/huggingface/transformers/pull/19558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19558.patch",
"merged_at": 1665655142000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19557
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19557/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19557/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19557/events
|
https://github.com/huggingface/transformers/pull/19557
| 1,407,108,417
|
PR_kwDOCUB6oc5AsyLm
| 19,557
|
Fixing mobile bert configuration doctest
|
{
"login": "RamitPahwa",
"id": 16895131,
"node_id": "MDQ6VXNlcjE2ODk1MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16895131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RamitPahwa",
"html_url": "https://github.com/RamitPahwa",
"followers_url": "https://api.github.com/users/RamitPahwa/followers",
"following_url": "https://api.github.com/users/RamitPahwa/following{/other_user}",
"gists_url": "https://api.github.com/users/RamitPahwa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RamitPahwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RamitPahwa/subscriptions",
"organizations_url": "https://api.github.com/users/RamitPahwa/orgs",
"repos_url": "https://api.github.com/users/RamitPahwa/repos",
"events_url": "https://api.github.com/users/RamitPahwa/events{/privacy}",
"received_events_url": "https://api.github.com/users/RamitPahwa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@RamitPahwa There is an extra empty line in `configuration_mobilebert.py` that causes the tests to fail. \r\nIt would work nicely once you remove it =)",
"@daspartho Thanks for the help, should work now!",
"And thanks @daspartho for the help 💯 "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Add configuration_mobilebert.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
@sgugger / @ydshieh
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19557/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19557",
"html_url": "https://github.com/huggingface/transformers/pull/19557",
"diff_url": "https://github.com/huggingface/transformers/pull/19557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19557.patch",
"merged_at": 1665654995000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19556
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19556/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19556/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19556/events
|
https://github.com/huggingface/transformers/pull/19556
| 1,407,096,743
|
PR_kwDOCUB6oc5Asvs6
| 19,556
|
Fixing the Doctest for imageGPT config
|
{
"login": "RamitPahwa",
"id": 16895131,
"node_id": "MDQ6VXNlcjE2ODk1MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16895131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RamitPahwa",
"html_url": "https://github.com/RamitPahwa",
"followers_url": "https://api.github.com/users/RamitPahwa/followers",
"following_url": "https://api.github.com/users/RamitPahwa/following{/other_user}",
"gists_url": "https://api.github.com/users/RamitPahwa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RamitPahwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RamitPahwa/subscriptions",
"organizations_url": "https://api.github.com/users/RamitPahwa/orgs",
"repos_url": "https://api.github.com/users/RamitPahwa/repos",
"events_url": "https://api.github.com/users/RamitPahwa/events{/privacy}",
"received_events_url": "https://api.github.com/users/RamitPahwa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Add configuration_imagegpt.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
@sgugger / @ydshieh
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19556/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19556",
"html_url": "https://github.com/huggingface/transformers/pull/19556",
"diff_url": "https://github.com/huggingface/transformers/pull/19556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19556.patch",
"merged_at": 1665654875000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19555
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19555/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19555/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19555/events
|
https://github.com/huggingface/transformers/pull/19555
| 1,407,001,969
|
PR_kwDOCUB6oc5AscSx
| 19,555
|
add gloo backend support for CPU DDP
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @yao-matrix please help review",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,666
| 1,665
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19555/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19555",
"html_url": "https://github.com/huggingface/transformers/pull/19555",
"diff_url": "https://github.com/huggingface/transformers/pull/19555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19555.patch",
"merged_at": 1665757096000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19554
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19554/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19554/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19554/events
|
https://github.com/huggingface/transformers/issues/19554
| 1,406,927,324
|
I_kwDOCUB6oc5T3AHc
| 19,554
|
Any example for Wav2vec2ForXVector training?
|
{
"login": "LEECHOONGHO",
"id": 44384060,
"node_id": "MDQ6VXNlcjQ0Mzg0MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/44384060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LEECHOONGHO",
"html_url": "https://github.com/LEECHOONGHO",
"followers_url": "https://api.github.com/users/LEECHOONGHO/followers",
"following_url": "https://api.github.com/users/LEECHOONGHO/following{/other_user}",
"gists_url": "https://api.github.com/users/LEECHOONGHO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LEECHOONGHO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LEECHOONGHO/subscriptions",
"organizations_url": "https://api.github.com/users/LEECHOONGHO/orgs",
"repos_url": "https://api.github.com/users/LEECHOONGHO/repos",
"events_url": "https://api.github.com/users/LEECHOONGHO/events{/privacy}",
"received_events_url": "https://api.github.com/users/LEECHOONGHO/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There isn't currently an example for XVector training in Transformers! Would you like to contribute this? You can begin simply by opening a PR with the python script that you're using. We can then iterate on it to verify correctness and hopefully get a successfully trained XVector system!\r\n\r\nProbably also worth asking the same question on the forum to boost visibility: https://discuss.huggingface.co\r\n\r\nAlso cc @anton-l who has a speaker verification (SV) checkpoint on the Hub (https://huggingface.co/anton-l/wav2vec2-base-superb-sv), wondering if you had a local script for XVector fine-tuning?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, sorry for missing this! To answer @sanchit-gandhi's question: my SV checkpoint is a ported version of W2V2+XVector from S3PRL: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1\r\nSo no finetuning scripts yet, just inference ",
"Hey @LEECHOONGHO! If you want to work together to get a working XVector training script, feel free to open a PR with the script that you've got and tag me. We can iterate on it, ensuring correctness and building up to a full Transformers examples script! I think this would be of benefit to others in the community 🤗",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,675
| 1,675
|
NONE
| null |
Hello, I'm trying to train a Wav2vec2ForXVector model with the settings below, but the training loss stays stuck between 2.3 and 2.7. Is there any example of Wav2vec2ForXVector training, or has anyone experienced something like this?
- pretrained_model: Korean wav2vec2
- num of audio clips: 2300k
- num of speakers: 11223
- num of used encoder layers: 1
- output_xvector_dim: 512
- learning rate: 2e-5
- batch size: 512

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19554/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19553
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19553/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19553/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19553/events
|
https://github.com/huggingface/transformers/pull/19553
| 1,406,905,249
|
PR_kwDOCUB6oc5AsHm4
| 19,553
|
[WIP] Better Transformers integrations for BERT
|
{
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR is the first in a series that adds [PyTorch Better Transformers](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) support to the BERT model for inference speed-ups.
# Usage
```python
model = AutoModelForSequenceClassification.from_pretrained("bert-large-cased").eval().to(device)
model.bert.encoder.to_fast()
```
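For context, stock PyTorch already exposes the Better Transformer fastpath on its built-in encoder via `enable_nested_tensor`. A minimal sketch, independent of the `to_fast()` API this PR proposes (requires PyTorch >= 1.12):

```python
import torch
import torch.nn as nn

# batch_first=True and eval mode are among the conditions for the fastpath.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2, enable_nested_tensor=True)
encoder.eval()

x = torch.randn(2, 10, 64)  # (batch, seq_len, d_model)
with torch.inference_mode():
    out = encoder(x)
print(out.shape)  # torch.Size([2, 10, 64])
```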
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -- Offline discussions
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -- In progress
## Who can review?
@LysandreJik @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19553/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19553/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19553",
"html_url": "https://github.com/huggingface/transformers/pull/19553",
"diff_url": "https://github.com/huggingface/transformers/pull/19553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19553.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19552
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19552/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19552/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19552/events
|
https://github.com/huggingface/transformers/pull/19552
| 1,406,856,116
|
PR_kwDOCUB6oc5Ar9At
| 19,552
|
fix flaubert tokenizer
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fix an issue from #19330, see the comments in the changes.
Current test failure [here](https://github.com/huggingface/transformers/actions/runs/3231701081/jobs/5291558259)
```bash
E if self.do_lowercase:
E AttributeError: 'FlaubertTokenizer' object has no attribute 'do_lowercase'
```
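The root cause is a common pattern: a method reads an attribute that `__init__` never stored. A stripped-down sketch of the failure mode and the fix (class and method names simplified, not the real tokenizer code):

```python
class BrokenTokenizer:
    def __init__(self, do_lowercase=False):
        # Bug: the flag is accepted but never stored on the instance.
        pass

    def _tokenize(self, text):
        if self.do_lowercase:  # AttributeError raised here at call time
            text = text.lower()
        return text.split()

class FixedTokenizer:
    def __init__(self, do_lowercase=False):
        self.do_lowercase = do_lowercase  # store the flag before any use

    def _tokenize(self, text):
        if self.do_lowercase:
            text = text.lower()
        return text.split()

print(FixedTokenizer(do_lowercase=True)._tokenize("Bonjour Le Monde"))
```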
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19552/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19552",
"html_url": "https://github.com/huggingface/transformers/pull/19552",
"diff_url": "https://github.com/huggingface/transformers/pull/19552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19552.patch",
"merged_at": 1665757861000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19551
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19551/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19551/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19551/events
|
https://github.com/huggingface/transformers/pull/19551
| 1,406,741,814
|
PR_kwDOCUB6oc5ArkIY
| 19,551
|
GPTTokenizer dependency removed from deberta class
|
{
"login": "RamitPahwa",
"id": 16895131,
"node_id": "MDQ6VXNlcjE2ODk1MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16895131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RamitPahwa",
"html_url": "https://github.com/RamitPahwa",
"followers_url": "https://api.github.com/users/RamitPahwa/followers",
"following_url": "https://api.github.com/users/RamitPahwa/following{/other_user}",
"gists_url": "https://api.github.com/users/RamitPahwa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RamitPahwa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RamitPahwa/subscriptions",
"organizations_url": "https://api.github.com/users/RamitPahwa/orgs",
"repos_url": "https://api.github.com/users/RamitPahwa/repos",
"events_url": "https://api.github.com/users/RamitPahwa/events{/privacy}",
"received_events_url": "https://api.github.com/users/RamitPahwa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Hi @sgugger, I am raising this clean PR to supersede PR #19421.
Related to #19303:
- the GPT2Tokenizer dependency has been removed from DebertaTokenizer
- the GPT2TokenizerFast dependency has been removed from DebertaTokenizerFast
I ran `pytest tests/models/deberta/test_tokenization_deberta.py`, which passed.
Thanks for reviewing!
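The removal follows the library's single-file-per-model policy: shared logic is copied rather than inherited across models. A toy sketch of the before/after pattern (the classes here are illustrative stand-ins, not the real tokenizer code):

```python
# Illustrative pattern only; not the real transformers tokenizers.
class GPT2StyleTokenizer:
    def __init__(self, vocab):
        self.vocab = vocab

    def tokenize(self, text):
        return text.split()

# Before: subclassing couples DeBERTa's tokenizer to GPT-2's.
class DebertaTokenizerBefore(GPT2StyleTokenizer):
    pass

# After: the shared logic is duplicated so the model is self-contained.
# transformers marks such duplication with "# Copied from ..." comments
# so an automated check can keep the copies in sync.
class DebertaTokenizerAfter:
    def __init__(self, vocab):
        self.vocab = vocab

    def tokenize(self, text):  # Copied from GPT2StyleTokenizer.tokenize
        return text.split()
```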
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19551/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19551",
"html_url": "https://github.com/huggingface/transformers/pull/19551",
"diff_url": "https://github.com/huggingface/transformers/pull/19551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19551.patch",
"merged_at": 1665758799000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19550
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19550/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19550/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19550/events
|
https://github.com/huggingface/transformers/pull/19550
| 1,406,717,674
|
PR_kwDOCUB6oc5Are5q
| 19,550
|
[Doctest] Add configuration_big_bird.py
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, there seems to be a problem with this line.\r\n\r\nCheck_code_quality is returning this error:\r\n\r\nWhen I tried to keep the line below 119 characters by moving some of the text to the next line, I got another error.\r\n\r\nThus I cannot add the 'with random weights' text, and I am wondering how to solve this problem.\r\n"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hi!
Updating configuration_big_bird.py
Based on issue https://github.com/huggingface/transformers/issues/19487
Tests passed

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19550/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19550",
"html_url": "https://github.com/huggingface/transformers/pull/19550",
"diff_url": "https://github.com/huggingface/transformers/pull/19550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19550.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19549
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19549/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19549/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19549/events
|
https://github.com/huggingface/transformers/pull/19549
| 1,406,714,751
|
PR_kwDOCUB6oc5AreRo
| 19,549
|
[Doctest] Add `configuration_gpt2.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_gpt2.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@sgugger could you please check it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19549/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19549",
"html_url": "https://github.com/huggingface/transformers/pull/19549",
"diff_url": "https://github.com/huggingface/transformers/pull/19549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19549.patch",
"merged_at": 1665633539000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19548
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19548/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19548/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19548/events
|
https://github.com/huggingface/transformers/pull/19548
| 1,406,697,648
|
PR_kwDOCUB6oc5ArajK
| 19,548
|
[Whisper] Fix gradient checkpointing (again!)
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/19537#issuecomment-1276629836
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19548/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19548",
"html_url": "https://github.com/huggingface/transformers/pull/19548",
"diff_url": "https://github.com/huggingface/transformers/pull/19548.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19548.patch",
"merged_at": 1665763717000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19547
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19547/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19547/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19547/events
|
https://github.com/huggingface/transformers/pull/19547
| 1,406,662,232
|
PR_kwDOCUB6oc5ArSmL
| 19,547
|
Fix checkpoint in `MarkupLMConfig`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fix checkpoint in `MarkupLMConfig`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19547/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19547",
"html_url": "https://github.com/huggingface/transformers/pull/19547",
"diff_url": "https://github.com/huggingface/transformers/pull/19547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19547.patch",
"merged_at": 1665646650000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19546
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19546/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19546/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19546/events
|
https://github.com/huggingface/transformers/pull/19546
| 1,406,652,144
|
PR_kwDOCUB6oc5ArQaJ
| 19,546
|
[Doctest] Add configuration_big_bird.py
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hi!
Updating configuration_big_bird.py
Based on issue https://github.com/huggingface/transformers/issues/19487
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19546/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19546",
"html_url": "https://github.com/huggingface/transformers/pull/19546",
"diff_url": "https://github.com/huggingface/transformers/pull/19546.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19546.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19545
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19545/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19545/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19545/events
|
https://github.com/huggingface/transformers/pull/19545
| 1,406,645,640
|
PR_kwDOCUB6oc5ArPB0
| 19,545
|
added type hints for Yolos Pytorch model
|
{
"login": "WhiteWolf47",
"id": 91716569,
"node_id": "U_kgDOBXd72Q",
"avatar_url": "https://avatars.githubusercontent.com/u/91716569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WhiteWolf47",
"html_url": "https://github.com/WhiteWolf47",
"followers_url": "https://api.github.com/users/WhiteWolf47/followers",
"following_url": "https://api.github.com/users/WhiteWolf47/following{/other_user}",
"gists_url": "https://api.github.com/users/WhiteWolf47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WhiteWolf47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WhiteWolf47/subscriptions",
"organizations_url": "https://api.github.com/users/WhiteWolf47/orgs",
"repos_url": "https://api.github.com/users/WhiteWolf47/repos",
"events_url": "https://api.github.com/users/WhiteWolf47/events{/privacy}",
"received_events_url": "https://api.github.com/users/WhiteWolf47/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger can you please help?",
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks good, but a few comments!\r\n\r\n1) You'll need to run `make fixup` to ensure our code style stays consistent. You might need to install the dev dependencies for that, [see here](https://huggingface.co/docs/transformers/contributing#start-contributing-pull-requests)\r\n2) We'd prefer to just use the built-in `bool` rather than `traitlets.Bool`\r\n3) The most important methods to add type hints to are the `forward()` methods on the main model classes (e.g. in `modeling_yolos.py`). Still, I'll totally accept PRs covering other methods!",
"> This looks good, but a few comments!\r\n> \r\n> 1. You'll need to run `make fixup` to ensure our code style stays consistent. You might need to install the dev dependencies for that, [see here](https://huggingface.co/docs/transformers/contributing#start-contributing-pull-requests)\r\n> 2. We'd prefer to just use the built-in `bool` rather than `traitlets.Bool`\r\n> 3. The most important methods to add type hints to are the `forward()` methods on the main model classes (e.g. in `modeling_yolos.py`). Still, I'll totally accept PRs covering other methods!\r\n\r\nhey, i installed the dev dependencies but make fixup is giving error: no such file or directory. I'll totally try to cover the methods you mentioned in my next pr :)\r\n",
"Alright, let me see if I can run it here!",
"> Looks good to me now - I deleted the `traitlets.Bool` thing, but aside from that I'm happy with it!\r\n\r\nThanks man, It feels good finally get my first PR on transformers, looking forward to contributing more"
] | 1,665
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19545/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19545",
"html_url": "https://github.com/huggingface/transformers/pull/19545",
"diff_url": "https://github.com/huggingface/transformers/pull/19545.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19545.patch",
"merged_at": 1666013662000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19544
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19544/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19544/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19544/events
|
https://github.com/huggingface/transformers/pull/19544
| 1,406,645,282
|
PR_kwDOCUB6oc5ArO9F
| 19,544
|
Add normalize to image transforms module
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,666
| 1,666
|
COLLABORATOR
| null |
# What does this PR do?
Adds `normalize` to the image transforms modules, as well as a helper utility function `get_channel_dimension_axis`.
* `normalize`: performs equivalent normalization of an image as previous feature extractors
* `get_channel_dimension_axis`: Helper function which returns which axis number the channel dimension is on.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19544/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19544",
"html_url": "https://github.com/huggingface/transformers/pull/19544",
"diff_url": "https://github.com/huggingface/transformers/pull/19544.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19544.patch",
"merged_at": 1666022535000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19543
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19543/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19543/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19543/events
|
https://github.com/huggingface/transformers/pull/19543
| 1,406,635,090
|
PR_kwDOCUB6oc5ArMwu
| 19,543
|
added type hints for Yolos Pytorch model
|
{
"login": "WhiteWolf47",
"id": 91716569,
"node_id": "U_kgDOBXd72Q",
"avatar_url": "https://avatars.githubusercontent.com/u/91716569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WhiteWolf47",
"html_url": "https://github.com/WhiteWolf47",
"followers_url": "https://api.github.com/users/WhiteWolf47/followers",
"following_url": "https://api.github.com/users/WhiteWolf47/following{/other_user}",
"gists_url": "https://api.github.com/users/WhiteWolf47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WhiteWolf47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WhiteWolf47/subscriptions",
"organizations_url": "https://api.github.com/users/WhiteWolf47/orgs",
"repos_url": "https://api.github.com/users/WhiteWolf47/repos",
"events_url": "https://api.github.com/users/WhiteWolf47/events{/privacy}",
"received_events_url": "https://api.github.com/users/WhiteWolf47/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19543/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19543",
"html_url": "https://github.com/huggingface/transformers/pull/19543",
"diff_url": "https://github.com/huggingface/transformers/pull/19543.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19543.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19542
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19542/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19542/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19542/events
|
https://github.com/huggingface/transformers/pull/19542
| 1,406,619,954
|
PR_kwDOCUB6oc5ArJeh
| 19,542
|
[Doctest] Add `configuration_beit.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_beit.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@sgugger could you take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19542/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19542",
"html_url": "https://github.com/huggingface/transformers/pull/19542",
"diff_url": "https://github.com/huggingface/transformers/pull/19542.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19542.patch",
"merged_at": 1665599893000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19541
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19541/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19541/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19541/events
|
https://github.com/huggingface/transformers/pull/19541
| 1,406,579,781
|
PR_kwDOCUB6oc5ArAxp
| 19,541
|
Albert config update
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Hey!
# What does this PR do?
Updates Albert config following the below issue (note that the first step is already done, thus no change was made)

Tests passed

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19541/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19541",
"html_url": "https://github.com/huggingface/transformers/pull/19541",
"diff_url": "https://github.com/huggingface/transformers/pull/19541.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19541.patch",
"merged_at": 1665597775000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19540
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19540/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19540/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19540/events
|
https://github.com/huggingface/transformers/pull/19540
| 1,406,576,629
|
PR_kwDOCUB6oc5ArAF2
| 19,540
|
[Doctest] `Add configuration_whisper.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@daspartho There is an extra empty line which fails the tests. Could you remove it? Thanks.\r\n\r\nIn fact, you can use `make style` to see the necessary change(s).",
"@ydshieh removed it, should work nicely now :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_whisper.py` to `utils/documentation_tests.txt` for doctest.
Based on issue https://github.com/huggingface/transformers/issues/19487
@sgugger could you please check it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19540/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19540",
"html_url": "https://github.com/huggingface/transformers/pull/19540",
"diff_url": "https://github.com/huggingface/transformers/pull/19540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19540.patch",
"merged_at": 1665597803000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19539
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19539/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19539/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19539/events
|
https://github.com/huggingface/transformers/pull/19539
| 1,406,562,234
|
PR_kwDOCUB6oc5Aq89P
| 19,539
|
[Doctest] Add `configuration_yolos.py`
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Add `configuration_yolos.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@sgugger could you please take a look at it?
Thanks =)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19539/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19539",
"html_url": "https://github.com/huggingface/transformers/pull/19539",
"diff_url": "https://github.com/huggingface/transformers/pull/19539.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19539.patch",
"merged_at": 1665597686000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19538
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19538/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19538/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19538/events
|
https://github.com/huggingface/transformers/pull/19538
| 1,406,522,630
|
PR_kwDOCUB6oc5Aq0YW
| 19,538
|
[Whisper] Fix gradient checkpointing
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19537
Sanity check:
```python
from transformers import WhisperFeatureExtractor, WhisperConfig, WhisperForConditionalGeneration
import numpy as np
feature_extractor = WhisperFeatureExtractor()
config = WhisperConfig()
model_encoder = WhisperForConditionalGeneration(config).model.encoder
# enable checkpointing
model_encoder.gradient_checkpointing_enable()
# create dummy audio input
sample = {"array": np.ones(1000), "sampling_rate": 16000}
# pre-process audio input
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
# forward pass
outputs = model_encoder(inputs)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19538/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19538",
"html_url": "https://github.com/huggingface/transformers/pull/19538",
"diff_url": "https://github.com/huggingface/transformers/pull/19538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19538.patch",
"merged_at": 1665594457000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19537
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19537/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19537/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19537/events
|
https://github.com/huggingface/transformers/issues/19537
| 1,406,513,415
|
I_kwDOCUB6oc5T1bEH
| 19,537
|
[Whisper] Gradient checkpointing fails
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Jumped the gun 😅 Doesn't quite yet work with the decoder!\r\n```python\r\nfrom transformers import WhisperFeatureExtractor, WhisperConfig, WhisperForConditionalGeneration\r\nimport numpy as np\r\nimport torch\r\n\r\nfeature_extractor = WhisperFeatureExtractor()\r\nconfig = WhisperConfig()\r\nmodel = WhisperForConditionalGeneration(config)\r\n\r\n# enable checkpointing\r\nmodel.gradient_checkpointing_enable()\r\n\r\n# create dummy audio input\r\nsample = {\"array\": np.ones(1000), \"sampling_rate\": 16000}\r\n\r\n# pre-process audio input\r\ninputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"], return_tensors=\"pt\").input_features\r\n\r\n# create dummy decoder input ids\r\ndecoder_input_ids = torch.arange(10).reshape(1, 10) # bsz, seq_len = (1, 10)\r\n\r\n# forward pass\r\noutputs = model(inputs, decoder_input_ids=decoder_input_ids)\r\n```\r\n\r\n<details>\r\n<summary> Traceback </summary>\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [2], in <cell line: 22>()\r\n 19 decoder_input_ids = torch.arange(10).reshape(1, 10) # bsz, seq_len = (1, 10)\r\n 21 # forward pass\r\n---> 22 outputs = model(inputs, decoder_input_ids=decoder_input_ids)\r\n\r\nFile ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)\r\n 1127 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1128 # this function, and just call forward.\r\n 1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1130 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1131 return forward_call(*input, **kwargs)\r\n 1132 # Do not call functions when jit is used\r\n 1133 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/transformers/src/transformers/models/whisper/modeling_whisper.py:1168, in 
WhisperForConditionalGeneration.forward(self, input_features, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1163 if decoder_input_ids is None:\r\n 1164 decoder_input_ids = shift_tokens_right(\r\n 1165 labels, self.config.pad_token_id, self.config.decoder_start_token_id\r\n 1166 )\r\n-> 1168 outputs = self.model(\r\n 1169 input_features,\r\n 1170 decoder_input_ids=decoder_input_ids,\r\n 1171 encoder_outputs=encoder_outputs,\r\n 1172 decoder_attention_mask=decoder_attention_mask,\r\n 1173 head_mask=head_mask,\r\n 1174 decoder_head_mask=decoder_head_mask,\r\n 1175 cross_attn_head_mask=cross_attn_head_mask,\r\n 1176 past_key_values=past_key_values,\r\n 1177 decoder_inputs_embeds=decoder_inputs_embeds,\r\n 1178 use_cache=use_cache,\r\n 1179 output_attentions=output_attentions,\r\n 1180 output_hidden_states=output_hidden_states,\r\n 1181 return_dict=return_dict,\r\n 1182 )\r\n 1183 lm_logits = self.proj_out(outputs[0])\r\n 1185 loss = None\r\n\r\nFile ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)\r\n 1127 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1128 # this function, and just call forward.\r\n 1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1130 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1131 return forward_call(*input, **kwargs)\r\n 1132 # Do not call functions when jit is used\r\n 1133 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/transformers/src/transformers/models/whisper/modeling_whisper.py:1044, in WhisperModel.forward(self, input_features, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, 
decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1037 encoder_outputs = BaseModelOutput(\r\n 1038 last_hidden_state=encoder_outputs[0],\r\n 1039 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,\r\n 1040 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,\r\n 1041 )\r\n 1043 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)\r\n-> 1044 decoder_outputs = self.decoder(\r\n 1045 input_ids=decoder_input_ids,\r\n 1046 attention_mask=decoder_attention_mask,\r\n 1047 encoder_hidden_states=encoder_outputs[0],\r\n 1048 head_mask=decoder_head_mask,\r\n 1049 cross_attn_head_mask=cross_attn_head_mask,\r\n 1050 past_key_values=past_key_values,\r\n 1051 inputs_embeds=decoder_inputs_embeds,\r\n 1052 use_cache=use_cache,\r\n 1053 output_attentions=output_attentions,\r\n 1054 output_hidden_states=output_hidden_states,\r\n 1055 return_dict=return_dict,\r\n 1056 )\r\n 1058 if not return_dict:\r\n 1059 return decoder_outputs + encoder_outputs\r\n\r\nFile ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)\r\n 1127 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1128 # this function, and just call forward.\r\n 1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1130 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1131 return forward_call(*input, **kwargs)\r\n 1132 # Do not call functions when jit is used\r\n 1133 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/transformers/src/transformers/models/whisper/modeling_whisper.py:912, in WhisperDecoder.forward(self, input_ids, attention_mask, encoder_hidden_states, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 908 return module(*inputs, output_attentions, 
use_cache)\r\n 910 return custom_forward\r\n--> 912 layer_outputs = torch.utils.checkpoint.checkpoint(\r\n 913 create_custom_forward(decoder_layer),\r\n 914 hidden_states,\r\n 915 attention_mask,\r\n 916 encoder_hidden_states,\r\n 917 head_mask[idx] if head_mask is not None else None,\r\n 918 cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None,\r\n 919 None,\r\n 920 )\r\n 921 else:\r\n 923 layer_outputs = decoder_layer(\r\n 924 hidden_states,\r\n 925 attention_mask=attention_mask,\r\n (...)\r\n 933 use_cache=use_cache,\r\n 934 )\r\n\r\nFile ~/venv/lib/python3.8/site-packages/torch/utils/checkpoint.py:235, in checkpoint(function, use_reentrant, *args, **kwargs)\r\n 232 raise ValueError(\"Unexpected keyword arguments: \" + \",\".join(arg for arg in kwargs))\r\n 234 if use_reentrant:\r\n--> 235 return CheckpointFunction.apply(function, preserve, *args)\r\n 236 else:\r\n 237 return _checkpoint_without_reentrant(\r\n 238 function,\r\n 239 preserve,\r\n 240 *args\r\n 241 )\r\n\r\nFile ~/venv/lib/python3.8/site-packages/torch/utils/checkpoint.py:96, in CheckpointFunction.forward(ctx, run_function, preserve_rng_state, *args)\r\n 93 ctx.save_for_backward(*tensor_inputs)\r\n 95 with torch.no_grad():\r\n---> 96 outputs = run_function(*args)\r\n 97 return outputs\r\n\r\nFile ~/transformers/src/transformers/models/whisper/modeling_whisper.py:908, in WhisperDecoder.forward.<locals>.create_custom_forward.<locals>.custom_forward(*inputs)\r\n 906 def custom_forward(*inputs):\r\n 907 # None for past_key_value\r\n--> 908 return module(*inputs, output_attentions, use_cache)\r\n\r\nFile ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)\r\n 1127 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1128 # this function, and just call forward.\r\n 1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1130 or 
_global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1131 return forward_call(*input, **kwargs)\r\n 1132 # Do not call functions when jit is used\r\n 1133 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/transformers/src/transformers/models/whisper/modeling_whisper.py:397, in WhisperDecoderLayer.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, layer_head_mask, cross_attn_layer_head_mask, past_key_value, output_attentions, use_cache)\r\n 393 hidden_states = self.self_attn_layer_norm(hidden_states)\r\n 395 # Self Attention\r\n 396 # decoder uni-directional self-attention cached key/values tuple is at positions 1,2\r\n--> 397 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None\r\n 398 # add present self-attn cache to positions 1,2 of present_key_value tuple\r\n 399 hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n 400 hidden_states=hidden_states,\r\n 401 past_key_value=self_attn_past_key_value,\r\n (...)\r\n 404 output_attentions=output_attentions,\r\n 405 )\r\n\r\nTypeError: 'bool' object is not subscriptable\r\n```\r\n\r\n</details>\r\n\r\nNeed to pass `encoder_attention_mask` and `past_key_value` correctly to the decoder layer..."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
cc @ArthurZucker for info
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import WhisperFeatureExtractor, WhisperConfig, WhisperForConditionalGeneration
import numpy as np
feature_extractor = WhisperFeatureExtractor()
config = WhisperConfig()
model_encoder = WhisperForConditionalGeneration(config).model.encoder
# enable checkpointing
model_encoder.gradient_checkpointing_enable()
# create dummy audio input
sample = {"array": np.ones(1000), "sampling_rate": 16000}
# pre-process audio input
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
# forward pass
outputs = model_encoder(inputs)
```
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [1], in <cell line: 18>()
15 inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
17 # forward pass
---> 18 outputs = model_encoder(inputs)
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)
1127 # If we don't have any hooks, we want to skip the rest of the logic in
1128 # this function, and just call forward.
1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1130 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1131 return forward_call(*input, **kwargs)
1132 # Do not call functions when jit is used
1133 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:682, in WhisperEncoder.forward(self, input_features, head_mask, output_attentions, output_hidden_states, return_dict)
678 return module(*inputs, output_attentions)
680 return custom_forward
--> 682 layer_outputs = torch.utils.checkpoint.checkpoint(
683 create_custom_forward(encoder_layer),
684 hidden_states,
685 None,
686 (head_mask[idx] if head_mask is not None else None),
687 )
688 else:
689 layer_outputs = encoder_layer(
690 hidden_states,
691 None,
692 layer_head_mask=(head_mask[idx] if head_mask is not None else None),
693 output_attentions=output_attentions,
694 )
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
### Expected behavior
Forward pass without any hitch-ups!
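For reference, the `AttributeError` above stems from a general Python import rule rather than anything Whisper-specific: importing a package does not bind its submodules as attributes, so `torch.utils.checkpoint` must be imported explicitly (e.g. `import torch.utils.checkpoint`) before `torch.utils.checkpoint.checkpoint(...)` can be called. A minimal stdlib sketch of the same failure mode, using `xml` as a stand-in for `torch`:

```python
import sys

# Start from a clean slate: forget any previously imported `xml`
# modules so the import below really is a fresh top-level import.
for name in [m for m in sys.modules if m == "xml" or m.startswith("xml.")]:
    del sys.modules[name]

import xml  # imports the top-level package only; submodules are NOT imported

try:
    xml.etree  # same failure mode as `torch.utils.checkpoint` above
    saw_attribute_error = False
except AttributeError:
    saw_attribute_error = True

# Explicitly importing the submodule binds it on the parent package.
import xml.etree.ElementTree

print(saw_attribute_error, hasattr(xml, "etree"))
```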
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19537/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19536
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19536/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19536/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19536/events
|
https://github.com/huggingface/transformers/pull/19536
| 1,406,504,789
|
PR_kwDOCUB6oc5AqwjF
| 19,536
|
Added type hints to `DebertaV2ForMultipleChoice` Pytorch
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 I have added the output type",
"@IMvision12 Looks perfect, thanks!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Type Hints for DebertaV2ForMultipleChoice
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19536/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19536",
"html_url": "https://github.com/huggingface/transformers/pull/19536",
"diff_url": "https://github.com/huggingface/transformers/pull/19536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19536.patch",
"merged_at": 1665669163000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19535
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19535/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19535/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19535/events
|
https://github.com/huggingface/transformers/pull/19535
| 1,406,471,484
|
PR_kwDOCUB6oc5AqpbD
| 19,535
|
Throw an error if `getattribute_from_module` can't find anything
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Throw an error if `getattribute_from_module` can't find anything - to avoid `RecursionError: maximum recursion depth exceeded while calling a Python object`.
**New error:**
```bash
ValueError: Could not find MarkupLMForMaskedLM neither <module 'transformers.models.markuplm' from '/home/yih_dar_huggingface_co/transformers-ydshieh/src/transformers/models/markuplm/__init__.py'> in nor in <module 'transformers' from 'src/transformers/__init__.py'>!
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19535/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19535",
"html_url": "https://github.com/huggingface/transformers/pull/19535",
"diff_url": "https://github.com/huggingface/transformers/pull/19535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19535.patch",
"merged_at": 1665598186000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19534
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19534/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19534/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19534/events
|
https://github.com/huggingface/transformers/pull/19534
| 1,406,460,095
|
PR_kwDOCUB6oc5AqnAb
| 19,534
|
Remove `MarkupLMForMaskedLM` from `MAPPING`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
There is no `MarkupLMForMaskedLM`.
BTW, could we check whether the arguments passed to the recursive call are identical to the inputs? If so, we should not recurse.
https://github.com/huggingface/transformers/blob/4edb3e49f6bd3d1a4f6862452ecaf07108d62ff7/src/transformers/models/auto/auto_factory.py#L548-L558
I can work on that if you are OK, @sgugger .
I freak out when I see
```bash
RecursionError: maximum recursion depth exceeded while calling a Python object
```
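The guard suggested above can be sketched in isolation. This is a hypothetical illustration, not the actual `auto_factory` code: the lookup remembers which `(namespace, name)` pairs it has already tried, so a fallback call that would repeat them raises a clear `ValueError` instead of recursing until `RecursionError`. Plain dicts with a `__parent__` key stand in for module namespaces:

```python
def resolve(namespace, name, _tried=None):
    """Look up `name` in `namespace`, falling back to its parent.

    Hypothetical sketch of the argument-identity guard: if the
    recursive fallback would be invoked with a (namespace, name)
    pair we have already tried, fail fast instead of recursing.
    """
    if _tried is None:
        _tried = set()
    key = (id(namespace), name)
    if key in _tried:
        raise ValueError(f"Lookup for {name!r} is cycling; giving up.")
    _tried.add(key)
    if name in namespace:
        return namespace[name]
    parent = namespace.get("__parent__")
    if parent is None:
        raise ValueError(f"Could not find {name!r} in any namespace.")
    return resolve(parent, name, _tried)


root = {"MarkupLMModel": "model-class"}
child = {"__parent__": root}
print(resolve(child, "MarkupLMModel"))  # found via the parent namespace

broken = {}
broken["__parent__"] = broken  # a cycle that would previously recurse forever
try:
    resolve(broken, "MarkupLMForMaskedLM")
except ValueError as err:
    print(err)
```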
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19534/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19534",
"html_url": "https://github.com/huggingface/transformers/pull/19534",
"diff_url": "https://github.com/huggingface/transformers/pull/19534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19534.patch",
"merged_at": 1665595309000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19533
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19533/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19533/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19533/events
|
https://github.com/huggingface/transformers/pull/19533
| 1,406,435,824
|
PR_kwDOCUB6oc5AqhyW
| 19,533
|
Add typing to activations.py
|
{
"login": "saksham-chawla",
"id": 51916697,
"node_id": "MDQ6VXNlcjUxOTE2Njk3",
"avatar_url": "https://avatars.githubusercontent.com/u/51916697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saksham-chawla",
"html_url": "https://github.com/saksham-chawla",
"followers_url": "https://api.github.com/users/saksham-chawla/followers",
"following_url": "https://api.github.com/users/saksham-chawla/following{/other_user}",
"gists_url": "https://api.github.com/users/saksham-chawla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saksham-chawla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saksham-chawla/subscriptions",
"organizations_url": "https://api.github.com/users/saksham-chawla/orgs",
"repos_url": "https://api.github.com/users/saksham-chawla/repos",
"events_url": "https://api.github.com/users/saksham-chawla/events{/privacy}",
"received_events_url": "https://api.github.com/users/saksham-chawla/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
# What does this PR do?
- Adds typing
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19533/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19533",
"html_url": "https://github.com/huggingface/transformers/pull/19533",
"diff_url": "https://github.com/huggingface/transformers/pull/19533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19533.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19532
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19532/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19532/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19532/events
|
https://github.com/huggingface/transformers/pull/19532
| 1,406,416,207
|
PR_kwDOCUB6oc5AqdeZ
| 19,532
|
Build Push CI images also on a daily basis
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
PR #19170 separated the images for push CI and daily CI. However, it only builds the push CI images (those with the postfix `-push-ci`) when changes in `setup.py` are detected.
**We should also re-build the push CI images on a daily basis**, as 3rd party libraries might have newer versions, like `datasets`, `tokenizers` etc.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19532/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19532",
"html_url": "https://github.com/huggingface/transformers/pull/19532",
"diff_url": "https://github.com/huggingface/transformers/pull/19532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19532.patch",
"merged_at": 1665639073000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19531
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19531/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19531/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19531/events
|
https://github.com/huggingface/transformers/pull/19531
| 1,406,326,172
|
PR_kwDOCUB6oc5AqKFe
| 19,531
|
Make `MobileBert` tokenizers independent from `Bert`
|
{
"login": "501Good",
"id": 10570950,
"node_id": "MDQ6VXNlcjEwNTcwOTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10570950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/501Good",
"html_url": "https://github.com/501Good",
"followers_url": "https://api.github.com/users/501Good/followers",
"following_url": "https://api.github.com/users/501Good/following{/other_user}",
"gists_url": "https://api.github.com/users/501Good/gists{/gist_id}",
"starred_url": "https://api.github.com/users/501Good/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/501Good/subscriptions",
"organizations_url": "https://api.github.com/users/501Good/orgs",
"repos_url": "https://api.github.com/users/501Good/repos",
"events_url": "https://api.github.com/users/501Good/events{/privacy}",
"received_events_url": "https://api.github.com/users/501Good/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Copied the code from `Bert` tokenizers into `MobileBert` tokenizers to make the latter self-contained.
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19531/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19531",
"html_url": "https://github.com/huggingface/transformers/pull/19531",
"diff_url": "https://github.com/huggingface/transformers/pull/19531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19531.patch",
"merged_at": 1665589836000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19530
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19530/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19530/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19530/events
|
https://github.com/huggingface/transformers/pull/19530
| 1,406,248,498
|
PR_kwDOCUB6oc5Ap5Rt
| 19,530
|
Update README.md
|
{
"login": "code-with-rajeev",
"id": 68783059,
"node_id": "MDQ6VXNlcjY4NzgzMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/68783059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/code-with-rajeev",
"html_url": "https://github.com/code-with-rajeev",
"followers_url": "https://api.github.com/users/code-with-rajeev/followers",
"following_url": "https://api.github.com/users/code-with-rajeev/following{/other_user}",
"gists_url": "https://api.github.com/users/code-with-rajeev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/code-with-rajeev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/code-with-rajeev/subscriptions",
"organizations_url": "https://api.github.com/users/code-with-rajeev/orgs",
"repos_url": "https://api.github.com/users/code-with-rajeev/repos",
"events_url": "https://api.github.com/users/code-with-rajeev/events{/privacy}",
"received_events_url": "https://api.github.com/users/code-with-rajeev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19530). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
Fixed a grammatical error.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19530/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19530",
"html_url": "https://github.com/huggingface/transformers/pull/19530",
"diff_url": "https://github.com/huggingface/transformers/pull/19530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19530.patch",
"merged_at": 1668407798000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19529
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19529/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19529/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19529/events
|
https://github.com/huggingface/transformers/pull/19529
| 1,406,236,086
|
PR_kwDOCUB6oc5Ap2os
| 19,529
|
Use memory efficient attention in CLIP
|
{
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Is https://github.com/huggingface/diffusers/pull/532 for CLIP in Stable Diffusion? Is it necessary to port the implementation to `transformers` **CLIP** here to make running SD more efficient ...? @patrickvonplaten is the best one to make the decision.",
"Looks like this would be a great use of [custom modeling code](https://huggingface.co/docs/transformers/custom_models#writing-a-custom-model) instead of trying to change the model code.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Note that in `transformers` we don't want to necessarily support all the optimizations natively in the core library - also because the fundamental design is different (modeling code is copied rather than abstracted like in `diffusers`). \r\n\r\nMaybe also cc @michaelbenayoun here ",
"Hi @cccntu,\r\nI agree with @patrickvonplaten here. Also, note that we have a library for optimizations called [Optimum](https://github.com/huggingface/optimum). Altough we do not support custom optimization for each modeling, I think that you might be interested in contributing optimizations such as the ones on your PR there!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds support for the CLIP model to use memory-efficient attention, similar to https://github.com/huggingface/diffusers/pull/532
~This is critical when you want to use Stable Diffusion with a large batch size, because when the UNet uses memory-efficient attention, the bottleneck becomes the initial text encoding step.~
edit: I just tested to see at what batch size the text encoder becomes the bottleneck, but it seems it isn't the bottleneck now. I'm not sure. 🤔
Anyway, with this improvement I'm able to run Stable Diffusion with batch size 128 on an RTX 3090, here: [cccntu/accelerated-stable-diffusion](https://github.com/cccntu/accelerated-stable-diffusion)
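For context, memory-efficient attention avoids materializing the full attention matrix. A minimal NumPy sketch of one simple variant of the idea — chunking the queries — follows; it is an illustration only, not the fused-kernel implementation this PR ports:

```python
import numpy as np

def attention(q, k, v):
    # standard attention: materializes the full (n_q, n_k) score matrix
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def chunked_attention(q, k, v, chunk_size=2):
    # memory-efficient variant: process queries in chunks, so only a
    # (chunk_size, n_k) slice of the score matrix exists at any time
    return np.concatenate(
        [attention(q[i : i + chunk_size], k, v) for i in range(0, len(q), chunk_size)]
    )

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
assert np.allclose(attention(q, k, v), chunked_attention(q, k, v))
```

Because softmax is applied row-wise, chunking over queries is numerically equivalent to the full computation while keeping peak memory proportional to the chunk size.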
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19529/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19529",
"html_url": "https://github.com/huggingface/transformers/pull/19529",
"diff_url": "https://github.com/huggingface/transformers/pull/19529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19529.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19528
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19528/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19528/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19528/events
|
https://github.com/huggingface/transformers/issues/19528
| 1,406,176,136
|
I_kwDOCUB6oc5T0IuI
| 19,528
|
Allow TFBertTokenizer to use Tensorflow text BertTokenizer (and not FastBertTokenizer) to make it servable by TF Serving
|
{
"login": "piEsposito",
"id": 47679710,
"node_id": "MDQ6VXNlcjQ3Njc5NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piEsposito",
"html_url": "https://github.com/piEsposito",
"followers_url": "https://api.github.com/users/piEsposito/followers",
"following_url": "https://api.github.com/users/piEsposito/following{/other_user}",
"gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions",
"organizations_url": "https://api.github.com/users/piEsposito/orgs",
"repos_url": "https://api.github.com/users/piEsposito/repos",
"events_url": "https://api.github.com/users/piEsposito/events{/privacy}",
"received_events_url": "https://api.github.com/users/piEsposito/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### Feature request
I would like to serve a bundle of Tokenizer + Model on TF Serving, but can't, because TF Serving still has no support for the TF FastBertTokenizer and FastBertNormalize operations (https://github.com/tensorflow/serving/issues/2064).
It would be good if we could let [TFBertTokenizer](https://github.com/huggingface/transformers/blob/4ed0fa3676ad8900eaa982a6c5c2ad6b75c8ea46/src/transformers/models/bert/tokenization_bert_tf.py) give the user an option not to use the TensorFlow FastBertTokenizer when creating a TFBertTokenizer, so that it is servable on TF Serving.
It would consist of moving (or creating an option to change) this
https://github.com/huggingface/transformers/blob/4ed0fa3676ad8900eaa982a6c5c2ad6b75c8ea46/src/transformers/models/bert/tokenization_bert_tf.py#L67-L69
To this:
```python
import tensorflow as tf

# imported under a different name to avoid a naming collision with transformers' BertTokenizer
from tensorflow_text import BertTokenizer as TFBertTokenizerLayer
lookup_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(
keys=vocab_list,
key_dtype=tf.string,
values=tf.range(
tf.size(vocab_list, out_type=tf.int64), dtype=tf.int64),
value_dtype=tf.int64
),
num_oov_buckets=1
)
self.tf_tokenizer = TFBertTokenizerLayer(
lookup_table, token_out_type=tf.int64, lower_case=do_lower_case
)
```
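For readers unfamiliar with `tf.lookup.StaticVocabularyTable`: known tokens map to their vocabulary index, and unknown tokens hash into one of `num_oov_buckets` extra slots appended after the vocabulary. A tiny pure-Python sketch of those semantics (the names `vocab` and `lookup` here are illustrative, not TensorFlow API):

```python
vocab = ["[PAD]", "[UNK]", "hello", "world"]
num_oov_buckets = 1

def lookup(token):
    # mimics StaticVocabularyTable: in-vocab tokens get their index...
    if token in vocab:
        return vocab.index(token)
    # ...and out-of-vocab tokens hash into a bucket appended after the
    # vocabulary; with a single bucket every unknown token maps to len(vocab)
    return len(vocab) + hash(token) % num_oov_buckets

assert lookup("hello") == 2
assert lookup("not-in-vocab") == len(vocab)
```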
### Motivation
I would like to serve a bundle of Tokenizer + Model on TF Serving, but can't, because TF Serving still has no support for the TF FastBertTokenizer and FastBertNormalize operations (https://github.com/tensorflow/serving/issues/2064).
As this library moves much faster than TF Serving on this kind of issue, I thought it was worth trying to solve it from here.
### Your contribution
I can definitely submit a PR with that if you approve the idea.
EDIT: I've created https://github.com/huggingface/transformers/pull/19590 to showcase the idea.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19528/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19528/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19527
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19527/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19527/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19527/events
|
https://github.com/huggingface/transformers/pull/19527
| 1,406,168,258
|
PR_kwDOCUB6oc5Apn-n
| 19,527
|
[Whisper] Freeze params of encoder
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,687
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a method to Whisper to freeze the parameters of the encoder and two associated tests.
API:
```python
whisper_model.freeze_encoder()
```
This is in line with Wav2Vec2, where we freeze the feature encoder with the method [`.freeze_feature_encoder()`](https://github.com/huggingface/transformers/blob/bbd150e92f84db72e7507d0c3ce69474b2948839/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1220):
```python
wav2vec2_model.freeze_feature_encoder()
```
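Freezing here is the usual PyTorch pattern of switching `requires_grad` off for the encoder's parameters. A toy sketch of that pattern (a hypothetical stand-in module, not the actual Whisper code):

```python
import torch
from torch import nn

class TinySeq2Seq(nn.Module):
    # hypothetical stand-in for a model with an encoder/decoder pair
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(4, 4)
        self.decoder = nn.Linear(4, 4)

    def freeze_encoder(self):
        # disable gradients for every encoder parameter; the optimizer
        # will then leave these weights untouched during training
        for param in self.encoder.parameters():
            param.requires_grad = False

model = TinySeq2Seq()
model.freeze_encoder()
assert not any(p.requires_grad for p in model.encoder.parameters())
assert all(p.requires_grad for p in model.decoder.parameters())
```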
## Sanity check
```python
from transformers import WhisperConfig, WhisperForConditionalGeneration
config = WhisperConfig()
model = WhisperForConditionalGeneration(config)
# check if params are frozen
encoder_grads = [param.requires_grad for param in model.model.encoder.parameters()]
decoder_grads = [param.requires_grad for param in model.model.decoder.parameters()]
print("Before freezing encoder...")
print(f"All encoder params trainable: {all(encoder_grads)}")
print(f"All decoder params trainable: {all(decoder_grads)}")
# freeze params of encoder
model.freeze_encoder()
# check if params are frozen
encoder_grads = [param.requires_grad for param in model.model.encoder.parameters()]
decoder_grads = [param.requires_grad for param in model.model.decoder.parameters()]
print("After freezing encoder...")
print(f"All encoder params trainable: {all(encoder_grads)}")
print(f"All decoder params trainable: {all(decoder_grads)}")
```
```
Before freezing encoder...
All encoder params trainable: True
All decoder params trainable: True
After freezing encoder...
All encoder params trainable: False
All decoder params trainable: True
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19527/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19527",
"html_url": "https://github.com/huggingface/transformers/pull/19527",
"diff_url": "https://github.com/huggingface/transformers/pull/19527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19527.patch",
"merged_at": 1665651003000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19526
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19526/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19526/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19526/events
|
https://github.com/huggingface/transformers/pull/19526
| 1,406,167,512
|
PR_kwDOCUB6oc5Apn0f
| 19,526
|
Fix MarkupLMProcessor option flag in MarkupLMProcessor documentation
|
{
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a mistake in the docs for MarkupLMProcessor for use case 5. Currently the heading says `apply_ocr=False`; I believe this should be `parse_html=False`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
cc @NielsRogge @SaulLu @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19526/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19526",
"html_url": "https://github.com/huggingface/transformers/pull/19526",
"diff_url": "https://github.com/huggingface/transformers/pull/19526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19526.patch",
"merged_at": 1665580128000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19525
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19525/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19525/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19525/events
|
https://github.com/huggingface/transformers/pull/19525
| 1,406,110,689
|
PR_kwDOCUB6oc5Apbcw
| 19,525
|
Added onnx config whisper
|
{
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lewtun @echarlaix The bug is fixed now. The incorrect results were because the export was happening with seqlength 1 due to typo in onnx config generate_dummy_input function.",
"Thanks for the review @sgugger and catching those last issues - I've checked the latest changes and think this can now be merged @mht-sharma ",
"Hello,\r\n\r\nI tried to generate onnx model using [docs](https://huggingface.co/docs/transformers/serialization#configuration-based-approach),\r\nTo inference, I passed audio features and decoder_input_id , and the output was two array with the shape of (1, 2, 768), (1, 1500, 768). Could you please help me how should I use these outputs for generating transcription? \r\n\r\nThank you.",
"Hi @zara0m please follow the example in this PR for export and inference using ONNX model. https://github.com/huggingface/optimum/pull/420",
"> Hi @zara0m please follow the example in this PR for export and inference using ONNX model. [huggingface/optimum#420](https://github.com/huggingface/optimum/pull/420)\r\n\r\nThank you very much for your help and quick response!\r\n\r\nI tested it with base and small model for some non-english audios, but the outputs were not similar to whisper model, or maybe it translates instead of transcribing, how can I fix this?\r\nAlso, is there any way that I can have begin/end time of each sentence? (like the transcribe function of whisper model)\r\n\r\nThank you."
] | 1,665
| 1,675
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
This PR adds an ONNX config and helper functions for export to ONNX via Optimum and transformers.onnx
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19525/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19525/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19525",
"html_url": "https://github.com/huggingface/transformers/pull/19525",
"diff_url": "https://github.com/huggingface/transformers/pull/19525.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19525.patch",
"merged_at": 1667303443000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19524
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19524/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19524/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19524/events
|
https://github.com/huggingface/transformers/pull/19524
| 1,406,068,692
|
PR_kwDOCUB6oc5ApSMU
| 19,524
|
Bart configuration update
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @ydshieh! It turned out that copy-pasting the \"(with random weights)\" text was causing the check_code_quality test failure. But unfortunately I don't know what is causing this ConnectionResetError: [Errno 104] while building PR Documentation",
"> Hey @ydshieh! It turned out that copy-pasting the \"(with random weights)\" text was causing the check_code_quality test failure.\r\nGlad it works now 💯 Thank you!\r\n\r\n\r\n> But unfortunately I don't know what is causing this ConnectionResetError: [Errno 104] while building PR Documentation\r\nYou can ignore this :-)\r\n",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Update to bart config.

Tests passed

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19524/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19524",
"html_url": "https://github.com/huggingface/transformers/pull/19524",
"diff_url": "https://github.com/huggingface/transformers/pull/19524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19524.patch",
"merged_at": 1665580306000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19523
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19523/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19523/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19523/events
|
https://github.com/huggingface/transformers/pull/19523
| 1,406,058,611
|
PR_kwDOCUB6oc5ApP-x
| 19,523
|
[X-CLIP] Fix doc tests
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #19513
This PR fixes X-CLIP's AutoProcessor and adds doc tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19523/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19523",
"html_url": "https://github.com/huggingface/transformers/pull/19523",
"diff_url": "https://github.com/huggingface/transformers/pull/19523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19523.patch",
"merged_at": 1665587113000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19522
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19522/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19522/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19522/events
|
https://github.com/huggingface/transformers/pull/19522
| 1,406,043,948
|
PR_kwDOCUB6oc5ApMwd
| 19,522
|
Update configuration_bart.py
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Updated bart config in accordance with this issue:

Tests passed

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19522/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19522",
"html_url": "https://github.com/huggingface/transformers/pull/19522",
"diff_url": "https://github.com/huggingface/transformers/pull/19522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19522.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19521
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19521/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19521/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19521/events
|
https://github.com/huggingface/transformers/pull/19521
| 1,406,010,281
|
PR_kwDOCUB6oc5ApFTI
| 19,521
|
[Whisper] Don't return attention mask in feat extractor
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Whisper pads all audio inputs to a fixed max length (=30s) by appending the silence token (zero) to the end of any sequences shorter than the max length. Hence, the model does **not** use an attention mask: all inputs have length 30s, padding is treated through use of the silence token rather than an attention mask.
This PR sets the default value of `return_attention_mask` to `False` in the feature extractor. In doing so, feature extractor methods such as `__call__` and `pad` will **not** return an `attention_mask` by default.
This behaviour can be overridden by passing the arg `return_attention_mask=True` to these methods.
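The fixed-length, silence-token padding described above can be sketched in plain numpy (names and constants here are illustrative, not the actual feature-extractor code; Whisper uses 16 kHz audio and a 30s window):

```python
import numpy as np

SAMPLING_RATE = 16000        # Hz, Whisper's expected sampling rate
MAX_LENGTH_S = 30            # fixed window length in seconds
MAX_SAMPLES = SAMPLING_RATE * MAX_LENGTH_S  # 480000 samples

def pad_to_fixed_length(audio: np.ndarray) -> np.ndarray:
    """Pad (or truncate) a 1-D waveform to exactly 30s, using zeros (silence)."""
    if len(audio) >= MAX_SAMPLES:
        return audio[:MAX_SAMPLES]
    # append zeros to the end; no attention mask is produced
    return np.pad(audio, (0, MAX_SAMPLES - len(audio)))

short = np.ones(SAMPLING_RATE, dtype=np.float32)  # 1s of dummy audio
padded = pad_to_fixed_length(short)
print(padded.shape)  # (480000,)
```

Because every input ends up exactly 30s long with silence at the end, the model needs no separate mask to distinguish real audio from padding, which is why the feature extractor can default to `return_attention_mask=False`.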
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19521/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19521",
"html_url": "https://github.com/huggingface/transformers/pull/19521",
"diff_url": "https://github.com/huggingface/transformers/pull/19521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19521.patch",
"merged_at": 1665754564000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19520
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19520/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19520/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19520/events
|
https://github.com/huggingface/transformers/pull/19520
| 1,405,968,242
|
PR_kwDOCUB6oc5Ao8IK
| 19,520
|
Remove bert fast dependency from electra
|
{
"login": "Threepointone4",
"id": 22583613,
"node_id": "MDQ6VXNlcjIyNTgzNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22583613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Threepointone4",
"html_url": "https://github.com/Threepointone4",
"followers_url": "https://api.github.com/users/Threepointone4/followers",
"following_url": "https://api.github.com/users/Threepointone4/following{/other_user}",
"gists_url": "https://api.github.com/users/Threepointone4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Threepointone4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Threepointone4/subscriptions",
"organizations_url": "https://api.github.com/users/Threepointone4/orgs",
"repos_url": "https://api.github.com/users/Threepointone4/repos",
"events_url": "https://api.github.com/users/Threepointone4/events{/privacy}",
"received_events_url": "https://api.github.com/users/Threepointone4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
- Related to #19303
- Removed `Bert` fast dependency from `Electra` code base.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19520/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19520",
"html_url": "https://github.com/huggingface/transformers/pull/19520",
"diff_url": "https://github.com/huggingface/transformers/pull/19520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19520.patch",
"merged_at": 1665584078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19519
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19519/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19519/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19519/events
|
https://github.com/huggingface/transformers/pull/19519
| 1,405,963,075
|
PR_kwDOCUB6oc5Ao6_d
| 19,519
|
[Examples] Generalise Seq2Seq ASR to handle Whisper
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint.",
"@sgugger this one's ready to go! Just an FYI in-case you wanted to take a look :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
Generalises `run_speech_recognition_seq2seq.py` to handle Whisper.
To train the "tiny.en" model on LibriSpeech dummy:
<details>
<summary> Bash script </summary>
```
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_seq2seq.py \
--dataset_name="hf-internal-testing/librispeech_asr_dummy" \
--model_name_or_path="openai/whisper-tiny.en" \
--dataset_config_name="clean" \
--train_split_name="validation" \
--eval_split_name="validation" \
--output_dir="./" \
--preprocessing_num_workers="1" \
--length_column_name="input_length" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="text" \
--save_strategy="no" \
--evaluation_strategy="epoch" \
--logging_steps="10" \
--save_total_limit="1" \
--generation_max_length="40" \
--generation_num_beams="1" \
--fp16 \
--gradient_checkpointing \
--group_by_length \
--predict_with_generate \
--do_train --do_eval \
--do_lower_case
```
</details>
To train the "medium.en" model on LibriSpeech 960h:
<details>
<summary> Bash script </summary>
```
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_seq2seq.py \
--model_name_or_path="openai/whisper-medium.en" \
--dataset_name="librispeech_asr" \
--dataset_config_name="all" \
--train_split_name="train.clean.100+train.clean.360+train.other.500" \
--eval_split_name="validation.clean" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-librispeech" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--generation_num_beams="1" \
--length_column_name="input_length" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--predict_with_generate \
--use_auth_token
```
</details>
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19519/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19519/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19519",
"html_url": "https://github.com/huggingface/transformers/pull/19519",
"diff_url": "https://github.com/huggingface/transformers/pull/19519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19519.patch",
"merged_at": 1668447946000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19518
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19518/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19518/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19518/events
|
https://github.com/huggingface/transformers/pull/19518
| 1,405,923,715
|
PR_kwDOCUB6oc5AoyXc
| 19,518
|
Fix whisper doc
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fixes the Whisper forward-pass documentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19518/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19518",
"html_url": "https://github.com/huggingface/transformers/pull/19518",
"diff_url": "https://github.com/huggingface/transformers/pull/19518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19518.patch",
"merged_at": 1665571471000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19517
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19517/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19517/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19517/events
|
https://github.com/huggingface/transformers/pull/19517
| 1,405,921,367
|
PR_kwDOCUB6oc5Aox2z
| 19,517
|
Update configuration_bart.py
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19517). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Update following #19487 issue

Tests passed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19517/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19517",
"html_url": "https://github.com/huggingface/transformers/pull/19517",
"diff_url": "https://github.com/huggingface/transformers/pull/19517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19517.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19516
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19516/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19516/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19516/events
|
https://github.com/huggingface/transformers/pull/19516
| 1,405,853,929
|
PR_kwDOCUB6oc5Aojao
| 19,516
|
Fix bugs when LayoutLMv3Tokenizer use model microsoft/layoutlmv3-base-chinese
|
{
"login": "rogerdehe",
"id": 6434311,
"node_id": "MDQ6VXNlcjY0MzQzMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6434311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rogerdehe",
"html_url": "https://github.com/rogerdehe",
"followers_url": "https://api.github.com/users/rogerdehe/followers",
"following_url": "https://api.github.com/users/rogerdehe/following{/other_user}",
"gists_url": "https://api.github.com/users/rogerdehe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rogerdehe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rogerdehe/subscriptions",
"organizations_url": "https://api.github.com/users/rogerdehe/orgs",
"repos_url": "https://api.github.com/users/rogerdehe/repos",
"events_url": "https://api.github.com/users/rogerdehe/events{/privacy}",
"received_events_url": "https://api.github.com/users/rogerdehe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi,\r\n\r\nCould you try using [LayoutXLMTokenizer](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizer)/[LayoutXLMTokenizerFast](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizerFast) instead?\r\n\r\nNormally, these should be compatible with microsoft/layoutlmv3-base-chinese.",
"> Hi,\r\n> \r\n> Could you try using [LayoutXLMTokenizer](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizer)/[LayoutXLMTokenizerFast](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizerFast) instead?\r\n> \r\n> Normally, these should be compatible with microsoft/layoutlmv3-base-chinese.\r\n\r\n@NielsRogge sorry, I didn't find that `LayoutXLMTokenizer` is used for this model, maybe this will make code easier,but maybe more document is better. I will close this pr"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
# What does this PR do?
`microsoft/layoutlmv3-base-chinese` **uses the `sentencepiece` tokenizer** file "sentencepiece.bpe.model" from XLMRoberta instead of "tokenizer.json". When `LayoutLMv3Tokenizer` loads this model with `LayoutLMv3Tokenizer.from_pretrained("microsoft/layoutlmv3-base-chinese")`, it raises the following exception:
```
[/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv3/tokenization_layoutlmv3.py](https://localhost:8080/#) in __init__(self, vocab_file, merges_file, errors, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, add_prefix_space, cls_token_box, sep_token_box, pad_token_box, pad_token_label, only_label_first_subword, **kwargs)
322 )
323
--> 324 with open(vocab_file, encoding="utf-8") as vocab_handle:
325 self.encoder = json.load(vocab_handle)
326 self.decoder = {v: k for k, v in self.encoder.items()}
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
I have tried to fix this bug by merging code from `XLMRobertaTokenizer`.
Also, since `LayoutLMv3Tokenizer` changed, `LayoutLMv3Converter` should be updated as well so it can convert to the fast tokenizer.
I have written a new test file to test the Chinese tokenizer.
!!!NOTICE
I have found some differences when processing `chinese` because of `sentencepiece`.
For example, when we process an English document, we may have a bounding-box text `hello word` consisting of the words `["hello", "word"]`; tokenizing by those words gives token ids `[[42891], [14742]]`, which is absolutely fine.
But if we have a Chinese bounding-box text `汇丰` consisting of the words `["汇", "丰"]` and tokenize by those words, we get token ids `[[6, 47360], [6, 49222]]`, which is a little strange: **both token id lists contain `6`, and token id `6` is "▁" (U+2581), but we did not input `▁`**. This is because **sentencepiece adds a space at the beginning** (see [this](https://github.com/google/sentencepiece/issues/15)).
So the important thing is that **when using the Chinese tokenizer, you should not use the bounding-box words as input; use the bounding-box text as input**, here `["汇丰"]`
```python
from transformers import AutoTokenizer
eng_tok = AutoTokenizer.from_pretrained("roberta-base") # microsoft/layoutlmv3-base refers to roberta-base
eng_tok(["hello", "word"], add_special_tokens=False)["input_ids"] # [[42891], [14742]]
xlm_tok = AutoTokenizer.from_pretrained("xlm-roberta-base") # microsoft/layoutlmv3-base-chinese refers to xlm-roberta-base
xlm_tok(["汇", "丰"], add_special_tokens=False)["input_ids"] # [[6, 47360], [6, 49222]]
xlm_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
xlm_tok(["汇丰"], add_special_tokens=False)["input_ids"] # [[6, 47360, 49222]]
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19516/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19516",
"html_url": "https://github.com/huggingface/transformers/pull/19516",
"diff_url": "https://github.com/huggingface/transformers/pull/19516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19516.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19515
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19515/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19515/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19515/events
|
https://github.com/huggingface/transformers/issues/19515
| 1,405,804,108
|
I_kwDOCUB6oc5Tyt5M
| 19,515
|
There is a type annotation error
|
{
"login": "PVMPATCH",
"id": 92843246,
"node_id": "U_kgDOBYis7g",
"avatar_url": "https://avatars.githubusercontent.com/u/92843246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PVMPATCH",
"html_url": "https://github.com/PVMPATCH",
"followers_url": "https://api.github.com/users/PVMPATCH/followers",
"following_url": "https://api.github.com/users/PVMPATCH/following{/other_user}",
"gists_url": "https://api.github.com/users/PVMPATCH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PVMPATCH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PVMPATCH/subscriptions",
"organizations_url": "https://api.github.com/users/PVMPATCH/orgs",
"repos_url": "https://api.github.com/users/PVMPATCH/repos",
"events_url": "https://api.github.com/users/PVMPATCH/events{/privacy}",
"received_events_url": "https://api.github.com/users/PVMPATCH/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting. Note that we do not maintain the examples lying in the research project folder, and we only use type annotations for documentation purpose, not for type-checkers.",
"@sgugger Will you have any comming project related to vision transformer that I can contribute to. ",
"Can i get this issue?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
Transformer 4.22.0
Ubuntu 20.04
Python 3.10.6
### Who can help?
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The type annotation bug is [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/finetune_rag.py#:~:text=model%3A%20GenerativeQAModule%20%3D%20GenerativeQAModule(args)).
Annotating `model` as type `GenerativeQAModule` at line 586 violates the rules of type annotation,
because `model` was already bound earlier, as a parameter defaulting to `None`, at line 536:
```python
#transformers/examples/research_projects/rag/finetune_rag.py #Line 536
def main(args=None, model=None) -> GenerativeQAModule:
...
if model is None:
model: GenerativeQAModule = GenerativeQAModule(args)
```
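One way to satisfy a checker like Pyre is to put the type on the parameter instead of re-annotating the local name when rebinding it. A minimal self-contained sketch of that pattern (the `GenerativeQAModule` stub below is a stand-in for the real class, not the actual implementation):

```python
from typing import Optional

class GenerativeQAModule:
    # Stand-in stub for the real class in finetune_rag.py.
    def __init__(self, args=None):
        self.args = args

def main(args=None, model: Optional[GenerativeQAModule] = None) -> GenerativeQAModule:
    # The parameter annotation fixes the variable's type once; no second
    # annotation is needed (or allowed) when rebinding the name below.
    if model is None:
        model = GenerativeQAModule(args)
    return model
```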
I found this defect using the [Pyre](https://pyre-check.org/docs/getting-started/) type checker:
`pyre init`
`pyre`
### Expected behavior
I expected no defect to be reported.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19515/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19514
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19514/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19514/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19514/events
|
https://github.com/huggingface/transformers/pull/19514
| 1,405,799,851
|
PR_kwDOCUB6oc5AoX-w
| 19,514
|
[Examples] Fix typos in run speech recognition seq2seq
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes minor typos in comments of `run_speech_recognition_seq2seq`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19514/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19514",
"html_url": "https://github.com/huggingface/transformers/pull/19514",
"diff_url": "https://github.com/huggingface/transformers/pull/19514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19514.patch",
"merged_at": 1665585203000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19513
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19513/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19513/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19513/events
|
https://github.com/huggingface/transformers/issues/19513
| 1,405,776,039
|
I_kwDOCUB6oc5TynCn
| 19,513
|
X-CLIP example error
|
{
"login": "Bing-su",
"id": 37621276,
"node_id": "MDQ6VXNlcjM3NjIxMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37621276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bing-su",
"html_url": "https://github.com/Bing-su",
"followers_url": "https://api.github.com/users/Bing-su/followers",
"following_url": "https://api.github.com/users/Bing-su/following{/other_user}",
"gists_url": "https://api.github.com/users/Bing-su/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bing-su/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bing-su/subscriptions",
"organizations_url": "https://api.github.com/users/Bing-su/orgs",
"repos_url": "https://api.github.com/users/Bing-su/repos",
"events_url": "https://api.github.com/users/Bing-su/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bing-su/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
google colab
python=3.7.14
transformers=4.23.1
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
example at <https://huggingface.co/docs/transformers/main/model_doc/xclip#transformers.XCLIPModel>
```python
from PIL import Image
import requests
from transformers import XCLIPProcessor, XCLIPModel
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_video = outputs.logits_per_video # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
```
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-3-bb99bb9a026f>](https://localhost:8080/#) in <module>
10
11 inputs = processor(
---> 12 text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
13 )
14
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2776 return_length=return_length,
2777 verbose=verbose,
-> 2778 **kwargs,
2779 )
2780
TypeError: _batch_encode_plus() got an unexpected keyword argument 'images'
```
- change `images` -> `videos`
```python
from PIL import Image
import requests
from transformers import XCLIPProcessor, XCLIPModel
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# change images -> videos
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], videos=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_video = outputs.logits_per_video # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
```
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-8c627970ac41>](https://localhost:8080/#) in <module>
11 # change images -> videos
12 inputs = processor(
---> 13 text=["a photo of a cat", "a photo of a dog"], videos=image, return_tensors="pt", padding=True
14 )
15
1 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/videomae/feature_extraction_videomae.py](https://localhost:8080/#) in __call__(self, videos, return_tensors, **kwargs)
147 if not valid_videos:
148 raise ValueError(
--> 149 "Videos must of type `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]` (single"
150 " example), `List[List[PIL.Image.Image]]`, `List[List[np.ndarray]]`, `List[List[torch.Tensor]]` (batch"
151 " of examples)."
ValueError: Videos must of type `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]` (single example), `List[List[PIL.Image.Image]]`, `List[List[np.ndarray]]`, `List[List[torch.Tensor]]` (batch of examples).
```
- change `videos=image` -> `videos=[image]`
```python
from PIL import Image
import requests
from transformers import XCLIPProcessor, XCLIPModel
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# change videos=image -> videos=[image]
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], videos=[image], return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_video = outputs.logits_per_video # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-5-cbcb07e98104>](https://localhost:8080/#) in <module>
14 )
15
---> 16 outputs = model(**inputs)
17 logits_per_video = outputs.logits_per_video # this is the video-text similarity score
18 probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
7 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/x_clip/modeling_x_clip.py](https://localhost:8080/#) in forward(self, hidden_states, attention_mask, causal_attention_mask, output_attentions)
435 batch_size = batch_time // self.num_frames
436 msg_token = self.message_fc(hidden_states[:, 0, :])
--> 437 msg_token = msg_token.view(batch_size, self.num_frames, hidden_size)
438
439 msg_token = msg_token + self.drop_path(self.message_attn(self.message_ln(msg_token))[0])
RuntimeError: shape '[0, 8, 768]' is invalid for input of size 768
```
[colab](https://colab.research.google.com/drive/1Qq8qTx1SWsdEE4PC3h6Guta8lrRQjVLk?usp=sharing)
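The final `RuntimeError` can be explained by the frame count: the X-CLIP vision encoder flattens the batch and frame axes together and then recovers the batch size by integer division with the configured number of frames (8 for `microsoft/xclip-base-patch32`), so a single image yields a batch size of zero. A plain-Python sketch of that arithmetic (shapes only, no model needed):

```python
# Shapes only -- this reproduces the arithmetic behind
# "shape '[0, 8, 768]' is invalid for input of size 768".
num_frames = 8            # frames per video for xclip-base-patch32
hidden_size = 768

batch_time = 1            # one PIL image passed instead of an 8-frame video
batch_size = batch_time // num_frames
print(batch_size)         # 0 -> view(0, 8, 768) cannot hold 768 elements

batch_time = 8            # a proper 8-frame video in the batch
batch_size = batch_time // num_frames
print(batch_size)         # 1 -> view(1, 8, 768) matches 8 * 768 elements
```

So passing a batch of one 8-frame video, e.g. `videos=[[image] * 8]` (untested here), should at least give the model shapes it accepts.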
### Expected behavior
The documented example should run without errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19513/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19512
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19512/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19512/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19512/events
|
https://github.com/huggingface/transformers/pull/19512
| 1,405,771,699
|
PR_kwDOCUB6oc5AoR_J
| 19,512
|
[FLAX] Whisper
|
{
"login": "kamalkraj",
"id": 17096858,
"node_id": "MDQ6VXNlcjE3MDk2ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamalkraj",
"html_url": "https://github.com/kamalkraj",
"followers_url": "https://api.github.com/users/kamalkraj/followers",
"following_url": "https://api.github.com/users/kamalkraj/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions",
"organizations_url": "https://api.github.com/users/kamalkraj/orgs",
"repos_url": "https://api.github.com/users/kamalkraj/repos",
"events_url": "https://api.github.com/users/kamalkraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamalkraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19512). All of your documentation changes will be reflected on that endpoint.",
"Hi,\r\n\r\nI need little clarification about implementing the `FlaxWhisperDecoder` Module.\r\n\r\nWhat would be the best way to pass `past_key_values_length` to the module?\r\n \r\nReference in Pytorch implementation. \r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L863-L873 \r\n\r\n@patrickvonplaten @ydshieh @patil-suraj ",
"Whisper on TPU will make :fire: colab demos",
"\r\n",
"Awesome work here! Feel free to ping me for a review once it is ready 😄 ",
"Hi,\r\n\r\nI have finished the model and working on the test cases now. \r\nThe pt<->flax equivalence test is failing, even though the `model.generate` produce the exact speech-to-text like the PyTorch model. \r\n\r\n\r\n\r\n\r\nI have attached steps to reproduce the issue in this notebook - https://colab.research.google.com/drive/1KmO8OBUpHfs1uYA_eSwamQAXnjsdbkRS?usp=sharing\r\n\r\nAny pointers will be helpful.\r\n\r\nThanks\r\n\r\n@patrickvonplaten @patil-suraj @ydshieh @ArthurZucker ",
"Hi @kamalkraj First, thank you for this awesome PR!\r\n\r\nRegarding the PT/Flax tests, I probably need to improve that PT/Flax equivalence tests to make it (a bit) easier to find out which layers gives the larger difference.\r\n\r\nIn the meantime, I have to say there is no easy way to debug such issue. We need patience to find out at which layer(s) we have the first large difference (greater than the tolerance) and see what's wrong inside that layer.\r\n\r\nThis is usually tedious and involving manually debugging process.\r\n\r\nAnyway, I can open a PR to make the process (a bit) easier - if you want to wait a bit. But notice that we still need similar process even that PR is merged.",
"Will try to get #18420 merged so that we can maybe use the `find_pt_fx_differences(pt_outputs, fx_outputs)` function! But in the mean time, you should set `output_hidden_states=True` and check where the lists differ 🤗 ",
"Hi @kamalkraj Actually that test is quite good enough, but we need to change a bit to debug more.\r\n\r\nThe last 2 commit in this [branch](https://github.com/huggingface/transformers/commits/temp-debug-whisper-flax) could log more information.\r\n\r\nIf you run the tests like\r\n\r\n```bash\r\nRUN_PT_FLAX_CROSS_TESTS=true python3 -m pytest -v tests/models/whisper/test_modeling_flax_whisper.py -k \"test_equivalence_pt_to_flax\"\r\n```\r\nit logs something\r\n```\r\nmax diff. in outputs.logits: 0.0020506680011749268\r\n```\r\nbut it doesn't fail the test -> it continues. So far, I got\r\n\r\n```bash\r\nE AssertionError: <class 'list'> != <class 'tuple'> : outputs.decoder_hidden_states: Output types differ between Flax and PyTorch\r\n```\r\nso you will have to look the output type of `decoder_hidden_states` and make sure the type is the same as the PyTorch one.\r\nContinue this process will eventually show you all the difference, and you can get a better idea where to debug in the modeling code.\r\n\r\nAlso, it seems when running the tests from `tests/models/whisper/test_modeling_whisper.py`, we have some shape issue. This is another thing to debug.\r\n\r\nHopefully this gives you some idea of how we can debug here 🤗 \r\n\r\n",
"Thanks, @ydshieh and @ArthurZucker\r\n\r\n",
"To make for a more consistent API across models, couldn't we swap out `past_key_values_length` and instead compute `position_ids` to get the current positional embeddings for the decoder? It feels like this would make it easier to fit Whisper in with other finetuning codebases (no need to create custom logic for computing `past_key_values_length` when dealing with Whisper). As the code currently stands, I think it would actually give incorrect outputs when decoding a batch when each element of the batch has different decoder prefix/prompt tokens. Computing position ids from the attention mask would also allow for either left or right padding. \r\n\r\nI have another Flax Whisper implementation with .from_pretrained(..., from_pt=True) working correctly and it giving correct outputs for variable length prompts that I'd be happy to share (or create a separate PR for). It also adds some stuff to the generation utilities to support prompt tokens to the decoder that already exist in the PyTorch utilities (using prompt tokens instead of `model.config.decoder_start_token_id` if specified).",
"I haven't look into this. But @andyehrenberg do you suggest a different way of computation in Flax Whisper than the one implemented in our PyTorch/TensorFlow Whisper?\r\n\r\nIt's also better for @kamalkraj to express if he would like to continue this PR before we go ahead.",
"@ydshieh @andyehrenberg\r\n\r\nIf there is already a working implementation, please continue.\r\nI am closing this one. \r\n\r\nThanks",
"@ydshieh I guess what I'm suggesting for this could also be helpful for the PyTorch/TF implementations to improve flexibility/compatibility with existing codebases that use `position_ids` for other models (such as when finetuning).\r\n\r\nFor example, the use-case I'm working on is fine-tuning Whisper with RL (trying to expose it to its own outputs to reduce hallucinations). At each step when collecting rollouts, it is given a batch of audio features and decoder prompts (from previous audio snippets) - these prompts are of varying lengths, so padding/attention masks are needed, and the position embeddings need to adjust accordingly. And then when doing PPO updates on these steps, the position embeddings need to be computed correctly based off of which timesteps (tokens) are padding.\r\n\r\nThe implementation in this PR wouldn't accommodate this scenario as it assumes the same `past_key_values_length` for each sequence in the batch, whereas the implementation I've worked on uses `position_ids` to keep track of where we are in each sequence of the batch. Earlier I had use a different method that only used the attention mask along with another caching method in the decoder, but using position_ids is much simpler and accommodates multiple padding schemes more simply."
] | 1,665
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19512/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19512/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19512",
"html_url": "https://github.com/huggingface/transformers/pull/19512",
"diff_url": "https://github.com/huggingface/transformers/pull/19512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19512.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19511
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19511/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19511/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19511/events
|
https://github.com/huggingface/transformers/issues/19511
| 1,405,679,203
|
I_kwDOCUB6oc5TyPZj
| 19,511
|
ERNIE and tensorflow2
|
{
"login": "Smile-L-up",
"id": 42887193,
"node_id": "MDQ6VXNlcjQyODg3MTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/42887193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Smile-L-up",
"html_url": "https://github.com/Smile-L-up",
"followers_url": "https://api.github.com/users/Smile-L-up/followers",
"following_url": "https://api.github.com/users/Smile-L-up/following{/other_user}",
"gists_url": "https://api.github.com/users/Smile-L-up/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Smile-L-up/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Smile-L-up/subscriptions",
"organizations_url": "https://api.github.com/users/Smile-L-up/orgs",
"repos_url": "https://api.github.com/users/Smile-L-up/repos",
"events_url": "https://api.github.com/users/Smile-L-up/events{/privacy}",
"received_events_url": "https://api.github.com/users/Smile-L-up/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"cc @amyeroberts ",
"Hi @Smile-L-up. Thanks for raising this issue. \r\n\r\nYou can check if there's any ongoing work to port a model to TensorFlow, by searching the [open issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+ernie) and [PRs](https://github.com/huggingface/transformers/pulls?q=is%3Apr+is%3Aopen+ernie). I don't think there's any plans or ongoing work to port ERNIE. \r\n\r\nIf you're interested in adding the model yourself, we have a great guide showing all the steps [here](https://huggingface.co/docs/transformers/v4.23.1/en/add_tensorflow_model) and we're of course happy to help along the way. "
] | 1,665
| 1,665
| null |
NONE
| null |
### Feature request
Checking the transformers documentation, I found that only an `ErnieModel` supporting PyTorch exists. Is there any plan to release a `TFErnieModel` later?
### Motivation
I am using the tensorflow2 framework and want to use ERNIE.
### Your contribution
sorry.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19511/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19510
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19510/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19510/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19510/events
|
https://github.com/huggingface/transformers/issues/19510
| 1,405,609,582
|
I_kwDOCUB6oc5Tx-Zu
| 19,510
|
RFC: Add quantization capability to the Transformers Trainer API
|
{
"login": "ftian1",
"id": 16394660,
"node_id": "MDQ6VXNlcjE2Mzk0NjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16394660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ftian1",
"html_url": "https://github.com/ftian1",
"followers_url": "https://api.github.com/users/ftian1/followers",
"following_url": "https://api.github.com/users/ftian1/following{/other_user}",
"gists_url": "https://api.github.com/users/ftian1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ftian1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ftian1/subscriptions",
"organizations_url": "https://api.github.com/users/ftian1/orgs",
"repos_url": "https://api.github.com/users/ftian1/repos",
"events_url": "https://api.github.com/users/ftian1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ftian1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"This looks very exciting! The API suggested makes sense to me, I'd need to see the whole code to comment more :-)\r\nBy all means, please open a PR and tag me for review!",
"@ftian1 [Optimum](https://github.com/huggingface/optimum) has a quantization API which is device- and backend-agnostic. It also offers other kinds of optimization for accelerating inference so I think it makes more sense to keep all this in the same place. Do not hesitate to open a PR or create an issue there with your suggestions and we will be happy to discuss it!",
"@regisss @sgugger thanks for the comments. why we think it's valuable to contribute to transformers is because it would bring better user experience with fewer line code changes comparing with Optimum.\r\n\r\nwe can have a PR at first and then do further discussion with your guidance. thanks",
"@ftian1 While I agree with you that it would provide a better UX for some users, there are a couple of points that make me think about it twice:\r\n\r\n- I do not think that such a quantization API should be tailored for accuracy-aware quantization only or for a specific backend. Users should be able to use ONNX Runtime, Torch FX, Intel Neural Compressor or any other available backend. The ways these backends work and are configured are different from each other, so the API should be able to manage this. Optimum enables to do it so a possible solution would be to have a wrapper around Optimum's quantization API.\r\n- I believe that having different places in the Hugging Face ecosystem where users can perform quantization will create quite a lot of confusion and will draw them away from other cool Optimum's optimization features, making it more difficult to deploy fast optimized models.",
"@regisss thanks for the comments. I will invite Optimum owner @echarlaix to review this RFC and see what's her inputs.",
"Hi @ftian1,\r\n\r\nWe already support INC quantization aware training (as well as static and dynamic quantization) in `optimum` so it would be redundant to add this feature to `transformers` in my opinion and could create some confusion on which library to use to perform optimization as @regisss mentionned. As we already discussed, I think it makes sense to keep everything related to `neural-compressor` in `optimum` and to increase promotion around it. Also happy to discuss any modifications you would like us to apply on the `IncTrainer` ! (especially given our plan to refactorize the different `IncQuantizer` and `IncTrainer` classes after `neural-compressor` next big release)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,669
| 1,669
|
NONE
| null |
### Feature request
Add quantization capability to the Transformers Trainer API
### Motivation
Quantization is one of the most popular model compression techniques and is widely used in industry. On Intel(R) Xeon platforms and Nvidia GPU platforms it can bring significant performance speedups with only slight accuracy loss. Having an easy-to-use quantization interface in Transformers will benefit customers.
Compared with the vanilla PyTorch quantization feature, the proposed interface supports accuracy-aware tuning to address the accuracy loss commonly incurred when applying quantization.
**Design**
The proposed interface is like below:
```python
class AccuracyAwareTuningConf:
    def __init__(self, accuracy_criterion='relative', accuracy_loss=0.01, metric_name='F1', timeout=0):
        # The tuning configuration used to define the accuracy goal.
        # Args:
        #   accuracy_criterion: String. Relative loss or absolute loss.
        #   accuracy_loss: Float. The tolerated accuracy loss value.
        #   metric_name: String. The metric the user cares about.
        #   timeout: Integer. 0 means early stop; non-zero means return within the defined time scope (unit is minutes).
        ...

class Trainer:
    ...
    def quantize(self, model=None, approach='static', calib_dataset=None, tuning_config=None):
        # The interface used to quantize the model.
        # Args:
        #   model: Optional. If None, the model from Trainer initialization will be used.
        #   approach: String. "auto", "static" and "dynamic" are the three supported quantization approaches.
        #   calib_dataset: Optional. If None, the train_dataset from Trainer initialization will be used.
        #   tuning_config: Optional. If None, quantization is done without tuning.
        ...
```
Note this interface is device agnostic. user could specify the device running calibration and quantization by training_args.device.
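To make the `accuracy_criterion` semantics concrete, here is a minimal sketch of how the relative vs. absolute accuracy goal could be checked during tuning (the helper name and shape are illustrative, not part of the proposed API):

```python
def meets_accuracy_goal(baseline_metric, quantized_metric,
                        accuracy_criterion='relative', accuracy_loss=0.01):
    """Return True if the quantized model's metric stays within the tolerated loss."""
    drop = baseline_metric - quantized_metric
    if accuracy_criterion == 'relative':
        # tolerated loss is a fraction of the FP32 baseline
        return drop / baseline_metric <= accuracy_loss
    if accuracy_criterion == 'absolute':
        # tolerated loss is an absolute metric difference
        return drop <= accuracy_loss
    raise ValueError(f"unknown accuracy_criterion: {accuracy_criterion}")

# e.g. FP32 F1 = 0.9069, INT8 F1 = 0.8990 -> relative drop ~0.87%, within a 1% goal
print(meets_accuracy_goal(0.9069, 0.8990, 'relative', 0.01))  # True
```

With a check like this, the tuner would keep trying quantization configurations until the goal is met or the `timeout` expires.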
**Use Case**
Take Transformers Text-Classification task as an example, user could add below code to quantize model.
***Quantization without tuning***
```python
# Initialize our Trainer [original code]
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# one line code change to do quantization without accuracy-aware tuning
q_model = trainer.quantize()
```
***Quantization with tuning***
```python
# Initialize our Trainer [original code]
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# two lines of code change to do quantization with accuracy-aware tuning
conf = AccuracyAwareTuningConf(accuracy_loss=0.005, metric_name='F1')
q_model = trainer.quantize(tuning_config=conf)
```
**Performance**
Below are the accuracy and performance data of some quantized NLP models, measured on an Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz with 4 cores per instance and batch size 1.
<table class="tg">
<thead>
<tr>
<th class="tg-9wq8" rowspan="3"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">Model</span> </th>
<th class="tg-9wq8" colspan="3" rowspan="2"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">Accuracy</span> </th>
<th class="tg-9wq8" colspan="3" rowspan="2"> <br> <span style="font-weight:bold;color:#24292F">Throughput (samples/sec)</span> </th>
</tr>
<tr>
</tr>
<tr>
<th class="tg-9wq8"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">INT8</span> </th>
<th class="tg-9wq8"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">FP32</span> </th>
<th class="tg-9wq8"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">Acc Ratio[(INT8-FP32)/FP32]</span> </th>
<th class="tg-9wq8"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">INT8</span> </th>
<th class="tg-9wq8"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">FP32</span> </th>
<th class="tg-9wq8"> <br><span style="font-weight:bold;font-style:normal;color:#24292F">Performance Ratio[INT8/FP32]</span> </th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">Barthez MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">83.92%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">83.81%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">0.14%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">161.06</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">89.61</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.80x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">BERT base MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">89.90%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">90.69%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.88%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">244.27</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">125.28</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.95x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">BERT base RTE</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">69.31%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">69.68%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.52%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">259.21</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">125.72</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">2.06x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">BERT base SST2</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">91.06%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">91.86%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.87%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">262.73</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">125.69</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">2.09x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">BERT large MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">89.50%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">90.38%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.97%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">88.92</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">36.55</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">2.43x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">CamemBERT base MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">86.70%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">86.82%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.14%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">236.6</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">121.81</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.94x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">Deberta MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">90.88%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">90.91%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.04%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">149.76</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">84.72</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.77x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">DistilBERT base MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">88.23%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">89.16%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-1.05%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">426.4</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">246.13</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.73x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">mBart WNLI</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">56.34%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">56.34%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">0.00%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">66.23</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">30.86</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">2.15x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">lvwerra/pegasus-samsum</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">42.39</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">42.67</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">-0.67%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">3.86</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.14</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">3.38x</span> </td>
</tr>
<tr>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">Roberta Base MRPC</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">88.25%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">88.18%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">0.08%</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">245.05</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">123.53</span> </td>
<td class="tg-lboi"> <br><span style="font-weight:normal;font-style:normal;color:#24292F">1.98x</span> </td>
</tr>
</tbody>
</table>
### Your contribution
We are glad to contribute PR to HF community if this idea gets approved.
We would love to hear feedback from HF maintainer about this proposal.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19510/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19509
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19509/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19509/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19509/events
|
https://github.com/huggingface/transformers/issues/19509
| 1,405,566,265
|
I_kwDOCUB6oc5Txz05
| 19,509
|
INF encountered when using sampling with temperature.
|
{
"login": "ElliottYan",
"id": 10862038,
"node_id": "MDQ6VXNlcjEwODYyMDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/10862038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElliottYan",
"html_url": "https://github.com/ElliottYan",
"followers_url": "https://api.github.com/users/ElliottYan/followers",
"following_url": "https://api.github.com/users/ElliottYan/following{/other_user}",
"gists_url": "https://api.github.com/users/ElliottYan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ElliottYan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElliottYan/subscriptions",
"organizations_url": "https://api.github.com/users/ElliottYan/orgs",
"repos_url": "https://api.github.com/users/ElliottYan/repos",
"events_url": "https://api.github.com/users/ElliottYan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ElliottYan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"Hi @ElliottYan 👋 Thank you for pointing it out, it seems like a bug indeed. I will look into it.",
"Great! Looking forward to your solution. \r\nFor now, I just swap these two lines (L2566 && 2567) and the error disappears. But I'm not sure what I do is correct. ",
"Are you using half or full precision here? Also `inf` values are not necessarily the reason for a bug, it might also be that `mBart` has some default logit processor settings that 0 out values which the lead to `inf` (cc @gante) "
] | 1,665
| 1,668
| null |
NONE
| null |
### System Info
latest transformers version == 4.24.0
When generating samples with mBART, I encounter this problem:

Looking deeper into the code, I find the problem stems from the beam score added to `next_token_scores` here:
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/generation_utils.py#L2566
The original value of `beam_scores` is 0, but when using a temperature like 0.5, the accumulated score is also divided by the temperature value in `logits_warper` and grows larger and larger, eventually causing `next_token_scores` to overflow.
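A toy numeric sketch (illustrative only, not the actual `generate()` code) of why this compounds: if the accumulated beam score is warped again at every decoding step, dividing by a temperature below 1 doubles its magnitude each step until the float overflows.

```python
import math

temperature = 0.5
beam_score = -1.0  # accumulated log-prob for one beam
for _ in range(1100):  # decoding steps
    # if warping happens after the beam score is added to the logits,
    # the old score is re-divided by the temperature at every step
    beam_score = beam_score / temperature
print(math.isinf(beam_score))  # True: the score has overflowed to -inf
```

Warping the fresh token logits *before* adding the accumulated `beam_scores` avoids rescaling the old score repeatedly.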
### Who can help?
@patrickvonplaten @Narsil @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**I provide a simple code that can reproduce this issue.**
```python
import transformers
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = model.cuda()

src = 'In einem Notruf erzählte Professor Shannon Lamb mit einer etwas zittrigen Stimme der Polizei, dass er seine Freundin erschossen habe und dass die Beamten zu seinem Haus kommen müssten.'
encoded_hi = tokenizer(src, return_tensors="pt", padding=True).to('cuda')
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id['en_XX'], temperature=0.5, do_sample=True, num_beams=10, num_return_sequences=10)
tgt_txt = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
### Expected behavior
I think this should be solved but I'm not sure about the effect of the beam_scores.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19509/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19508
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19508/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19508/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19508/events
|
https://github.com/huggingface/transformers/pull/19508
| 1,405,520,128
|
PR_kwDOCUB6oc5AncN_
| 19,508
|
Fix fairseq wav2vec2-xls-r pretrained weights conversion scripts
|
{
"login": "heatz123",
"id": 33706329,
"node_id": "MDQ6VXNlcjMzNzA2MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/33706329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heatz123",
"html_url": "https://github.com/heatz123",
"followers_url": "https://api.github.com/users/heatz123/followers",
"following_url": "https://api.github.com/users/heatz123/following{/other_user}",
"gists_url": "https://api.github.com/users/heatz123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heatz123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heatz123/subscriptions",
"organizations_url": "https://api.github.com/users/heatz123/orgs",
"repos_url": "https://api.github.com/users/heatz123/repos",
"events_url": "https://api.github.com/users/heatz123/events{/privacy}",
"received_events_url": "https://api.github.com/users/heatz123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes: #19319
This PR fixes a bug in the wav2vec2 fairseq weights conversion script, where [wav2vec2-xls-r-kind](https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec/xlsr) weight files fail to load on this line.
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py#L249
This can be resolved by specifying the fairseq task as `audio_pretraining` and loading the fairseq weights within that task context.
This change follows the way the fairseq library loads pretrained model weights, by passing the `task` argument on the CLI.
Conversion of other non-finetuned weights works without any side effects (tested with [wav2vec2-base](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) and [wav2vec2-conformer](https://dl.fbaipublicfiles.com/fairseq/conformer/wav2vec2/librilight/LL_relpos_PT_no_FT)).
Referenced model weights are in the following url: https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19508/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19508",
"html_url": "https://github.com/huggingface/transformers/pull/19508",
"diff_url": "https://github.com/huggingface/transformers/pull/19508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19508.patch",
"merged_at": 1665658122000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19507
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19507/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19507/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19507/events
|
https://github.com/huggingface/transformers/issues/19507
| 1,405,484,790
|
I_kwDOCUB6oc5Txf72
| 19,507
|
Create dependencies file
|
{
"login": "DIvkov575",
"id": 79413560,
"node_id": "MDQ6VXNlcjc5NDEzNTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/79413560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DIvkov575",
"html_url": "https://github.com/DIvkov575",
"followers_url": "https://api.github.com/users/DIvkov575/followers",
"following_url": "https://api.github.com/users/DIvkov575/following{/other_user}",
"gists_url": "https://api.github.com/users/DIvkov575/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DIvkov575/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DIvkov575/subscriptions",
"organizations_url": "https://api.github.com/users/DIvkov575/orgs",
"repos_url": "https://api.github.com/users/DIvkov575/repos",
"events_url": "https://api.github.com/users/DIvkov575/events{/privacy}",
"received_events_url": "https://api.github.com/users/DIvkov575/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### Feature request
### Motivation
### Your contribution
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19507/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19506
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19506/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19506/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19506/events
|
https://github.com/huggingface/transformers/pull/19506
| 1,405,442,010
|
PR_kwDOCUB6oc5AnMQ1
| 19,506
|
update doc for perf_train_cpu_many, add oneccl_bindings_for_pytorch 1.12.100
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @yao-matrix @liangan1 please have a review",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,666
| 1,665
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19506/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19506",
"html_url": "https://github.com/huggingface/transformers/pull/19506",
"diff_url": "https://github.com/huggingface/transformers/pull/19506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19506.patch",
"merged_at": 1665543260000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19505
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19505/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19505/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19505/events
|
https://github.com/huggingface/transformers/issues/19505
| 1,405,399,972
|
I_kwDOCUB6oc5TxLOk
| 19,505
|
Special Language Token for PLBART needs to be updated
|
{
"login": "wasiahmad",
"id": 17520413,
"node_id": "MDQ6VXNlcjE3NTIwNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/17520413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wasiahmad",
"html_url": "https://github.com/wasiahmad",
"followers_url": "https://api.github.com/users/wasiahmad/followers",
"following_url": "https://api.github.com/users/wasiahmad/following{/other_user}",
"gists_url": "https://api.github.com/users/wasiahmad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wasiahmad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasiahmad/subscriptions",
"organizations_url": "https://api.github.com/users/wasiahmad/orgs",
"repos_url": "https://api.github.com/users/wasiahmad/repos",
"events_url": "https://api.github.com/users/wasiahmad/events{/privacy}",
"received_events_url": "https://api.github.com/users/wasiahmad/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gchhablani, could you take a look at this?",
"@LysandreJik I can have a look at this if it isn't being looked at.",
"Please go ahead, thank you @jordiclive!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
### System Info
The `FAIRSEQ_LANGUAGE_CODES` in PLBartTokenizer [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/plbart/tokenization_plbart.py#L90) needs to be updated as follows.
```
FAIRSEQ_LANGUAGE_CODES = {
"base": ["__java__", "__python__", "__en_XX__"],
"multi": ["__java__", "__python__", "__en_XX__", "__javascript__", "__php__", "__ruby__", "__go__"],
}
```
The current PLBartTokenizer treats `java` as a special token and therefore removes it when decoding. An example is given below.
```
code = "public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
tokenizer = model_tokenizer_class.from_pretrained("uclanlp/plbart-base", language_codes="base")
model_inputs = tokenizer([code])
print(tokenizer.decode(model_inputs['input_ids'][0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
# The code output is: "public void METHOD_1 ( TYPE_1 VAR_1 ) throws .lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
```
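A minimal, library-free sketch of the mechanism behind this bug (the names here are illustrative, not the actual PLBartTokenizer internals): when plain language names like `java` are registered as special tokens, `skip_special_tokens=True` silently strips matching pieces from the decoded output, whereas the `__java__`-style codes proposed above cannot collide with ordinary code tokens.

```python
# Illustrative only: mimics how skip_special_tokens drops language-code tokens.
# "java" as a bare special token collides with real source-code content.
special_tokens = {"java", "python", "en_XX"}  # current (buggy) base codes

def decode(tokens, skip_special_tokens=True):
    # Any token that matches a registered special token is removed on decode.
    kept = [t for t in tokens if not (skip_special_tokens and t in special_tokens)]
    return " ".join(kept)

tokens = ["public", "void", "METHOD_1", "throws", "java", ".lang.Exception"]
print(decode(tokens))
# "java" is silently dropped, mirroring the truncated output reported above
```

With prefixed codes such as `__java__` in `special_tokens`, the bare token `java` would survive decoding unchanged.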
### Who can help?
@gunjan
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
code = "public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
tokenizer = model_tokenizer_class.from_pretrained("uclanlp/plbart-base", language_codes="base")
model_inputs = tokenizer([code])
print(tokenizer.decode(model_inputs['input_ids'][0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
# public void METHOD_1 ( TYPE_1 VAR_1 ) throws .lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }
```
### Expected behavior
```
code = "public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
tokenizer = model_tokenizer_class.from_pretrained("uclanlp/plbart-base", language_codes="base")
model_inputs = tokenizer([code])
print(tokenizer.decode(model_inputs['input_ids'][0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
# public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19505/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19504
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19504/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19504/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19504/events
|
https://github.com/huggingface/transformers/pull/19504
| 1,405,380,232
|
PR_kwDOCUB6oc5Am_vK
| 19,504
|
[Re-submit] Compute true loss Flax examples
|
{
"login": "duongna21",
"id": 38061659,
"node_id": "MDQ6VXNlcjM4MDYxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38061659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duongna21",
"html_url": "https://github.com/duongna21",
"followers_url": "https://api.github.com/users/duongna21/followers",
"following_url": "https://api.github.com/users/duongna21/following{/other_user}",
"gists_url": "https://api.github.com/users/duongna21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duongna21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duongna21/subscriptions",
"organizations_url": "https://api.github.com/users/duongna21/orgs",
"repos_url": "https://api.github.com/users/duongna21/repos",
"events_url": "https://api.github.com/users/duongna21/events{/privacy}",
"received_events_url": "https://api.github.com/users/duongna21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Re-submit #18458.
cc @patrickvonplaten @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19504/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19504",
"html_url": "https://github.com/huggingface/transformers/pull/19504",
"diff_url": "https://github.com/huggingface/transformers/pull/19504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19504.patch",
"merged_at": 1665657216000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19503
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19503/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19503/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19503/events
|
https://github.com/huggingface/transformers/pull/19503
| 1,405,355,979
|
PR_kwDOCUB6oc5Am6wY
| 19,503
|
Create the arange tensor on device for enabling CUDA-Graph for Clip Encoder
|
{
"login": "RezaYazdaniAminabadi",
"id": 44502768,
"node_id": "MDQ6VXNlcjQ0NTAyNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/44502768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RezaYazdaniAminabadi",
"html_url": "https://github.com/RezaYazdaniAminabadi",
"followers_url": "https://api.github.com/users/RezaYazdaniAminabadi/followers",
"following_url": "https://api.github.com/users/RezaYazdaniAminabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/RezaYazdaniAminabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RezaYazdaniAminabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RezaYazdaniAminabadi/subscriptions",
"organizations_url": "https://api.github.com/users/RezaYazdaniAminabadi/orgs",
"repos_url": "https://api.github.com/users/RezaYazdaniAminabadi/repos",
"events_url": "https://api.github.com/users/RezaYazdaniAminabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/RezaYazdaniAminabadi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR changes the allocation of a tensor in [modeling_clip.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_clip.py#L665) so that it happens on the device side, enabling the use of CUDA-Graph in DeepSpeed-Inference, which helps improve inference performance for the Stable-Diffusion model. Here is the [PR](https://github.com/microsoft/DeepSpeed/pull/2381) that includes the optimization for improving SD performance.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19503/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19503/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19503",
"html_url": "https://github.com/huggingface/transformers/pull/19503",
"diff_url": "https://github.com/huggingface/transformers/pull/19503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19503.patch",
"merged_at": 1665610371000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19502
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19502/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19502/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19502/events
|
https://github.com/huggingface/transformers/pull/19502
| 1,405,308,380
|
PR_kwDOCUB6oc5AmwtM
| 19,502
|
Add multi-node conditions in trainer_qa.py and trainer_seq2seq.py
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The QA example currently fails during evaluation when it is run on several nodes. This happens because secondary nodes try to write to `output_dir`, while this directory exists only on the main node.
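A hedged sketch of the kind of guard this PR describes (illustrative, not the exact `trainer_qa.py` diff; the function and flag names are hypothetical): only the main process writes to `output_dir`, so secondary nodes never touch a directory that may not exist on their filesystem.

```python
import os

def save_predictions(output_dir, predictions, is_world_process_zero):
    """Write predictions on the main node only (illustrative helper)."""
    # Secondary nodes return early: output_dir may exist only on the main node.
    if not is_world_process_zero:
        return False
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, "predictions.txt"), "w") as f:
        f.write("\n".join(predictions))
    return True
```

In the real Trainer, the equivalent check is made against the process/world rank before any filesystem write during evaluation.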
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19502/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19502",
"html_url": "https://github.com/huggingface/transformers/pull/19502",
"diff_url": "https://github.com/huggingface/transformers/pull/19502.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19502.patch",
"merged_at": 1665542936000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19501
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19501/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19501/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19501/events
|
https://github.com/huggingface/transformers/pull/19501
| 1,405,299,601
|
PR_kwDOCUB6oc5Amu10
| 19,501
|
Remove roberta dependency from longformer fast tokenizer
|
{
"login": "sirmammingtonham",
"id": 3794630,
"node_id": "MDQ6VXNlcjM3OTQ2MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3794630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirmammingtonham",
"html_url": "https://github.com/sirmammingtonham",
"followers_url": "https://api.github.com/users/sirmammingtonham/followers",
"following_url": "https://api.github.com/users/sirmammingtonham/following{/other_user}",
"gists_url": "https://api.github.com/users/sirmammingtonham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sirmammingtonham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sirmammingtonham/subscriptions",
"organizations_url": "https://api.github.com/users/sirmammingtonham/orgs",
"repos_url": "https://api.github.com/users/sirmammingtonham/repos",
"events_url": "https://api.github.com/users/sirmammingtonham/events{/privacy}",
"received_events_url": "https://api.github.com/users/sirmammingtonham/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks again for your contribution!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR removes the RoBERTa fast tokenizer dependency from the Longformer fast tokenizer, as tasked in #19303.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19501/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19501",
"html_url": "https://github.com/huggingface/transformers/pull/19501",
"diff_url": "https://github.com/huggingface/transformers/pull/19501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19501.patch",
"merged_at": 1665583920000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19500
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19500/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19500/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19500/events
|
https://github.com/huggingface/transformers/issues/19500
| 1,405,246,559
|
I_kwDOCUB6oc5Twlxf
| 19,500
|
Misalignment between documentation and implementation of mBART50 tokenisation for the decoder
|
{
"login": "devaansh100",
"id": 56511236,
"node_id": "MDQ6VXNlcjU2NTExMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/56511236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devaansh100",
"html_url": "https://github.com/devaansh100",
"followers_url": "https://api.github.com/users/devaansh100/followers",
"following_url": "https://api.github.com/users/devaansh100/following{/other_user}",
"gists_url": "https://api.github.com/users/devaansh100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devaansh100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devaansh100/subscriptions",
"organizations_url": "https://api.github.com/users/devaansh100/orgs",
"repos_url": "https://api.github.com/users/devaansh100/repos",
"events_url": "https://api.github.com/users/devaansh100/events{/privacy}",
"received_events_url": "https://api.github.com/users/devaansh100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ArthurZucker, when you have bandwidth, would you like to take a look at this?",
"Not stale, still looking forward to a response!",
"Hey! This is very similar to #18133.\r\nFirst, I was not really able to reproduce the bug as the output of \r\n```python \r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"facebook/mbart-large-50\", src_lang=\"en_XX\", tgt_lang=\"ro_RO\")\r\n\r\nsrc_text = \" UN Chief Says There Is No Military Solution in Syria\"\r\ntgt_text = \"Şeful ONU declară că nu există o soluţie militară în Siria\"\r\n\r\nmodel_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors=\"pt\").input_ids\r\n```\r\nGave : \r\n```\r\ntensor([[250004, 8274, 127873, 25916, 7, 8622, 2071, 438, 67485,\r\n 53, 187895, 23, 51712, 2]])\r\n````\r\n\r\nBut in any case, I think that you are right, the documentation for both model is not aligned as the input is not shifted in the tokenizer but rather in the model. This was already mentioned so might as well adresse it !\r\n\r\n\r\n",
"Hey! While the output of the tokenizer is correct(both input_ids and labels in the same format), the labels are going to pass through the `shift_tokens_right` to create the `decoder_input_ids`.\r\n\r\nThe `shift_tokens_right` expects the LID token at the end, however, `MBart50Tokenizer` will give an EOS token, therefore, the input to the decoder will end up being wrong.\r\n\r\nRegarding the reproduction of the issue - do you mean reproducing the referenced issue? If yes, they are using the `MBartTokenizer`, while the code mentioned here uses the `MBart50Tokenizer`",
"I'll come back to this soon 😉 ",
"It seems that this has been confusing a lot of people (including me, see #20610, ). \r\n\r\nLet's work with your example:\r\n- src_text : `en_XX UN Chief Says There Is No Military Solution in Syria</s>`\r\n- labels : `ro_RO Şeful ONU declară că nu există o soluţie militară în Siria</s>`\r\n- [shifted labels](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mbart/modeling_mbart.py#L1348-L1349) : `</s>ro_RO Şeful ONU declară că nu există o soluţie militară în Siria` (= decoder_inputs_ids)\r\n\r\nWe are interested in supervised training where you feed the model with `inputs_ids` and `labels`. For most of the encoder decoder models, the labels are shifted to the right, so that the model will predict the next token in a MLM manner. \r\n\r\nThis means that if the `decoder_input_ids` are \r\n```\r\n[ 2, 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577]\r\n```\r\nThen the model (if it is a perfect model) predicts \r\n```\r\n[250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]\r\n```\r\nWhich is then compared to the loss. \r\nThat is also why when you generate (inference) you force the beginning token with `</s>`.\r\n\r\n\r\n",
"Thanks for the clarification, @ArthurZucker! It still seems a bit wrong to expect the model to predict `ro_RO` given `</s>`. The comment `wrap the last non pad token (the <LID> token)` in code is also somewhat confusing! ",
"I agree with @LoicGrobol.\r\n\r\nI also want to clarify this example from the [docs](https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50):\r\n\r\n```python\r\nfrom transformers import MBartForConditionalGeneration, MBart50TokenizerFast\r\n\r\narticle_hi = \"संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है\"\r\narticle_ar = \"الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا.\"\r\n\r\nmodel = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-50-many-to-many-mmt\")\r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"facebook/mbart-large-50-many-to-many-mmt\")\r\n\r\n# translate Hindi to French\r\ntokenizer.src_lang = \"hi_IN\"\r\nencoded_hi = tokenizer(article_hi, return_tensors=\"pt\")\r\ngenerated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id[\"fr_XX\"])\r\ntokenizer.batch_decode(generated_tokens, skip_special_tokens=True)\r\n# => \"Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria.\"\r\n\r\n# translate Arabic to English\r\ntokenizer.src_lang = \"ar_AR\"\r\nencoded_ar = tokenizer(article_ar, return_tensors=\"pt\")\r\ngenerated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id[\"en_XX\"])\r\ntokenizer.batch_decode(generated_tokens, skip_special_tokens=True)\r\n# => \"The Secretary-General of the United Nations says there is no military solution in Syria.\"\r\n```\r\n\r\n> That is also why when you generate (inference) you force the beginning token with ```</s>```.\r\n\r\nIs this the same ```forced_bos_token``` mentioned in the code? If yes, then should we force it to be the ```</s>``` token, rather than ```ro_RO``` as done in the code?",
"Okay, indeed @LoicGrobol, we should not compute the loss on all the forced decoder ids. This seems to apply to a few models, so I will open a PR to fix these and add some documentation to properly explain all of this. \r\n\r\n@devaansh100, no, we have the `bos_token_id # </s>` and the `forced_decoder_ids #<LID>`, which should ensure that we start with two tokens. \r\n\r\nThanks to both of you for your feedback. ",
"Note that currently this training seems to be needed to work well with `generate` in e.g. mBART, which uses `</s> LANGID` as its forced prompt (the loss on the LANGID could indeed be skipped, though). I guess that would have to be changed too, but I don't know how to make that change work with existing pretrained models.",
"Not sure I understand why we would need to change `generate`? We should not.\r\n",
"The only logic that needs to be updated is computing the loss on the lang token. The rest of the training procedure is still correct! It's just that we don't want the model to learn/update its distribution when predicting the second token, because it can vary at training time but will always be fixed at inference.",
"Got it! I guess the only change would then be in the \"labels\" from the tokenizer - where there was LANGID initially, we would have a 1/-100?",
"Yep! That should be it 😉 ",
"Thank you for all the help!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,682
| 1,682
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj @SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The bug has been reproduced in the outputs of [this](https://colab.research.google.com/drive/1XUHZNKdxMLnV3AV8eZtKj7LyGNVL63Dy?usp=sharing) colab notebook. The steps to reproduce it are:
1. Make a copy of the notebook.
2. Execute the first 2 cells.
3. In the source file for mbart (`/usr/local/bin/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py`), on line 1352 (above `outputs = self.model(...`, after the `if labels is not None` block), add `print(f'Decoder Input Ids: {decoder_input_ids}\nLabels: {labels}')`.
4. Restart the runtime for the changes in the library to take place.
5. Run the third cell. The output is:
```
Decoder Input Ids: tensor([[ 2, 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362,
315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577]])
Labels: tensor([[250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315,
42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]])
```
### Expected behavior
I was looking into fine-tuning `facebook/mbart-large-50` through [this](https://huggingface.co/docs/transformers/main/en/model_doc/mbart#training-of-mbart50) example in the documentation. As per the description, the expected input for the model is of the form `[lang_id] tokens [eos]` for both the encoder and the decoder.
While the `MBart50Tokenizer` produces outputs in the expected format, the `decoder_input_ids` get transformed into an incorrect form - `[eos] [lang_id] tokens`. Specifically, I believe the output should have been the following (do correct me if I am wrong here, though):
```
Decoder Input Ids: tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362,
315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]])
Labels: tensor([[47711, 7844, 127666, 8, 18347, 18147, 1362, 315,
42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])
```
This is caused by the `shift_tokens_right` function, which does not seem to be adapted for MBart-50. As per its docstring,
> wrap the last non pad token (the [LID] token)
however, for MBart-50, the last non-pad token is an `eos`.
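For reference, a minimal sketch of the wrapping behaviour just described, using plain Python lists instead of tensors and ignoring padding subtleties (this is an illustration, not the library's implementation):

```python
# Minimal sketch of an mBART-style shift_tokens_right, using plain Python
# lists instead of tensors and ignoring padding subtleties. Illustration
# only - not the library's implementation.
def shift_tokens_right_sketch(input_ids, pad_token_id):
    shifted = []
    for row in input_ids:
        # index of the last non-pad token (assumed by the docstring to be <LID>)
        last = max(i for i, tok in enumerate(row) if tok != pad_token_id)
        # wrap that token to the front, shifting everything else right
        shifted.append([row[last]] + row[:last] + row[last + 1:])
    return shifted

# With MBart50Tokenizer labels of the form `[lang_id] tokens [eos]`,
# the wrapped token is the EOS (2), not the language id:
labels = [[250020, 47711, 7844, 2]]
print(shift_tokens_right_sketch(labels, pad_token_id=1))
# -> [[2, 250020, 47711, 7844]]
```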
**Additional question:** Why should the `[eos]` token predict the `[lang_id]`? This happens in both mbart and mbart50. If not, should the last token in the labels be `-100`? If yes, there would be a subsequent issue, since the labels matrix from the tokenizer seems to be using `1` as the padding token instead of `-100`. Do let me know if I should open a separate issue for that!
If this bug seems legitimate, I would be glad to provide a fix! I believe the `labels` key from `MBart50Tokenizer` would have to be updated to give the same output as `MBartTokenizer`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19500/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19500/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19499
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19499/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19499/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19499/events
|
https://github.com/huggingface/transformers/pull/19499
| 1,405,243,559
|
PR_kwDOCUB6oc5Ami9f
| 19,499
|
bart config changes
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not sure why it fails the check_code_quality test. Doing exactly what is written here: https://github.com/huggingface/transformers/pull/19485\r\n",
"> Not sure why it fails the check_code_quality test. Doing exactly what is written here: #19485\r\n\r\nHi @imarekkus Could you try to run `make style` and see what happens?\r\nAlso, please do not change the `configuration_bert.py` in this PR, it is done in #19485, thank you 🙏 "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
PR for issue below:

Test passed:

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19499/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19499",
"html_url": "https://github.com/huggingface/transformers/pull/19499",
"diff_url": "https://github.com/huggingface/transformers/pull/19499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19499.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19498
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19498/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19498/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19498/events
|
https://github.com/huggingface/transformers/pull/19498
| 1,405,223,895
|
PR_kwDOCUB6oc5Ame00
| 19,498
|
Add a decorator for flaky tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a new decorator to mark flaky tests, which will then automatically re-run them up to five times when they fail (that parameter can be adapted). As a demo, I've marked as flaky three tests I have recently seen fail for no reason.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19498/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19498/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19498",
"html_url": "https://github.com/huggingface/transformers/pull/19498",
"diff_url": "https://github.com/huggingface/transformers/pull/19498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19498.patch",
"merged_at": 1665597617000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19497
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19497/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19497/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19497/events
|
https://github.com/huggingface/transformers/pull/19497
| 1,405,162,234
|
PR_kwDOCUB6oc5AmRrz
| 19,497
|
Update Whisper docs clarifying inference support for long-form decoding
|
{
"login": "akashmjn",
"id": 13268767,
"node_id": "MDQ6VXNlcjEzMjY4NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13268767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akashmjn",
"html_url": "https://github.com/akashmjn",
"followers_url": "https://api.github.com/users/akashmjn/followers",
"following_url": "https://api.github.com/users/akashmjn/following{/other_user}",
"gists_url": "https://api.github.com/users/akashmjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akashmjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akashmjn/subscriptions",
"organizations_url": "https://api.github.com/users/akashmjn/orgs",
"repos_url": "https://api.github.com/users/akashmjn/repos",
"events_url": "https://api.github.com/users/akashmjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/akashmjn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the addition 😉 "
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
@ArthurZucker @patrickvonplaten
I have updated the [Whisper docs page](https://huggingface.co/docs/transformers/v4.23.1/en/model_doc/whisper#transformers.WhisperProcessor) to clarify that the current `decode()` implementation doesn't support long-form yet. Hope it'll save folks some time, vs digging in to find that out :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19497/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19497",
"html_url": "https://github.com/huggingface/transformers/pull/19497",
"diff_url": "https://github.com/huggingface/transformers/pull/19497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19497.patch",
"merged_at": 1665650343000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19496
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19496/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19496/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19496/events
|
https://github.com/huggingface/transformers/pull/19496
| 1,405,151,564
|
PR_kwDOCUB6oc5AmPb2
| 19,496
|
Avoid Push CI failing to report due to many commits being merged
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger Probably it's good for L to know about this change - your call :-)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
We have had an increasing number of commits merged recently, and when several land in a very short period of time (4 merges in ~1 min yesterday), we get an error when using `actions/checkout@v2` in the `workflow_run` event for **Push CI**
```bash
fatal: reference is not a tree: 5f5e264a12956bd7cce47dcb422b80ed68e4c24e
```
So this PR increases the fetch depth to 20, and hopefully we are safe with this number 😆
```
fetch-depth: 20
```
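The corresponding checkout step then looks roughly like this (a sketch; the step name is illustrative, not the exact workflow file):

```yaml
- name: Checkout transformers
  uses: actions/checkout@v2
  with:
    # Fetch the last 20 commits so the SHA targeted by the
    # workflow_run event is still reachable.
    fetch-depth: 20
```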
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19496/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19496",
"html_url": "https://github.com/huggingface/transformers/pull/19496",
"diff_url": "https://github.com/huggingface/transformers/pull/19496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19496.patch",
"merged_at": 1665559505000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19495
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19495/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19495/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19495/events
|
https://github.com/huggingface/transformers/pull/19495
| 1,405,123,801
|
PR_kwDOCUB6oc5AmJm3
| 19,495
|
Bert config changes
|
{
"login": "imarekkus",
"id": 49692939,
"node_id": "MDQ6VXNlcjQ5NjkyOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49692939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imarekkus",
"html_url": "https://github.com/imarekkus",
"followers_url": "https://api.github.com/users/imarekkus/followers",
"following_url": "https://api.github.com/users/imarekkus/following{/other_user}",
"gists_url": "https://api.github.com/users/imarekkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imarekkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imarekkus/subscriptions",
"organizations_url": "https://api.github.com/users/imarekkus/orgs",
"repos_url": "https://api.github.com/users/imarekkus/repos",
"events_url": "https://api.github.com/users/imarekkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/imarekkus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19495). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
Fix done in accordance with the issue below

Tests passed

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19495/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19495",
"html_url": "https://github.com/huggingface/transformers/pull/19495",
"diff_url": "https://github.com/huggingface/transformers/pull/19495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19495.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19494
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19494/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19494/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19494/events
|
https://github.com/huggingface/transformers/pull/19494
| 1,405,113,130
|
PR_kwDOCUB6oc5AmHXG
| 19,494
|
Fix grad loss computation
|
{
"login": "andyehrenberg",
"id": 32784181,
"node_id": "MDQ6VXNlcjMyNzg0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyehrenberg",
"html_url": "https://github.com/andyehrenberg",
"followers_url": "https://api.github.com/users/andyehrenberg/followers",
"following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions",
"organizations_url": "https://api.github.com/users/andyehrenberg/orgs",
"repos_url": "https://api.github.com/users/andyehrenberg/repos",
"events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyehrenberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19494). All of your documentation changes will be reflected on that endpoint.",
"Hey @andyehrenberg! This is a great spot! Indeed the loss isn't computed correctly. I've formalised an argument for this mathematically here (not rendering very well... easier viewed in markdown or LaTex):\r\n\r\n<details>\r\n\r\n<summary> Mathematical Proof </summary>\r\n\r\nTechnically speaking, in the `train_step`, the `pmap` won't compute a 'true' mean over devices. Here, what we're doing is computing a normalised loss on each device, and then averaging these losses over devices. This isn't strictly equal to summing the losses over all devices, and then dividing by the number of samples. \r\n\r\nLet $K$ denote the number of devices. Denote the loss on the $i$-th device as $L_i$ (`loss.sum()`) and the number of samples $N_i$ (`label_mask.sum()`). In the `loss_fn`, we compute the normalised loss on each device (`loss.sum() / label_mask.sum()`):\r\n\r\n$$\\bar{L}_i = \\frac{L_i}{N_i}$$\r\n\r\nand then average over devices with the `pmap`:\r\n\r\n$$\\mathcal{L} = \\frac{1}{K} \\sum_{i=1}^{K} \\frac{L_i}{N_i}$$\r\n\r\nWhereas, for a 'true' loss, we should first add up all the losses over devices:\r\n\r\n$$L_{tot} = \\sum_{i=1}^{K} L_i $$\r\n\r\nand then divide by the total number of labels:\r\n\r\n$$\\mathcal{L}' = \\frac{L_{tot}}{N} = \\frac{1}{N}\\sum_{i=1}^{K} L_i $$\r\n\r\nwhere $N$ is the total number of labels:\r\n\r\n$$ N = \\sum_{i=1}^{K} N_i $$\r\n\r\nIf we compare the two and ignore the constant $K$ in the `pmap` average:\r\n\r\n$$\\mathcal{L} = \\sum_{i=1}^{K} \\frac{L_i}{N_i}$$\r\n\r\n$$ \\mathcal{L}' = \\frac{1}{N}\\sum_{i=1}^{K} L_i $$\r\n\r\nwe see that the losses are in-fact different. The first expression is what you get if you average the losses on each device, then average these terms over devices with a `pmap`. 
The second expression is a 'true' loss, what you get by summing the losses on each device, summing these losses over devices, and then dividing by the total number of terms in your batch (= sum of the `label_mask` per device, summing these terms over devices).\r\n\r\n</details>\r\n\r\nA PR to address this was merged here: https://github.com/huggingface/transformers/pull/19504\r\n\r\nHere, we compute the 'true' number of loss terms by summing the num labels on each device, and then normalising our loss by the sum of the labels over devices.\r\n\r\nYou should be able to rebase onto main to get these changes and compute a 'true' loss, both for summarisation and all the other Flax training examples 🤗\r\n\r\nClosing this PR as https://github.com/huggingface/transformers/pull/19504 (merged) addresses this issue. \r\n\r\nAll the best with your Flax experiments!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
In the Flax summarization fine-tuning example, the loss is only computed where the decoder_attention_mask is 1. Different batches on different devices will have different decoder_attention_masks, and `jax.lax.pmean(loss, axis_name="batch")` doesn't take this into account, so it is not equivalent to computing the loss on all batches put together on a single device. To fix this, this PR computes the number of tokens for which the loss was computed on each device, multiplies the per-device losses and gradients by these weights, then `lax.psum`s the losses, gradients and weights before dividing by the psummed weights.
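To see why this matters numerically, here is a small pure-Python sketch simulating two devices with lists (rather than an actual `pmap`/`psum`); the numbers are made up for illustration:

```python
# Simulate 2 devices; each holds per-token losses and a mask of valid tokens.
losses = [[1.0, 2.0, 3.0], [4.0, 0.0, 0.0]]
masks  = [[1, 1, 1],       [1, 0, 0]]

def masked_sum(vals, mask):
    return sum(v * m for v, m in zip(vals, mask))

# Naive: average of per-device normalized losses
# (what `pmean` over `loss.sum() / mask.sum()` computes)
per_device = [masked_sum(l, m) / sum(m) for l, m in zip(losses, masks)]
naive_loss = sum(per_device) / len(per_device)

# True loss: sum of all losses divided by the total number of valid tokens
true_loss = (sum(masked_sum(l, m) for l, m in zip(losses, masks))
             / sum(sum(m) for m in masks))

print(naive_loss)  # 3.0   ((6/3 + 4/1) / 2)
print(true_loss)   # 2.5   ((6 + 4) / 4)
```

The two disagree whenever devices carry different numbers of valid tokens, which is exactly the situation with varying decoder_attention_masks.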
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19494/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19494",
"html_url": "https://github.com/huggingface/transformers/pull/19494",
"diff_url": "https://github.com/huggingface/transformers/pull/19494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19494.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19493
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19493/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19493/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19493/events
|
https://github.com/huggingface/transformers/pull/19493
| 1,404,950,975
|
PR_kwDOCUB6oc5AllBh
| 19,493
|
Create ID3-Decision-Tree.py
|
{
"login": "ashutoshkosti1919",
"id": 97934166,
"node_id": "U_kgDOBdZbVg",
"avatar_url": "https://avatars.githubusercontent.com/u/97934166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashutoshkosti1919",
"html_url": "https://github.com/ashutoshkosti1919",
"followers_url": "https://api.github.com/users/ashutoshkosti1919/followers",
"following_url": "https://api.github.com/users/ashutoshkosti1919/following{/other_user}",
"gists_url": "https://api.github.com/users/ashutoshkosti1919/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashutoshkosti1919/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashutoshkosti1919/subscriptions",
"organizations_url": "https://api.github.com/users/ashutoshkosti1919/orgs",
"repos_url": "https://api.github.com/users/ashutoshkosti1919/repos",
"events_url": "https://api.github.com/users/ashutoshkosti1919/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashutoshkosti1919/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4720676470,
"node_id": "LA_kwDOCUB6oc8AAAABGV_Odg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/spam",
"name": "spam",
"color": "fbca04",
"default": false,
"description": "Hacktoberfest spam"
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19493). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
ID3 Algorithm. Machine Learning.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19493/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19493",
"html_url": "https://github.com/huggingface/transformers/pull/19493",
"diff_url": "https://github.com/huggingface/transformers/pull/19493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19493.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19492
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19492/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19492/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19492/events
|
https://github.com/huggingface/transformers/pull/19492
| 1,404,926,139
|
PR_kwDOCUB6oc5Alfxe
| 19,492
|
`python3` instead of `python` in Push CI setup job
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually, `python` is not found. I think we need to tweak a bit if we want to use `python` as `python3`. Do you want me to work on the docker image for this purpose?\r\n\r\n```bash\r\n/__w/_temp/5f33851e-ac7e-4126-b3d7-88092ae0b56d.sh: 2: python: not found\r\n```",
"No, don't worry. I was just complaining about the action env, not your PR :-) "
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
The `setup` job in push CI uses image `transformers-all-latest-gpu-push-ci`, which should use `python3` instead of `python`.
(I forgot this detail when working on #19054)
Currently, the setup job fails, and no tests are run.
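The fix amounts to invoking the interpreter by the name the image actually ships. A minimal sketch (assuming a Debian-style image that only provides `python3`; the alias line is illustrative, not what the workflow does):

```shell
# The push-CI image only ships `python3`, so setup steps must call it explicitly:
python3 --version

# Alternatively (illustrative only), the image's Dockerfile could add a
# `python` alias so either name resolves:
#   RUN ln -s "$(command -v python3)" /usr/local/bin/python
```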
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19492/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19492",
"html_url": "https://github.com/huggingface/transformers/pull/19492",
"diff_url": "https://github.com/huggingface/transformers/pull/19492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19492.patch",
"merged_at": 1665508720000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19491
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19491/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19491/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19491/events
|
https://github.com/huggingface/transformers/issues/19491
| 1,404,826,036
|
I_kwDOCUB6oc5Tu_G0
| 19,491
|
Dev build of TensorFlow causing issue with pre-trained BERT
|
{
"login": "BenWilson2",
"id": 39283302,
"node_id": "MDQ6VXNlcjM5MjgzMzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/39283302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenWilson2",
"html_url": "https://github.com/BenWilson2",
"followers_url": "https://api.github.com/users/BenWilson2/followers",
"following_url": "https://api.github.com/users/BenWilson2/following{/other_user}",
"gists_url": "https://api.github.com/users/BenWilson2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenWilson2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenWilson2/subscriptions",
"organizations_url": "https://api.github.com/users/BenWilson2/orgs",
"repos_url": "https://api.github.com/users/BenWilson2/repos",
"events_url": "https://api.github.com/users/BenWilson2/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenWilson2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Maybe of interest to @gante @Rocketknight1 \r\n\r\n",
"Hmm, this looks like they changed something about saving in H5 format. The bug report is appreciated, but since the API might be unstable, I think we probably won't change anything in `transformers` yet. However, if the bug still occurs in TF 2.11-rc0 then we definitely have a problem and will try to fix things before 2.11 final. Thank you!",
"> Hmm, this looks like they changed something about saving in H5 format. The bug report is appreciated, but since the API might be unstable, I think we probably won't change anything in `transformers` yet. However, if the bug still occurs in TF 2.11-rc0 then we definitely have a problem and will try to fix things before 2.11 final. Thank you!\r\n\r\nSounds great! I just wanted to give you a heads up and save you some debugging time for when the rc branch is cut :) ",
"cc @gante and @ydshieh - this might be nothing, but we should remember to do some testing once the RC arrives.",
"Interesting 🤔 BTW, the problematic import (`save_attributes_to_hdf5_group`) is only used to save shards",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I see some TF release candidates out - @BenWilson2 have you tried them and encountered the same issue?",
"@Rocketknight1 they've been consistently failing with our CI testing (we pull nightlies and main branches). \r\nUpdate on this:\r\n\r\nTF 2.11 released today with these breaking changes. \r\n\r\nHere's the stack trace we're getting on this release:\r\n\r\n\r\n```\r\nself = <module 'transformers' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__init__.py'>\r\nmodule_name = 'modeling_tf_utils'\r\n\r\n def _get_module(self, module_name: str):\r\n try:\r\n> return importlib.import_module(\".\" + module_name, self.__name__)\r\n\r\nmodule_name = 'modeling_tf_utils'\r\nself = <module 'transformers' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__init__.py'>\r\n\r\n/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1076: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nname = '.modeling_tf_utils', package = 'transformers'\r\n\r\n def import_module(name, package=None):\r\n \"\"\"Import a module.\r\n \r\n The 'package' argument is required when performing a relative import. 
It\r\n specifies the package to use as the anchor point from which to resolve the\r\n relative import to an absolute import.\r\n \r\n \"\"\"\r\n level = 0\r\n if name.startswith('.'):\r\n if not package:\r\n msg = (\"the 'package' argument is required to perform a relative \"\r\n \"import for {!r}\")\r\n raise TypeError(msg.format(name))\r\n for character in name:\r\n if character != '.':\r\n break\r\n level += 1\r\n> return _bootstrap._gcd_import(name[level:], package, level)\r\n\r\ncharacter = 'm'\r\nlevel = 1\r\nname = '.modeling_tf_utils'\r\npackage = 'transformers'\r\n\r\n/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/importlib/__init__.py:127: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nname = 'transformers.modeling_tf_utils', package = 'transformers', level = 1\r\n\r\n> ???\r\n\r\nlevel = 1\r\nname = 'transformers.modeling_tf_utils'\r\npackage = 'transformers'\r\n\r\n<frozen importlib._bootstrap>:1014: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nname = 'transformers.modeling_tf_utils'\r\nimport_ = <function _gcd_import at 0x7f8c4fb15430>\r\n\r\n> ???\r\n\r\nimport_ = <function _gcd_import at 0x7f8c4fb15430>\r\nmodule = <object object at 0x7f8c4faec060>\r\nname = 'transformers.modeling_tf_utils'\r\n\r\n<frozen importlib._bootstrap>:[991](https://github.com/mlflow/mlflow/actions/runs/3500477115/jobs/5863188221#step:9:992): \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nname = 'transformers.modeling_tf_utils'\r\nimport_ = <function _gcd_import at 0x7f8c4fb15430>\r\n\r\n> ???\r\n\r\nimport_ = <function _gcd_import at 0x7f8c4fb15430>\r\nname = 'transformers.modeling_tf_utils'\r\nparent = 'transformers'\r\nparent_module = <module 'transformers' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__init__.py'>\r\npath = 
['/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers']\r\nspec = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')\r\n\r\n<frozen importlib._bootstrap>:975: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nspec = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')\r\n\r\n> ???\r\n\r\nmodule = <module 'transformers.modeling_tf_utils' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'>\r\nspec = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')\r\n\r\n<frozen importlib._bootstrap>:671: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>\r\nmodule = <module 'transformers.modeling_tf_utils' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'>\r\n\r\n> ???\r\n\r\ncode = <code object <module> at 0x7f8b21782030, file \"/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py\", line 16>\r\nmodule = <module 'transformers.modeling_tf_utils' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'>\r\nself = <_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>\r\n\r\n<frozen importlib._bootstrap_external>:843: 
\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nf = <built-in function exec>\r\nargs = (<code object <module> at 0x7f8b21782030, file \"/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tra...d' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/backend.py'>, ...})\r\nkwds = {}\r\n\r\n> ???\r\n\r\nargs = (<code object <module> at 0x7f8b21782030, file \"/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tra...d' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/backend.py'>, ...})\r\nf = <built-in function exec>\r\nkwds = {}\r\n\r\n<frozen importlib._bootstrap>:219: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n \"\"\"TF general model utils.\"\"\"\r\n \r\n import functools\r\n import gc\r\n import inspect\r\n import json\r\n import os\r\n import pickle\r\n import re\r\n import warnings\r\n from collections.abc import Mapping\r\n from pathlib import Path\r\n from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union\r\n \r\n import h5py\r\n import numpy as np\r\n import tensorflow as tf\r\n from tensorflow.python.keras import backend as K\r\n from tensorflow.python.keras.engine import data_adapter\r\n from tensorflow.python.keras.engine.keras_tensor import KerasTensor\r\n from tensorflow.python.keras.saving import hdf5_format\r\n \r\n from huggingface_hub import Repository, list_repo_files\r\n> from keras.saving.hdf5_format import save_attributes_to_hdf5_group\r\nE ModuleNotFoundError: No module named 'keras.saving.hdf5_format'\r\n\r\nAny = typing.Any\r\nCallable = typing.Callable\r\nDict = typing.Dict\r\nK = <module 'tensorflow.python.keras.backend' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/backend.py'>\r\nKerasTensor = <class 
'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>\r\nList = typing.List\r\nMapping = <class 'collections.abc.Mapping'>\r\nOptional = typing.Optional\r\nPath = <class 'pathlib.Path'>\r\nRepository = <class 'huggingface_hub.repository.Repository'>\r\nTYPE_CHECKING = False\r\nUnion = typing.Union\r\n__builtins__ = <builtins>\r\n__cached__ = '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__pycache__/modeling_tf_utils.cpython-38.pyc'\r\n__doc__ = 'TF general model utils.'\r\n__file__ = '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'\r\n__loader__ = <_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>\r\n__name__ = 'transformers.modeling_tf_utils'\r\n__package__ = 'transformers'\r\n__spec__ = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')\r\ndata_adapter = <module 'tensorflow.python.keras.engine.data_adapter' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py'>\r\nfunctools = <module 'functools' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/functools.py'>\r\ngc = <module 'gc' (built-in)>\r\nh5py = <module 'h5py' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/h5py/__init__.py'>\r\nhdf5_format = <module 'tensorflow.python.keras.saving.hdf5_format' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py'>\r\ninspect = <module 'inspect' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/inspect.py'>\r\njson = <module 'json' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/json/__init__.py'>\r\nlist_repo_files = <bound method HfApi.list_repo_files of <huggingface_hub.hf_api.HfApi object 
at 0x7f8b30547730>>\r\nnp = <module 'numpy' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/numpy/__init__.py'>\r\nos = <module 'os' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/os.py'>\r\npickle = <module 'six.moves.cPickle' (<six._SixMetaPathImporter object at 0x7f8c4d2066d0>)>\r\nre = <module 're' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/re.py'>\r\ntf = <module 'tensorflow' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/__init__.py'>\r\nwarnings = <module 'warnings' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/warnings.py'>\r\n\r\n/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:39: ModuleNotFoundError\r\n```",
"Looks like you're all set already to support and just need an import version check now that these changes are in the official release :) \r\n\r\nAlthough it looks like the team has some additional serious breaking changes on the horizon: https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0 ",
"Hi @BenWilson2 👋 \r\n\r\nWe have made recent changes related to the imports from Keras (https://github.com/huggingface/transformers/pull/20317) -- it should also solve this issue, correct? ",
"Those changes look great and should completely fix the breaking issues that 2.11 introduced. Thank you for the very fast response to address this! \r\nIs there a scheduled release for 4.24.1 coming up soon?",
"No, we won't make a patch release for this: it's not a regression from us, but breaking changes from TensorFlow. The next release of Transformers will be next week, probably on December 1st :-)",
"Awesome! (I wouldn't have expected a patch release for this; I was just curious, based on the history of releases this year, if you had another micro release queued anyway). Thank you for the timeline for the next minor release. We'll be sure to unblock users and unpin the version right after your next minor release. \r\nThanks again! :) "
] | 1,665
| 1,669
| 1,669
|
NONE
| null |
### System Info
Python version: 3.7
TF branch: dev
(this is part of our nightly CI checks for MLflow to test dev builds; sorry for not executing `transformers-cli env` for this report)
installed packages:
absl-py-1.2.0
astunparse-1.6.3
cachetools-5.2.0
flatbuffers-22.9.24
gast-0.4.0
google-auth-2.12.0
google-auth-oauthlib-0.4.6
google-pasta-0.2.0
grpcio-1.49.1
h5py-3.7.0
keras-nightly-2.11.0.dev2022101007
libclang-14.0.6
markdown-3.4.1
opt-einsum-3.3.0
protobuf-3.19.6
pyasn1-0.4.8
pyasn1-modules-0.2.8
rsa-4.9
tb-nightly-2.11.0a20221010
tensorboard-data-server-0.6.1
tensorboard-plugin-wit-1.8.1
tensorflow-io-gcs-filesystem-0.27.0
termcolor-2.0.1
tf-estimator-nightly-2.11.0.dev2022101008
tf-nightly-2.11.0.dev20221010
wrapt-1.14.1
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
simply import `transformers.models.bert`
Issue line: https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/src/transformers/modeling_tf_utils.py#L39
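The breakage comes from importing a private Keras helper (`save_attributes_to_hdf5_group`) from a module path that moved between releases. One defensive pattern (a hedged sketch of the general technique, not the fix `transformers` shipped) is to probe candidate module paths and fall back gracefully:

```python
import importlib


def import_first(attr, candidates):
    """Return `attr` from the first candidate module that provides it.

    Useful when a private helper (here, hypothetically,
    `save_attributes_to_hdf5_group`) moves between library versions:
    list the known historical locations and take whichever imports.
    """
    for path in candidates:
        try:
            module = importlib.import_module(path)
        except ImportError:
            continue  # this location does not exist in the installed version
        if hasattr(module, attr):
            return getattr(module, attr)
    return None  # caller decides whether the feature is optional


# Example with stdlib modules standing in for the Keras paths:
join = import_first("join", ["no.such.module", "os.path"])
```

With `None` as the sentinel, code that only needs the helper for an optional feature (e.g. sharded checkpoint saving) can degrade instead of failing at import time.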
The stack trace:
``` shell
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
module_name = 'modeling_tf_bert'
def _get_module(self, module_name: str):
try:
> return importlib.import_module("." + module_name, self.__name__)
module_name = 'modeling_tf_bert'
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/utils/import_utils.py:1031:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = '.modeling_tf_bert', package = 'transformers.models.bert'
def import_module(name, package=None):
"""Import a module.
The 'package' argument is required when performing a relative import. It
specifies the package to use as the anchor point from which to resolve the
relative import to an absolute import.
"""
level = 0
if name.startswith('.'):
if not package:
msg = ("the 'package' argument is required to perform a relative "
"import for {!r}")
raise TypeError(msg.format(name))
for character in name:
if character != '.':
break
level += 1
> return _bootstrap._gcd_import(name[level:], package, level)
character = 'm'
level = 1
name = '.modeling_tf_bert'
package = 'transformers.models.bert'
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/importlib/__init__.py:127:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.models.bert.modeling_tf_bert'
package = 'transformers.models.bert', level = 1
> ???
level = 1
name = 'transformers.models.bert.modeling_tf_bert'
package = 'transformers.models.bert'
<frozen importlib._bootstrap>:1006:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.models.bert.modeling_tf_bert'
import_ = <function _gcd_import at 0x7f8299e66b00>
> ???
import_ = <function _gcd_import at 0x7f8299e66b00>
module = <object object at 0x7f8299e4e060>
name = 'transformers.models.bert.modeling_tf_bert'
<frozen importlib._bootstrap>:983:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.models.bert.modeling_tf_bert'
import_ = <function _gcd_import at 0x7f8299e66b00>
> ???
import_ = <function _gcd_import at 0x7f8299e66b00>
name = 'transformers.models.bert.modeling_tf_bert'
parent = 'transformers.models.bert'
parent_module = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
path = ['/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert']
spec = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
<frozen importlib._bootstrap>:967:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
spec = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
> ???
module = <module 'transformers.models.bert.modeling_tf_bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'>
spec = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
<frozen importlib._bootstrap>:677:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080ff6d0>
module = <module 'transformers.models.bert.modeling_tf_bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'>
> ???
code = <code object <module> at 0x7f82080b9ae0, file "/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py", line 16>
module = <module 'transformers.models.bert.modeling_tf_bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'>
self = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080ff6d0>
<frozen importlib._bootstrap_external>:728:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
f = <built-in function exec>
args = (<code object <module> at 0x7f82080b9ae0, file "/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tra...ngAndCrossAttentions': <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions'>, ...})
kwds = {}
> ???
args = (<code object <module> at 0x7f82080b9ae0, file "/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tra...ngAndCrossAttentions': <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions'>, ...})
f = <built-in function exec>
kwds = {}
<frozen importlib._bootstrap>:219:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
""" TF 2.0 BERT model."""
import math
import warnings
from dataclasses import dataclass
from typing import Dict, Optional, Tuple, Union
import numpy as np
import tensorflow as tf
from ...activations_tf import get_tf_activation
from ...modeling_tf_outputs import (
TFBaseModelOutputWithPastAndCrossAttentions,
TFBaseModelOutputWithPoolingAndCrossAttentions,
TFCausalLMOutputWithCrossAttentions,
TFMaskedLMOutput,
TFMultipleChoiceModelOutput,
TFNextSentencePredictorOutput,
TFQuestionAnsweringModelOutput,
TFSequenceClassifierOutput,
TFTokenClassifierOutput,
)
> from ...modeling_tf_utils import (
TFCausalLanguageModelingLoss,
TFMaskedLanguageModelingLoss,
TFModelInputType,
TFMultipleChoiceLoss,
TFNextSentencePredictionLoss,
TFPreTrainedModel,
TFQuestionAnsweringLoss,
TFSequenceClassificationLoss,
TFTokenClassificationLoss,
get_initializer,
keras_serializable,
unpack_inputs,
)
Dict = typing.Dict
Optional = typing.Optional
TFBaseModelOutputWithPastAndCrossAttentions = <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions'>
TFBaseModelOutputWithPoolingAndCrossAttentions = <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions'>
TFCausalLMOutputWithCrossAttentions = <class 'transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions'>
TFMaskedLMOutput = <class 'transformers.modeling_tf_outputs.TFMaskedLMOutput'>
TFMultipleChoiceModelOutput = <class 'transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput'>
TFNextSentencePredictorOutput = <class 'transformers.modeling_tf_outputs.TFNextSentencePredictorOutput'>
TFQuestionAnsweringModelOutput = <class 'transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput'>
TFSequenceClassifierOutput = <class 'transformers.modeling_tf_outputs.TFSequenceClassifierOutput'>
TFTokenClassifierOutput = <class 'transformers.modeling_tf_outputs.TFTokenClassifierOutput'>
Tuple = typing.Tuple
Union = typing.Union
__builtins__ = <builtins>
__cached__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__pycache__/modeling_tf_bert.cpython-37.pyc'
__doc__ = ' TF 2.0 BERT model.'
__file__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'
__loader__ = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080ff6d0>
__name__ = 'transformers.models.bert.modeling_tf_bert'
__package__ = 'transformers.models.bert'
__spec__ = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
dataclass = <function dataclass at 0x7f8289b9cdd0>
get_tf_activation = <function get_tf_activation at 0x7f82080be680>
math = <module 'math' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/lib-dynload/math.cpython-37m-x86_64-linux-gnu.so'>
np = <module 'numpy' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/numpy/__init__.py'>
tf = <module 'tensorflow' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/__init__.py'>
warnings = <module 'warnings' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/warnings.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
"""TF general model utils."""
import functools
import gc
import inspect
import json
import os
import pickle
import re
import warnings
from collections.abc import Mapping
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union
import h5py
import numpy as np
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.engine import data_adapter
from tensorflow.python.keras.engine.keras_tensor import KerasTensor
from tensorflow.python.keras.saving import hdf5_format
from huggingface_hub import Repository, list_repo_files
> from keras.saving.hdf5_format import save_attributes_to_hdf5_group
E ModuleNotFoundError: No module named 'keras.saving.hdf5_format'
Any = typing.Any
Callable = typing.Callable
Dict = typing.Dict
K = <module 'tensorflow.python.keras.backend' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/python/keras/backend.py'>
KerasTensor = <class 'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>
List = typing.List
Mapping = <class 'collections.abc.Mapping'>
Optional = typing.Optional
Path = <class 'pathlib.Path'>
Repository = <class 'huggingface_hub.repository.Repository'>
TYPE_CHECKING = False
Union = typing.Union
__builtins__ = <builtins>
__cached__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/__pycache__/modeling_tf_utils.cpython-37.pyc'
__doc__ = 'TF general model utils.'
__file__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/modeling_tf_utils.py'
__loader__ = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080b6890>
__name__ = 'transformers.modeling_tf_utils'
__package__ = 'transformers'
__spec__ = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f82080b6890>, origin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/modeling_tf_utils.py')
data_adapter = <module 'tensorflow.python.keras.engine.data_adapter' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py'>
functools = <module 'functools' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/functools.py'>
gc = <module 'gc' (built-in)>
h5py = <module 'h5py' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/h5py/__init__.py'>
hdf5_format = <module 'tensorflow.python.keras.saving.hdf5_format' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py'>
inspect = <module 'inspect' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/inspect.py'>
json = <module 'json' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/json/__init__.py'>
list_repo_files = <bound method HfApi.list_repo_files of <huggingface_hub.hf_api.HfApi object at 0x7f8231bb3310>>
np = <module 'numpy' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/numpy/__init__.py'>
os = <module 'os' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/os.py'>
pickle = <module 'pickle' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/pickle.py'>
re = <module 're' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/re.py'>
tf = <module 'tensorflow' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/__init__.py'>
warnings = <module 'warnings' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/warnings.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:39: ModuleNotFoundError
The above exception was the direct cause of the following exception:
@pytest.mark.skipif(
not (_is_importable("transformers") and keras_version >= Version("2.6.0")),
reason="This test requires transformers, which is no longer compatible with Keras < 2.6.0",
)
def test_pyfunc_serve_and_score_transformers():
> from transformers import BertConfig, TFBertModel # pylint: disable=import-error
tests/keras/test_keras_model_export.py:662:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<frozen importlib._bootstrap>:1032: in _handle_fromlist
???
fromlist = ('BertConfig', 'TFBertModel')
import_ = <built-in function __import__>
module = <module 'transformers' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/__init__.py'>
recursive = False
x = 'TFBertModel'
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/utils/import_utils.py:1022: in __getattr__
value = getattr(module, name)
module = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
name = 'TFBertModel'
self = <module 'transformers' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/__init__.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/utils/import_utils.py:1021: in __getattr__
module = self._get_module(self._class_to_module[name])
name = 'TFBertModel'
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
module_name = 'modeling_tf_bert'
def _get_module(self, module_name: str):
try:
return importlib.import_module("." + module_name, self.__name__)
except Exception as e:
raise RuntimeError(
f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
f" traceback):\n{e}"
> ) from e
E RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
E No module named 'keras.saving.hdf5_format'
```
### Expected behavior
Changes made to Keras namespace (the addition of a `legacy` mode for serialization / deserialization) in this commit: https://github.com/keras-team/keras/commit/c06aa015e900a2029b5b379f374e5d4dc615fcbf will likely require an update for pre-trained huggingface models.
We wanted to make you aware of this if you hadn't already known about it.
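One generic way to cope with a private module that moves between releases is a fall-through import guard. The sketch below is an illustration of the pattern only (not transformers' actual fix), demonstrated with stdlib module names so it runs anywhere; in real code the candidates would be the old and new locations of `hdf5_format`:

```python
import importlib


def import_first(*candidates):
    """Return the first importable module among candidates, or None.

    Illustrates a guard pattern for private modules (such as Keras'
    hdf5_format) that relocate between releases; each candidate is
    tried in order and ImportError falls through to the next one.
    """
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    return None


# Example with stdlib modules: the first candidate does not exist,
# so the second one is imported and returned.
mod = import_first("definitely_not_a_module_xyz", "json")
```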
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19491/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19490
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19490/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19490/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19490/events
|
https://github.com/huggingface/transformers/issues/19490
| 1,404,632,821
|
I_kwDOCUB6oc5TuP71
| 19,490
|
ASR pipeline does not work with openai/whisper on current master
|
{
"login": "niedakh",
"id": 291663,
"node_id": "MDQ6VXNlcjI5MTY2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/291663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/niedakh",
"html_url": "https://github.com/niedakh",
"followers_url": "https://api.github.com/users/niedakh/followers",
"following_url": "https://api.github.com/users/niedakh/following{/other_user}",
"gists_url": "https://api.github.com/users/niedakh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/niedakh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/niedakh/subscriptions",
"organizations_url": "https://api.github.com/users/niedakh/orgs",
"repos_url": "https://api.github.com/users/niedakh/repos",
"events_url": "https://api.github.com/users/niedakh/events{/privacy}",
"received_events_url": "https://api.github.com/users/niedakh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"I will have a look! But chunking is not supported yet with whisper. (Should take care of it next week) \nNormally a warning should pop instead of an error",
"@ArthurZucker are we sure `Whisper` can handle chunking ? \r\n\r\n> Whisper is not a CTC model meaning that chunking as shown in Nico's [blog](https://huggingface.co/blog/asr-chunking) does not work.\r\n\r\nfrom internal conversation.\r\n\r\nHappy to jump into a design call to discuss whether we can do it or not. \r\n\r\nNot being CTC means it's harder to handle the boundaries. Boundaries at silence are sort of OK, but unfortunately can never really a *complete* solution (because you can never be sure you're going to get a silence, and you MUST be able to handle chunking regardless).\r\nThis might be deemed acceptable in whisper btw, but when we checked for regular models, the regular silence detection was not good enough to be ran automatically (meaning you have to tune settings always to get decent silence results with most silence detectors)",
"Really sorry about my miss-communication. The chunking that will be supported is different from CTC. Let's organize a call to speak in more details about that 😉 \nThe goal would be to be able to specify a chunk length and stride length (if people want to customize it) but default Whisper has its own parameters. Let's talk more about that when we call 🤗",
"Would also be interested to hear more about how chunking will be supported for the Whisper ASR pipeline! 😁\r\n\r\nRelated to this: is there a way to avoid the transcription being cut off too early with a HF ASR pipeline? It seems the ASR pipeline will only transcribe the 1st section if we have a longer audio file with silence in between. ",
"Hi @CarloLepelaars \r\n\r\nIt's actually quite challenging to do chunking with whisper. The reason is that the suggested way by OpenAI needs to run the inference on the first 30s before being able to run inference on the next 30s starting at 30s - X. X depends on the output of the first run.\r\n\r\nThis violates an important property of a pipeline, which is that the generations shouldn't depend on each other (in order to enable batching).\r\n\r\nThat doesn't mean it's impossible, but the stiching back of actual predictions becomes hairy:\r\n - We have no control where the timestamps are created, and we can't force them to appear within the strides.\r\n - It also require an extremely custom `LogitsProcessor` to \"force\" timestamp tokens to appear. \r\n \r\n For the audio being cut off, would you rather have an error being thrown ? Maybe @ArthurZucker has better ideas what we should do when the audio is too long.",
"Thanks @Narsil ! \r\nFor long audio, we can just enable the chunking without `timestamp` prediction. Though the results won't be extremely good, I remember attempting this (ultra naive way, with no `stride` ) and it gave pretty decent outputs : \r\n\r\n```python \r\n\"\"\"\r\nJe mappelle Claude. Je decoupe plouf. Let's just try it again. Je mappelle Claude. Je te plie mlu. Huh. It's not quite what I'm saying. Really? Sounds exactly the same to me. It does? Really? Yeah. All right, let's just try it again. Really listen.\r\n Okay. Je mappelle Claude. Je te flou... flie. Oh, mon Dieu. Oh, de fouf. Je mappelle Claude. Je te call blue. No! Okay, maybe if we just break it down. Okay, let's just... let's try it one syllable at a time. Okay, so repeat after me. Je... Je... Ma... Ma... Pelle.\r\n Great! Okay, faster. Je m'mappelle. Je m'mappelle. Me poo poo! It's too hard. I can't teach you. What are you doing? I have to go before I put your head through a wall. Don't go! Don't go! I need you! My audition is tomorrow! Jableau Blanc! Mille lapis! Au Blanc! Pou!\r\n\"\"\"\r\n```\r\nFor this clip : https://www.youtube.com/watch?v=H3dToD7_ATU , which seem like apart from maj, is extremely good! \r\n",
"@ArthurZucker What about erroring out when the input audio is too long ?",
"I'd rather add a warning saying that the audio will be automatically cropped! WDYT? ",
"IMO error is better here.\r\n\r\n@Narsil if there is a chance to run Whisper's silence detection + chunking mechanism in a pipeline I think this would be very useful/impactful ",
"> I will have a look! But chunking is not supported yet with whisper. (Should take care of it next week)\r\n> Normally a warning should pop instead of an error\r\n\r\nHas this been implemented? Where can I check the upgrades for when it is functional? \r\n\r\n(I understand it is not an easy task, just wanted to make sure that I have the tools to find out about the implementation when it is available).",
"Hey, here is one of the PR: #20104",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Resolved in https://github.com/huggingface/transformers/pull/20104"
] | 1,665
| 1,670
| 1,670
|
NONE
| null |
### System Info
transformers @ git+https://github.com/huggingface/transformers.git@b651efe59ea506d38173e3a60a4228e7e74719f9
python 3.6
Standard AWS Ubuntu Deep Learning AMI (Ubuntu 18.04) Version 30.0
### Who can help?
@Narsil @anton-l @sanchit-gandhi @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce run the following code, from asr pipeline example and whisper:
```python
from datasets import load_dataset
from transformers import pipeline
pipe = pipeline(model="openai/whisper-large")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
output = pipe(ds[0]['file'], chunk_length_s=30, stride_length_s=(4, 2))
```
yields:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-efceed64cd5c> in <module>
----> 1 output = pipe(ds[0]['file'], chunk_length_s=30, stride_length_s=(4, 2))
~/venv38/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py in __call__(self, inputs, **kwargs)
181 `"".join(chunk["text"] for chunk in output["chunks"])`.
182 """
--> 183 return super().__call__(inputs, **kwargs)
184
185 def _sanitize_parameters(self, **kwargs):
~/venv38/lib/python3.8/site-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1073 else:
-> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
1075
1076 def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):
~/venv38/lib/python3.8/site-packages/transformers/pipelines/base.py in run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1093 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
1094 all_outputs = []
-> 1095 for model_inputs in self.preprocess(inputs, **preprocess_params):
1096 model_outputs = self.forward(model_inputs, **forward_params)
1097 all_outputs.append(model_outputs)
~/venv38/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py in preprocess(self, inputs, chunk_length_s, stride_length_s)
260 # Currently chunking is not possible at this level for `seq2seq` so
261 # it's ok.
--> 262 align_to = self.model.config.inputs_to_logits_ratio
263 chunk_len = int(round(chunk_length_s * self.feature_extractor.sampling_rate / align_to) * align_to)
264 stride_left = int(round(stride_length_s[0] * self.feature_extractor.sampling_rate / align_to) * align_to)
~/venv38/lib/python3.8/site-packages/transformers/configuration_utils.py in __getattribute__(self, key)
252 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
253 key = super().__getattribute__("attribute_map")[key]
--> 254 return super().__getattribute__(key)
255
256 def __init__(self, **kwargs):
AttributeError: 'WhisperConfig' object has no attribute 'inputs_to_logits_ratio'
```
### Expected behavior
I would've expected to obtain the transcript in `output`.
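For context, the `__getattribute__` hook at the bottom of the traceback implements a small attribute-aliasing pattern: reads of an aliased name are redirected through `attribute_map`, and a name that is neither set nor mapped (like `inputs_to_logits_ratio` on `WhisperConfig`) raises `AttributeError`. The sketch below reproduces that mechanism with a hypothetical `ToyWhisperConfig`; the class names and the mapping are made up for illustration:

```python
class ConfigSketch:
    """Minimal sketch of the attribute_map aliasing quoted in the
    traceback above: attribute reads are redirected through the map."""

    attribute_map = {}

    def __getattribute__(self, key):
        # Same logic as the quoted configuration_utils code: look the
        # key up in attribute_map and redirect to the mapped attribute.
        if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
            key = super().__getattribute__("attribute_map")[key]
        return super().__getattribute__(key)


class ToyWhisperConfig(ConfigSketch):
    # Hypothetical mapping for illustration only.
    attribute_map = {"hidden_size": "d_model"}

    def __init__(self):
        self.d_model = 512


cfg = ToyWhisperConfig()
# cfg.hidden_size is redirected to cfg.d_model; an unmapped, unset name
# such as inputs_to_logits_ratio still raises AttributeError.
```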
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19490/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19489
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19489/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19489/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19489/events
|
https://github.com/huggingface/transformers/pull/19489
| 1,404,618,102
|
PR_kwDOCUB6oc5Akdnm
| 19,489
|
Try replacing tf.int32 with tf.int64 across all tests
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19489). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Bumping this to keep it open - a great int dtype purge is still on my list, but I got sidetracked with some other high-priority stuff!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this because I think (I hope) that our dtype issues are mostly resolved by now"
] | 1,665
| 1,670
| 1,670
|
MEMBER
| null |
This is a draft PR where I just search-replaced `tf.int32` with `tf.int64` in our tests to check what breaks. I'll probably also need to cast our dummy inputs correctly to make this work, at least!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19489/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19489",
"html_url": "https://github.com/huggingface/transformers/pull/19489",
"diff_url": "https://github.com/huggingface/transformers/pull/19489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19489.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19488
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19488/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19488/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19488/events
|
https://github.com/huggingface/transformers/issues/19488
| 1,404,563,548
|
I_kwDOCUB6oc5Tt_Bc
| 19,488
|
PreTrainedTokenizerBase issue produced by PR #19073
|
{
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi everyone! Hope everything is going well with you.\r\n\r\nPlease let me know if I could be clear enough in describing the issue.\r\n\r\nThanks for your attention and best regards,\r\nGustavo.",
"Thank you for the issue @gugarosa!\r\n\r\nPinging @sgugger ",
"Thanks for the report. I understand the bug and your analysis seems correct for its cause. Will work on a fix as soon I have some free time (might be early next week only)!",
"Got time today actually, this should be fixed by the PR linked above!",
"Thanks so much for the prompt response @sgugger!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.4.0-125-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@saull
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Have a local `tokenizer.json` file (different than Hub's file and in the same folder as invoked code) and invoke the following code:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
print(tokenizer)
```
### Error
The exact error depends on the contents of the local `tokenizer.json`; it could, for example, state that the two tokenizers have a different number of tokens. Regardless, here is a trace of what seems to be the issue:
#### transformers.tokenization_utils_base (Line 1726)
```
for file_id, file_path in vocab_files.items():
if file_path is None:
resolved_vocab_files[file_id] = None
elif os.path.isfile(file_path):
resolved_vocab_files[file_id] = file_path
...
```
If we print the `vocab_files` dictionary, most of the time its output will be as expected:
```
{'vocab_file': 'vocab.json', 'merges_file': 'merges.txt', 'tokenizer_file': 'tokenizer.json', 'added_tokens_file': 'added_tokens.json', 'special_tokens_map_file': 'special_tokens_map.json', 'tokenizer_config_file': 'tokenizer_config.json'}
```
With the lines added in PR #19073, the code will at some point check whether `tokenizer.json` exists on the local filesystem, and if it does, it will use it as the `file_path` in the `resolved_vocab_files` dictionary. Unfortunately, this is not the expected behavior: since we are loading a pre-trained tokenizer from an identifier found on the Hub, the `file_path` should come from the Hub's download and not from a local file.
If we print the `resolved_vocab_files` dictionary with the added lines from PR #19073, this is its output:
```
{... 'tokenizer_file': 'tokenizer.json' ...}
```
Without the added lines:
```
{... 'tokenizer_file': '/home/gderosa/.cache/huggingface/hub/models--Salesforce--codegen-350M-mono/snapshots/40b7a3b6e99e73bdb497a14b740e7167b3413c74/tokenizer.json' ...}
```
My assumption is that this very same behavior should occur whenever users have any of the local files defined by the `vocab_files` dictionary in the folder from which they run their scripts.
### Solutions
Maybe the `cached_file` loading should happen prior to the added lines, and only if the cached version cannot be found should it resort to local files?
### Expected behavior
Expected behavior is to use the `tokenizer_file` from the `pretrained_model_name_or_path` instead of the local file.
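The resolution order proposed in the Solutions section above can be sketched as follows. This is an illustration only: `fetch_cached` is a hypothetical stand-in for transformers' `cached_file` helper, assumed to return a cached path or `None`:

```python
import os


def resolve_vocab_file(file_path, fetch_cached):
    """Sketch of the proposed resolution order: try the Hub cache
    first, and only fall back to a same-named local file when the
    cached version cannot be found.

    fetch_cached is a hypothetical stand-in for transformers'
    cached_file helper; it returns a resolved path or None.
    """
    cached = fetch_cached(file_path)
    if cached is not None:
        return cached  # Hub-cached file wins over a local duplicate
    if os.path.isfile(file_path):
        return file_path  # local fallback
    return None
```

With this ordering, a stray `tokenizer.json` in the working directory would no longer shadow the file downloaded for the Hub identifier.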
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19488/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19487
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19487/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19487/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19487/events
|
https://github.com/huggingface/transformers/issues/19487
| 1,404,367,540
|
I_kwDOCUB6oc5TtPK0
| 19,487
|
🔥[Community Event] Doc Tests Sprint - Configuration files🔥
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 4608548278,
"node_id": "LA_kwDOCUB6oc8AAAABErDdtg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/HACKTOBERFEST-ACCEPTED",
"name": "HACKTOBERFEST-ACCEPTED",
"color": "FF5733",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"I'd like to work on this; I'll start with YOLOS and open a PR :)",
"I'll take on Whisper!",
"I'll work on Beit!",
"FYI Bart #19524 and Albert #19541 are already done :) ",
"I'll take on GPT2 next!",
"I can take imageGPT ",
"i'll work on yoso\r\n",
"I'll work on\r\n- RoBERTa\r\n- ViT\r\n- DeiT\r\n- Reformer\r\n\r\nand open up PR soon :)",
"Also will raise for Transformer-XL !\r\n",
"I'll work on bloom",
"I'll work on ctrl",
"Hi @ydshieh, I am new here and want to contribute to this issue. Can you please help me to find the remaining files? Thanks,",
"> Hi @ydshieh, I am new here and want to contribute to this issue. Can you please help me to find the remaining files? Thanks,\r\n\r\nHi @SD-13 You can check the latest file \r\nhttps://github.com/IzicTemi/transformers/blob/main/utils/documentation_tests.txt\r\n\r\nAny configuration file that is not on that list (on `main` branch) **and [not claimed here yet, no PR opened yet]** are all welcome :-)\r\n",
"Hey @ydshieh, could I fix multiple models in a single PR, or do I have to open a single PR for each fix?",
"I will be taking blenderbots as well\r\n",
"> Hey @ydshieh, could I fix multiple models in a single PR, or do I have to open a single PR for each fix?\r\n\r\n2 or 3 might be good. But don't take more in a single PR - the sprint is for everyone to contribute, so leave some to others :-)",
"> > Hi @ydshieh, I am new here and want to contribute to this issue. Can you please help me to find the remaining files? Thanks,\r\n> \r\n> Hi @SD-13 You can check the latest file https://github.com/IzicTemi/transformers/blob/main/utils/documentation_tests.txt\r\n> \r\n> Any configuration file that is not on that list (on `main` branch) **and [not claimed here yet, no PR opened yet]** are all welcome :-)\r\n\r\nThanks @ydshieh , that was helpful. I can take `vision_text_dual_encoder.py`",
"Hey @ydshieh, I am getting this error\r\n\r\ncan you please help me to understand the reason and fix it? Thanks,",
"@SD-13 Could you open a PR (even if it is not complete yet), and post the command you run and the complete error message in that PR? Using image is not very convivence to search and debug 🙏 ",
"> @SD-13 Could you open a PR (even if it is not complete yet), and post the command you run and the complete error message in that PR? Using image is not very convivence to search and debug pray\r\n\r\nYeah true. I created [this](https://github.com/huggingface/transformers/pull/19580) PR and let's leave those errors since all checks are passing. Thanks, ",
"I will also take `time_series_transformer.py`, `vision_encoder_decoder.py`, and `trajectory_transformer.py`. Thanks,",
"took up blenderbot_small too.\r\n",
"I'll take on\r\n- SEW\r\n- SEW-D\r\n- Swin\r\n- Swin V2\r\n- UniSpeech\r\n\r\nand will open up PR :)",
"Hi @ydshieh , I want to contribute to this issue. Can you please help me to find the remaining files?",
"> Hi @ydshieh , I want to contribute to this issue. Can you please help me to find the remaining files?\r\n\r\nHey @SaurabhBudhwani26 , please check [this](https://github.com/huggingface/transformers/issues/19487#issuecomment-1277518647). I hope it will be helpful. Thanks, ",
"I'll work on `configuration_visual_bert.py`",
"I will work on:\r\n- `big_bird`\r\n- `bigbird_pegasus`",
"I will work on\r\n- `configuration_xlm_roberta.py`\r\n- `configuration_xlm_roberta_xl.py`",
"Can I work on this?\r\n@ydshieh ",
"I will work on **`flava`**"
] | 1,665
| 1,696
| 1,696
|
COLLABORATOR
| null |
This sprint is similar to #16292 - but for model **configuration files**, i.e. `configuration_[model_name].py`.
For example, `src/transformers/models/bert/configuration_bert.py`
# The expected changes
The changes we expect can be found in #19485:
1. **Change the import order of the model and configuration classes**
2. **Add `(with random weights)` in the comment before model initialization line**
3. **Add `configuration_[model_name].py` to `utils/documentation_tests.txt`** (respecting the order)
Please do step 3 only after **running the doctests and making sure all tests pass** (see below) 🙏
# How to run doctests
Suppose you are working on `src/transformers/models/bert/configuration_bert.py`. The steps to run the test are:
0. **Stage your changes**
```bash
git add src/transformers/models/bert/configuration_bert.py
```
1. **Prepare the files to be tested**
```bash
python utils/prepare_for_doc_test.py src
```
or if you prefer to be more specific
```bash
python utils/prepare_for_doc_test.py src/transformers/models/bert/configuration_bert.py
```
This will change some files (doc-testing needs to add additional lines that we don't include in the doc source files).
2. **Launch the test:**
```bash
python -m pytest --doctest-modules src/transformers/models/bert/configuration_bert.py -sv --doctest-continue-on-failure
```
3. **Cleanup git status**
```bash
git checkout -- .
```
to clean up the changes in step 1.
# Ready (or not)?
If all tests pass, you can commit, push and open a PR 🔥 🚀 , otherwise iterate the above steps 💯 !
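The four steps above can be combined into one small helper script. This is a sketch: by default it only prints the commands (`DRY_RUN=1`) so the sequence can be inspected outside a transformers checkout; set `DRY_RUN=0` inside a checkout to actually execute them. The default target file is just an example:

```shell
#!/usr/bin/env bash
# Sketch: the doctest workflow above as one script.
# DRY_RUN=1 (the default) prints each command instead of running it.
TARGET="${1:-src/transformers/models/bert/configuration_bert.py}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

run git add "$TARGET"                                # 0. stage your changes
run python utils/prepare_for_doc_test.py "$TARGET"   # 1. prepare the file for doc-testing
run python -m pytest --doctest-modules "$TARGET" -sv --doctest-continue-on-failure  # 2. run the doctests
run git checkout -- .                                # 3. clean up the prepared files
```

Usage: `DRY_RUN=0 ./run_config_doctest.sh src/transformers/models/bert/configuration_bert.py` (the script name is arbitrary).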
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19487/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19486
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19486/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19486/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19486/events
|
https://github.com/huggingface/transformers/pull/19486
| 1,404,355,133
|
PR_kwDOCUB6oc5Ajkz7
| 19,486
|
🚨 🚨 🚨 Fix CvT parameter initialization
|
{
"login": "mathieujouffroy",
"id": 45208116,
"node_id": "MDQ6VXNlcjQ1MjA4MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/45208116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathieujouffroy",
"html_url": "https://github.com/mathieujouffroy",
"followers_url": "https://api.github.com/users/mathieujouffroy/followers",
"following_url": "https://api.github.com/users/mathieujouffroy/following{/other_user}",
"gists_url": "https://api.github.com/users/mathieujouffroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathieujouffroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathieujouffroy/subscriptions",
"organizations_url": "https://api.github.com/users/mathieujouffroy/orgs",
"repos_url": "https://api.github.com/users/mathieujouffroy/repos",
"events_url": "https://api.github.com/users/mathieujouffroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathieujouffroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR rectifies the difference in parameter initialization between the HF implementation and the original Microsoft implementation.
- Initializes torch dense layer weights with `trunc_normal_` instead of `normal_`.
- Initializes the `cls_token` with `trunc_normal_`.
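For illustration, truncated-normal initialization can be sketched in plain Python via rejection sampling (the PR itself uses PyTorch's `trunc_normal_`; the std and truncation bounds below are illustrative assumptions, not values taken from the PR):

```python
import random

def trunc_normal(mean=0.0, std=1.0, a=-2.0, b=2.0):
    """Sample a normal value, rejecting anything outside [a, b].

    This mirrors the effect of a truncated-normal initializer: the tails
    of the Gaussian are cut off, so no weight starts far from the mean.
    """
    while True:
        x = random.gauss(mean, std)
        if a <= x <= b:
            return x

# Initialize 1000 illustrative "weights" with std 0.02, truncated at ±2 std.
samples = [trunc_normal(std=0.02, a=-0.04, b=0.04) for _ in range(1000)]
```

Compared with a plain normal initializer, this guarantees no outlier weights at initialization, which is why the original CvT implementation prefers it.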
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
As [discussed](https://github.com/huggingface/transformers/pull/18597#issuecomment-1271354673) @amyeroberts here's the PR regarding the changes for the CvT pytorch model 😊
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19486/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19486",
"html_url": "https://github.com/huggingface/transformers/pull/19486",
"diff_url": "https://github.com/huggingface/transformers/pull/19486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19486.patch",
"merged_at": 1665595247000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19485
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19485/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19485/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19485/events
|
https://github.com/huggingface/transformers/pull/19485
| 1,404,321,277
|
PR_kwDOCUB6oc5AjdeO
| 19,485
|
[Doctest] Add `configuration_bert.py`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Add `configuration_bert.py` to `utils/documentation_tests.txt` for doctest.
This PR will be used as a template for a new sprint.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19485/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19485",
"html_url": "https://github.com/huggingface/transformers/pull/19485",
"diff_url": "https://github.com/huggingface/transformers/pull/19485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19485.patch",
"merged_at": 1665560648000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19484
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19484/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19484/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19484/events
|
https://github.com/huggingface/transformers/pull/19484
| 1,404,289,476
|
PR_kwDOCUB6oc5AjWqy
| 19,484
|
Update TF whisper doc tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fixes doctests for the TF whisper model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19484/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19484",
"html_url": "https://github.com/huggingface/transformers/pull/19484",
"diff_url": "https://github.com/huggingface/transformers/pull/19484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19484.patch",
"merged_at": 1665500732000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19483
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19483/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19483/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19483/events
|
https://github.com/huggingface/transformers/pull/19483
| 1,404,218,186
|
PR_kwDOCUB6oc5AjHht
| 19,483
|
Fix issue #19300
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger How do I get output_dir in on_train_end callback ? ",
"@sgugger Never mind, Let me fix the failing tests. ",
"Let me know if you need any help!",
"@sgugger The tests pass now. There was bug in my change and it is great that our tests caught it. ",
"Thanks again for all your work on this!"
] | 1,665
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes issue #19300.
<!-- Remove if not applicable -->
Fixes #19300
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19483/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19483",
"html_url": "https://github.com/huggingface/transformers/pull/19483",
"diff_url": "https://github.com/huggingface/transformers/pull/19483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19483.patch",
"merged_at": 1666187738000
}
|