Dataset columns (types and ranges from the dataset viewer header):

| Column | Type | Length / range |
|---|---|---|
| url | string | 62–66 chars |
| repository_url | string | 1 class |
| labels_url | string | 76–80 chars |
| comments_url | string | 71–75 chars |
| events_url | string | 69–73 chars |
| html_url | string | 50–56 chars |
| id | int64 | 377M–2.15B |
| node_id | string | 18–32 chars |
| number | int64 | 1–29.2k |
| title | string | 1–487 chars |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k, nullable |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | 0–234k chars, nullable |
| reactions | dict | |
| timeline_url | string | 71–75 chars |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/18675
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18675/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18675/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18675/events
|
https://github.com/huggingface/transformers/pull/18675
| 1,342,652,472
|
PR_kwDOCUB6oc49XnaG
| 18,675
|
Add hallucination filter
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @KMFODA are you still planning to work on this? We can reopen the PR :)",
"Hey @gante yes I still plan to work on this. My apologies I had fallen ill this past month and couldn't spend time on this. If you re-open the PR I will prioritise working on this over the next few weeks.",
"@KMFODA absolutely no rush, take your time -- personal health always takes precedence! I hope you're feeling better π€ ",
"(@KMFODA let us know when you'd like a new review)",
"Thanks @gante. Just managed to get all the tests to pass so a review now would be much appreciated.",
"Hi @gante, I added a `test_encoder_repetition_penalty_dist_process` to cover the 1st type of test. The 2nd test you've linked seems to be more focused on beam searches and stopping criteria. What type of test did you have in mind here for the encoder_repetition_penalty? Would ensuring it's initialised by adding it to this [test](https://github.com/huggingface/transformers/blob/0e83c9664b2a440ade59066a77fb01d0143e4d18/tests/generation/test_generation_utils.py#L101) cover this?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18675). All of your documentation changes will be reflected on that endpoint.",
"Done, thanks for all the helpful comments to get this merged and apologies it took so long.",
"@KMFODA did this processor help with your use case? :)",
"Thanks for asking @gante! It worked yes, although I had to use a very small penalty to eventually remove the hallucination. In a call with Karim and Joao (changing the names to protect the real data) the model I'm using generates the following action:\r\n\r\n`Tom will give Joao his email address.`\r\n\r\nWhere Tom is a hallucination and an individual not in this call. After applying a penalty of 0.001 I get the following output:\r\n\r\n`Karim will send Joao an email today.`",
"Hey @gante, let me know if anything else is needed to get this merged. Using this in the inference pipeline / API would be of really helpful.",
"Thanks @ArthurZucker. Added the doc strings in the 3 different files you mentioned. I've only got one test failing which I can't recreate locally:\r\n\r\n`tests/pipelines/test_pipelines_zero_shot.py::ZeroShotClassificationPipelineTests::test_small_model_tf`\r\n\r\nIt seems like the outputs changed in the `zero-shot-classification` pipeline although I'm not sure why. Are you able to point me towards what might be causing this to fail?",
"Managed to fix the failing test by rebasing to main. Hopefully should be good to merge now but if not let me know!",
"Cool let's just ask for a final review from @sgugger ! π€ ",
"Hey @KMFODA -- the big change we just merged clashed with your PR, as @sgugger mentioned above.\r\n\r\nIn a nutshell, new generation parameters should go in `GenerationConfig` ([here](https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/src/transformers/generation/configuration_utils.py#L38)), and generate always uses a generation config (related [docs](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)). We need to change this PR to make it consistent with the new changes :D\r\n\r\nI understand this PR has been a long process, so it's okay if you don't want to make further changes. Just let me know if you are no longer interested in working on it :)",
"Hey @gante thanks for getting back to me. Not a problem, I'd still like to work on this as it'll help me learn about the new generation engine. I'll get working on this after the holidays.",
"Hey @gante, moved encoder_repetition_penalty to `GenerationConfig` and fixed all failing tests. Let me know if more is needed to merge this PR.",
"Thanks @gante helpful changes. Just implemented and fixed failing tests.",
"@KMFODA awesome, thanks π \r\n\r\n@sgugger this PR should be ready to go in, feel free to merge if you are also happy with it :)",
"Hopefully should be good to merge now. If not let me know.",
"Thanks for your contribution!"
] | 1,660
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
As per the discussion in https://github.com/huggingface/transformers/issues/18354, this PR adds a `HallucinationPenaltyLogitsProcessor` that takes a `hallucination_penalty` which is applied to any tokens that are not in the original input. This acts as a hallucination filter: the higher the penalty, the more likely the generated text is to contain only input tokens. For summarisation, this means a higher hallucination penalty yields a more extractive summary that is less likely to contain a hallucination.
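For illustration, a minimal sketch of how the final form of this parameter (renamed `encoder_repetition_penalty` later in this thread) can be used at generation time; the checkpoint, input text, and penalty value here are arbitrary choices:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("Karim will send Joao the quarterly report by email today.", return_tensors="pt")

# Values above 1.0 boost tokens that appear in the encoder input, making the
# summary more extractive and less likely to hallucinate new entities.
summary_ids = model.generate(**inputs, encoder_repetition_penalty=1.5, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```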
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante, @patrickvonplaten
Library:
- text generation: @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18675/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18675",
"html_url": "https://github.com/huggingface/transformers/pull/18675",
"diff_url": "https://github.com/huggingface/transformers/pull/18675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18675.patch",
"merged_at": 1674145226000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18674
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18674/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18674/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18674/events
|
https://github.com/huggingface/transformers/pull/18674
| 1,342,338,779
|
PR_kwDOCUB6oc49Wowy
| 18,674
|
Deberta MaskedLM Corrections
|
{
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18674). All of your documentation changes will be reflected on that endpoint.",
"Thanks for working on this, @nbroad1881! This is a good improvement, but it will unfortunately break all existing models that have a head named `cls`. I'm trying to see if there is a non-backward breaking approach that would enable loading the existing model head; it'll likely mean updating the weights in the repo rather than updating the code here.\r\n\r\nI wonder what would be the most breaking. It would be better to have a non breaking approach, but I'm not entirely sure we can get away with it.",
"> I wonder what would be the most breaking. It would be better to have a non breaking approach, but I'm not entirely sure we can get away with it.\r\n\r\nCould there be two versions and anytime AutoModelForMaskedLM gets called in the future, it defaults to the new implementation but also checks the config.json file or state dict to see if it uses the old implementation?\r\n\r\nScenarios\r\n1. AutoModelForMaskedLM.from_pretrained/from_config(canonical repo) --> use new implementation\r\n2. AutoModelForMaskedLM.from_pretrained/from_config(custom repo/local path) --> check config.json/state dict to decide if using new/old implementation\r\n\r\n\r\nOne other question: \r\nWhat should the `get_output_embeddings` function do? BERT's implementation makes it look like it just returns the linear layer (decoder) that maps output_embeddings to token logits. This layer is slightly different for deberta. Instead of `Linear(hidden_size, vocab_size)` it goes `Linear(hidden_size, hidden_size)` and then [there is another step where the output of that is multiplied by word embeddings.](https://github.com/huggingface/transformers/blob/0038a3caa5c7d0c5014704005dd67ab347451ddc/src/transformers/models/deberta/modeling_deberta.py#L1095-L1112)\r\n\r\n \r\n",
"@sgugger, do you have an opinion on this?",
"I am not sure I fully understand the problem here. It looks like the canonical repos have weights that mismatch our code. If this is the case, those weights should be updated to match the code in Transformers, not the opposite, to avoid breaking all other checkpoints.",
"It's not just a naming issue. The current code uses a different mechanism to make masked LM predictions.\r\n\r\n[Current way](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta/modeling_deberta.py#L1121-L1156): hidden_states * linear layer -> logits for each token\r\n[batch_size, sequence_length, hidden_size] * [hidden_size, vocab_size] -> [batch_size, sequence_length, vocab_size]\r\n\r\n[The way it is done in the official deberta repo](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py#L17-L38)\r\nhidden_states * linear layer * word embeddings.T -> logits for each token\r\n[batch_size, sequence_length, hidden_size] * [hidden_size, hidden_size] * [hidden_size, vocab_size] -> [batch_size, sequence_length, vocab_size]\r\n\r\nI skipped some operations that don't change the size of the tensors, but I think this proves my point.\r\n\r\nIf it is done the second way, then the fillmask pipeline will work (for deberta v1 and v2) from the canonical weights",
"Thanks for explaining @nbroad1881 I now understand the problem a little bit better. I don't think we can avoid having two classes for masked LM (for instance `OldDebertaForMaskedLM` and `NewDebertaForMaskedLM`) along with `DebertaForMaskedLM` to dispatch to the proper one depending on the config, to be able to maintain backward compatibility.\r\n\r\nIf you want to amend the PR to write a new class for masked LM for now with the proper changes (leaving the current masked LM class as is), I can then follow-up with the rest and write this in a fully backward-compatible manner.",
"@sgugger, that sounds good to me. Do you know what I should put for the `get_output_embeddings` and `set_output_embeddings` functions? ",
"It needs to be the weights/bias that have the vocab_size dim.",
"> It needs to be the weights/bias that have the vocab_size dim.\r\n\r\nThere are weights that are [hidden_size, hidden_size] and a bias that has [vocab_size] dimensions. Which one do I use?",
"Leave those two to None for now then. I'll add that in the followup PR.",
"@sgugger , I implemented both Old and New Deberta(V1/V2)ForMaskedLM and I'm wondering which should be used for AutoModelForMaskedLM. Since the other version doesn't have an associated Auto class, it will fail some tests",
"The classes `OldDebertaForMaskedLM` and `NewDebertaForMaskedLM` are not meant to be public. This is an internal artifact to maintain backward compatibility, the user will only use the `DebertaForMaskedLM` class and a config parameter will internally decide which of the classes should be used.\r\n\r\nFor this PR, you should just add the `NewDebertaForMaskedLM` without any change to the doc/auto classes and don't touch the current `DebertaForMaskedLM`.",
"Ah ok. Got it. Thanks",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@nbroad1881 Do you want me to fully take over on this?",
"@sgugger, I made the changes and then made the mistake of diving too deeply into checking whether the EMD is correctly implemented. I don't think it is, but I'll leave that for someone else or another time. Let me push the changes, and I'll ping you when I do. \r\n\r\nThanks for following up π€ ",
"On second thought, you should just take it over @sgugger. Let me know if you have questions",
"Ok, will have a look early next week!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"unstale, still planning to address this!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ArthurZucker has taken over this as part of his refactor of the DeBERTa model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
The current implementations of DebertaForMaskedLM and DebertaV2ForMaskedLM do not load all of the weights from the checkpoints. After consulting the [original repo](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/bert.py), I modified the MaskedLM classes to load the weights correctly and to work for the fill-mask task out of the box (for v1 and v2; v3 wasn't trained for that).
I didn't know what to implement for `get_output_embeddings` and `set_output_embeddings`.
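For reference, a shape-level sketch of the difference described in the comments above between the current head and the official DeBERTa head (the dimensions are made up for illustration):

```python
import torch

batch, seq, hidden, vocab = 2, 8, 16, 100
hidden_states = torch.randn(batch, seq, hidden)
word_embeddings = torch.randn(vocab, hidden)

# Current head: a single projection straight to the vocabulary.
to_vocab = torch.nn.Linear(hidden, vocab)
logits_current = to_vocab(hidden_states)  # [batch, seq, vocab]

# Official DeBERTa head: transform within the hidden size, then multiply by
# the transposed word embeddings to obtain vocabulary logits.
transform = torch.nn.Linear(hidden, hidden)
logits_official = transform(hidden_states) @ word_embeddings.T  # [batch, seq, vocab]
```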
## TODO:
- [ ] Implement `get_output_embeddings`
- [ ] Implement `set_output_embeddings`
- [ ] Implement `resize_token_embeddings`
Fixes:
https://github.com/huggingface/transformers/issues/15216
https://github.com/huggingface/transformers/issues/15673
https://github.com/huggingface/transformers/issues/16456
https://github.com/huggingface/transformers/issues/18659
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @sgugger
I'm sorry this took so long.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18674/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/18674/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18674",
"html_url": "https://github.com/huggingface/transformers/pull/18674",
"diff_url": "https://github.com/huggingface/transformers/pull/18674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18674.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18673
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18673/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18673/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18673/events
|
https://github.com/huggingface/transformers/pull/18673
| 1,342,288,800
|
PR_kwDOCUB6oc49WeRK
| 18,673
|
Allow empty reference summaries
|
{
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh, would you like to take a look at what is proposed here?",
"@JohnGiorgi I am not familiar with this dataset. But I am wondering if a model trained on this dataset is expected to learn to deal with empty document (this seems to strange) and/or learn to predict empty summary for some document?",
"@ydshieh They are just using the empty string as a placeholder. The reference summaries are held-out and not available publically. I imagine there could be other cases like this.\r\n\r\nI guess another option is to log a warning if the inputs are empty but still proceed.",
"Log a warning sounds good to me. But before doing this, I am wondering what would be the benefits to process those held-out (empty string) test examples. Those examples won't be useful, right?",
"You may still want to generate predictions for examples even if they donβt have reference summaries (as is the case for the test set of MS^2). So another option still is to make the check just look for empty _inputs_, and ignore (but log a warning) for empty _reference summaries_\r\n",
"@ydshieh Okay how's that? The only change now is that examples with empty reference summaries are still included, but a warning will be logged.",
"Thank you @JohnGiorgi, this LGTM overall!\r\n\r\nThe logging on each example with empty summary might be too spam. (Imagine the test dataset has 10K examples).\r\nWe can probably set a flag. If an empty summary is found -> and if flag is False -> warning -> set flag to True.\r\n\r\nLet's wait one of the core maintainers to give a final review.",
"> Thank you @JohnGiorgi, this LGTM overall!\r\n> \r\n> The logging on each example with empty summary might be too spam. (Imagine the test dataset has 10K examples). We can probably set a flag. If an empty summary is found -> and if flag is False -> warning -> set flag to True.\r\n> \r\n> Let's wait one of the core maintainers to give a final review.\r\n\r\nAh very good point! Is there an example somewhere else in the codebase of something like this? I'm happy to update my change following that",
"Hi! Something like `deprecation_warnings` in\r\n\r\nhttps://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/tokenization_utils_base.py#L1520\r\n\r\ncould work (of course with another name), but in the training script, I think a single Boolean variable is enough.",
"Hi @ydshieh, is there anything blocking this that I can address?",
"Hi @JohnGiorgi . Sorry for being late in the review. As you can see [in this failed CI job](https://app.circleci.com/pipelines/github/huggingface/transformers/46308/workflows/5bf0bbad-5a99-47da-aa71-d6db2ed0ae37/jobs/544078), there is some issue of variable scope.\r\n\r\nA quick solution could be\r\n\r\n ```python\r\n ...\r\n\r\n # A placeholder to determine whether we have already warned about empty summaries.\r\n empty_summary_warning = {\"warned\": False}\r\n\r\n def preprocess_function(examples):\r\n \r\n if not examples[summary_column][i] and not empty_summary_warning[\"warned\"]:\r\n ...\r\n empty_summary_warning[\"warned\"] = True\r\n ```\r\nLet me know your opinion. Once the CIs all pass, I will request a final review from my colleagues :-)\r\n\r\nThanks a lot, for the PR and for the patience π€ ",
"Ah, didn't look carefully at why the build was failing. Thanks, @ydshieh, your solution causes all the tests to pass!",
"This is too complicated for one of the example, which are just examples, not production-ready apps that should work in every case. Readability is more important than fixing this edge case IMO.",
"> not production-ready apps that should work in every case. Readability is more important than fixing this edge case IMO.\r\n\r\nFair point.\r\n\r\n@JohnGiorgi Still thank you a lot for the PR. I am sorry that I haven't thought in another angle as a repository maintainer, and let you spend quite some time working on several of my suggestions.\r\n\r\nYou can definitely tweak the example script to meet your own needs. I am closing the issue."
] | 1,660
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `run_summarization.py` script has a check to skip examples where either the `text_column` or `summary_column` is `None`. However, the way the check was written would catch any falsy values, like an empty string.
This caused all examples to be skipped for datasets where either the `text_column` or `summary_column` was the empty string (e.g. the test set of the [MS^2 dataset](https://huggingface.co/datasets/allenai/mslr2022)).
This PR just updates the check so it looks for `None` values explicitly.
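A paraphrased sketch of the change (not a verbatim quote of `run_summarization.py`; `text_column`, `summary_column`, and the loop index are as in the script, and the branch bodies are elided):

```python
# Before: a truthiness test, which also skips falsy-but-present values like "".
if examples[text_column][i] and examples[summary_column][i]:
    ...

# After: skip only examples where a field is actually missing.
if examples[text_column][i] is not None and examples[summary_column][i] is not None:
    ...
```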
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18673/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18673",
"html_url": "https://github.com/huggingface/transformers/pull/18673",
"diff_url": "https://github.com/huggingface/transformers/pull/18673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18673.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18672
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18672/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18672/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18672/events
|
https://github.com/huggingface/transformers/pull/18672
| 1,342,273,659
|
PR_kwDOCUB6oc49WbGd
| 18,672
|
[WIP] Inputs embeds for flax gpt neo
|
{
"login": "mattf1n",
"id": 13317807,
"node_id": "MDQ6VXNlcjEzMzE3ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/13317807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattf1n",
"html_url": "https://github.com/mattf1n",
"followers_url": "https://api.github.com/users/mattf1n/followers",
"following_url": "https://api.github.com/users/mattf1n/following{/other_user}",
"gists_url": "https://api.github.com/users/mattf1n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mattf1n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mattf1n/subscriptions",
"organizations_url": "https://api.github.com/users/mattf1n/orgs",
"repos_url": "https://api.github.com/users/mattf1n/repos",
"events_url": "https://api.github.com/users/mattf1n/events{/privacy}",
"received_events_url": "https://api.github.com/users/mattf1n/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18672). All of your documentation changes will be reflected on that endpoint.",
"Hey @mattf1n - let me know if you're still interested in completing this PR and I'd be happy to help with any questions/queries!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,666
| 1,666
|
NONE
| null |
# What does this PR do?
Adds the option to pass `inputs_embeds` to FlaxGPTNeoForCausalLM.
This is already an option for the PyTorch version of the model, `GPTNeoForCausalLM`.
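For reference, a minimal sketch of the existing PyTorch behaviour this PR mirrors (the checkpoint choice is arbitrary):

```python
from transformers import AutoTokenizer, GPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

input_ids = tokenizer("Hello world", return_tensors="pt").input_ids
# Look up the embeddings manually and pass them in place of token ids.
inputs_embeds = model.get_input_embeddings()(input_ids)
outputs = model(inputs_embeds=inputs_embeds)
print(outputs.logits.shape)  # (batch, seq_len, vocab_size)
```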
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18036
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18672/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18672",
"html_url": "https://github.com/huggingface/transformers/pull/18672",
"diff_url": "https://github.com/huggingface/transformers/pull/18672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18672.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18671
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18671/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18671/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18671/events
|
https://github.com/huggingface/transformers/pull/18671
| 1,342,248,370
|
PR_kwDOCUB6oc49WVuv
| 18,671
|
[bnb] Move documentation
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"yeah, I wasn't sure, probably you're right and then like perf_train_gpu_many.mdx says on top to first read perf_train_gpu_one.mdx - add the same to perf_infer_gpu_many.mdx?",
"Yep makes sense! I propose a small refactoring at 2018285dd60c34d2654b0c51e523d7b1e815b989 !\r\nLet me know if this works for you ",
"The proposed change would be difficult to maintain and 2 copies will get out of sync. Only one copy please - if you prefer the one gpu doc that's where it should be. the other one linking to it.",
"Proposed a change in 5f8a3aeb035e8b086d0f9045550000b6be0fb630 ! Let me know if this works for you ",
"Thanks a lot @stas00 for iterating on the changes πͺ "
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
- Move the bnb documentation to `perf_infer_gpu_many.mdx`; it was previously in `perf_train_gpu_one.mdx`, which is not relevant for the `bitsandbytes` integration since it supports inference only.
cc @stas00
I do have a question though: what about `perf_infer_gpu_one.mdx`? I think the `bnb` documentation could fit well in that file as well, since it supports single-GPU inference too.
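For context, a minimal sketch of the inference-only `bitsandbytes` path this documentation covers (requires a CUDA GPU with `bitsandbytes` and `accelerate` installed; the checkpoint is an arbitrary choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" places the weights across the available devices;
# load_in_8bit=True quantizes the linear layers with bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m", device_map="auto", load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0]))
```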
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18671/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18671",
"html_url": "https://github.com/huggingface/transformers/pull/18671",
"diff_url": "https://github.com/huggingface/transformers/pull/18671.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18671.patch",
"merged_at": 1660836889000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18670
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18670/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18670/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18670/events
|
https://github.com/huggingface/transformers/issues/18670
| 1,342,101,959
|
I_kwDOCUB6oc5P_tnH
| 18,670
|
TFClipModel fails to train because of None loss
|
{
"login": "taymills",
"id": 93292086,
"node_id": "U_kgDOBY-GNg",
"avatar_url": "https://avatars.githubusercontent.com/u/93292086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taymills",
"html_url": "https://github.com/taymills",
"followers_url": "https://api.github.com/users/taymills/followers",
"following_url": "https://api.github.com/users/taymills/following{/other_user}",
"gists_url": "https://api.github.com/users/taymills/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taymills/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taymills/subscriptions",
"organizations_url": "https://api.github.com/users/taymills/orgs",
"repos_url": "https://api.github.com/users/taymills/repos",
"events_url": "https://api.github.com/users/taymills/events{/privacy}",
"received_events_url": "https://api.github.com/users/taymills/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Also, I do not see a test case for the fit method anywhere in https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_tf_clip.py, which should probably be added",
"> Also, I do not see a test case for the fit method anywhere in [https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_tf_clip.py](https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_tf_clip.py?rgh-link-date=2022-08-17T19%3A45%3A04Z), which should probably be added\r\n\r\nHi! There is \r\nhttps://github.com/huggingface/transformers/blob/0ea53822f8bdc2c0c41b45bcd04fa6c031e5e700/tests/test_modeling_tf_common.py#L1406\r\n\r\ndefined in the parent class `TFModelTesterMixin`.",
"For the issue, could you check if there is `labels` key in `inputs`? cc @Rocketknight1 ",
"Hi @taymills, the issue is caused by the `TFClipModel` needing `return_loss=True` to be set to return a loss - you can see this in [the model docs](https://huggingface.co/docs/transformers/model_doc/clip#transformers.TFCLIPModel).\r\n\r\nI agree it's unintuitive that `fit()` does not set this to True by default, and I'm not sure why the issue is not detected in the tests - I'll investigate now and hopefully push a fix soon.",
"Update: The `test_keras_fit` test skips models that do not have a `hf_compute_loss` method. This check was added to skip the test on 'base' models like `TFBertModel` that do not have a specific output head or loss, because these models cannot be fit directly. However, `TFClipModel` does have a specific task and loss, but does not have `hf_compute_loss`.\r\n\r\nThe solution is to rewrite the check to correctly identify model types that can be fit, while still not running the test on base models that cannot. I'm working on that now.",
"@taymills your code sample runs correctly for me with the latest patch. Can you check and confirm it works for you too? To use the PR branch, run `pip install --upgrade git+https://github.com/huggingface/transformers.git@return_loss_fix`",
"Fantastic thanks for the quick response. I will give it a whirl. Also @Rocketknight1 I agree that it is non-intuitive given the docs as it is a reasonable assumption that defaulting to the \"loss you probably want\" implies that it actually would return said loss.",
"@taymills yes, that's part of this PR! When using the built-in loss, we now force `return_loss=True` for models where it is an argument. That should avoid this for CLIP and for other similar models in future.",
"Everything looks good! Really appreciate the quick turn around on that @Rocketknight1 . Kudos π !!!",
"@taymills No problem! Fixing the tests has exposed a few other issues though, which that PR will need to fix as well. Unfortunately, you're stuck in the PR branch for now, but I'll ping you and close this issue when it's merged to main!",
"@Rocketknight1 I am curious, when you fixed the test for the model fit for TFClipModel, did you start hitting a bunch of issues with the func `shape_list` in `tf_utils`? I am running into lots of bugs with that when trying to infer the shape.",
"@taymills \r\n\r\nMy colleague is currently off. The PR is not completed yet, there are still some failing tests to fix, but I don't know if it is related to the issue you encounter.\r\n\r\nCould you provide a short code snippet to show the issues?",
"Actually I have tried this with a couple of TFModels now and it appears that Transformers model fit in general does not work with tf.data.TFRecordDataset tensors. Seems it only works with EagerTensors unless I am missing something. Only way I have been able to get it to work is coercing to numpy iterator.\r\n\r\nThis is a bit off topic so feel free to ignore this as it turns out it is not germane to the current issue.\r\n\r\ne.g.\r\n\r\n`conftest.py`\r\n```python\r\nfrom typing import *\r\n\r\nimport io\r\nimport pickle\r\nimport random\r\n\r\nimport numpy as np\r\nimport pytest\r\nimport requests\r\nimport tensorflow as tf\r\nimport transformers\r\nfrom PIL import Image\r\nfrom product_ds_shared_utils.model_io.tensorflow.tfdata_helpers import (\r\n create_example_proto,\r\n)\r\n\r\n# This is an example image pulled from public dataset to use for model and data loarder smoke tests\r\n# TODO jmills: Might be better to store this on GCS or in repo\r\nEXAMPLE_IMAGE_URL: str = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\n# This is the schema for inputs to clip\r\n\r\n# This is the size of the test dataset. One example image and example text is repeated this number of times\r\nDATASET_LENGTH: int = 5\r\n# This is the clip pretrained model name for any clip fixtures\r\nCLIP_PRETRAINED_NAME = \"openai/clip-vit-base-patch32\"\r\n# This is the clip pretrained model name for any clip fixtures\r\nBERT_PRETRAINED_NAME = \"distilbert-base-uncased\"\r\n\r\n\r\n@pytest.fixture(scope=\"session\")\r\ndef example_image_raw() -> bytes:\r\n image = requests.get(EXAMPLE_IMAGE_URL, stream=True).content\r\n return image\r\n\r\n\r\n@pytest.fixture(scope=\"session\")\r\ndef example_text() -> str:\r\n return \"This is an image of a cat\"\r\n\r\n\r\n@pytest.fixture(scope=\"session\")\r\ndef example_dataset_clip_tfrecords(\r\n example_image_raw, example_text, tmpdir_factory\r\n) -> Tuple[str, str]:\r\n ds_path = str(tmpdir_factory.mktemp(\"test_clip_model\").join(\"dataset.tfrecords\"))\r\n desc_path = str(tmpdir_factory.mktemp(\"test_clip_model\").join(\"feature_desc.pkl\"))\r\n example_image = Image.open(io.BytesIO(example_image_raw))\r\n processor = transformers.AutoProcessor.from_pretrained(CLIP_PRETRAINED_NAME)\r\n\r\n feature_dict_processed = dict(\r\n processor(\r\n images=example_image, text=example_text, return_tensors=\"tf\", padding=True\r\n )\r\n )\r\n\r\n with tf.io.TFRecordWriter(ds_path) as file_writer:\r\n for i in range(0, DATASET_LENGTH, 1):\r\n feature_dict_processed[\"labels\"] = tf.constant([random.choice([0, 1])])\r\n example_proto, feature_description = create_example_proto(\r\n feature_dict_processed\r\n )\r\n file_writer.write(example_proto.SerializeToString())\r\n with open(desc_path, \"wb\") as f:\r\n pickle.dump(feature_description, f)\r\n\r\n return ds_path, desc_path\r\n```\r\n\r\ntest_train.py - note `trainer.model` is a TFClipModel.from_pretrained\r\n```python\r\n\"\"\"\r\nTest trainer class using clip pretrained model\r\n\"\"\"\r\nfrom typing import *\r\n\r\nimport pathlib\r\nimport pickle\r\n\r\nimport pytest\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\nfrom product_embeddings.trainer import EmbeddingTrainer\r\n\r\nMODEL_NAME: str = \"openai/clip-vit-base-patch32\"\r\n\r\n\r\n@pytest.fixture(scope=\"function\")\r\ndef parsed_dataset_tfrecords(example_dataset_clip_tfrecords):\r\n dataset_path, schema_path = example_dataset_clip_tfrecords\r\n with open(schema_path, \"rb\") as f:\r\n schema = pickle.load(f)\r\n\r\n def decode_fn(record_bytes: 
bytes) -> Tuple[Dict[str, tf.Tensor], tf.Tensor]:\r\n parsed_example = tf.io.parse_single_example(record_bytes, schema)\r\n parsed_example[\"input_ids\"] = tf.io.parse_tensor(\r\n parsed_example[\"input_ids\"], tf.int32\r\n )\r\n parsed_example[\"pixel_values\"] = tf.io.parse_tensor(\r\n parsed_example[\"pixel_values\"], tf.float32\r\n )\r\n parsed_example[\"attention_mask\"] = tf.io.parse_tensor(\r\n parsed_example[\"attention_mask\"], tf.int32\r\n )\r\n parsed_example[\"labels\"] = tf.io.parse_tensor(\r\n parsed_example[\"labels\"], tf.int32\r\n )\r\n labels = parsed_example.pop(\"labels\")\r\n return parsed_example, labels\r\n\r\n dataset = tf.data.TFRecordDataset([str(dataset_path)]).map(\r\n decode_fn, num_parallel_calls=tf.data.AUTOTUNE\r\n )\r\n\r\n return dataset\r\n\r\n\r\ndef test_dataset_format(parsed_dataset_tfrecords):\r\n for features, label in parsed_dataset_tfrecords:\r\n assert list(features.keys()) == [\"attention_mask\", \"input_ids\", \"pixel_values\"]\r\n for key, val in features.items():\r\n assert isinstance(\r\n val, tf.Tensor\r\n ), f\"Feature {key} is not a tensor but got {val}\"\r\n assert isinstance(label, tf.Tensor)\r\n assert features[\"input_ids\"].shape == (1, 9)\r\n assert features[\"pixel_values\"].shape == (1, 3, 224, 224)\r\n assert label.shape == (1)\r\n\r\n\r\ndef test_clip_model_trainer_tfrecords(tmpdir_factory, parsed_dataset_tfrecords):\r\n checkpoint_path = pathlib.Path(\r\n tmpdir_factory.mktemp(\"test_clip_model\").join(\"checkpoints\")\r\n )\r\n log_path = pathlib.Path(tmpdir_factory.mktemp(\"test_clip_model\").join(\"logs\"))\r\n\r\n trainer = EmbeddingTrainer(\r\n model_name=MODEL_NAME,\r\n checkpoint_dir=str(checkpoint_path),\r\n log_dir=str(log_path),\r\n )\r\n trainer.compile(loss=tfa.losses.contrastive_loss)\r\n\r\n trainer.model.run_train(\r\n training_dataset=parsed_dataset_tfrecords.as_numpy_iterator(),\r\n validation_dataset=parsed_dataset_tfrecords.as_numpy_iterator(),\r\n epochs=1,\r\n )\r\n```",
"@taymills That's quite an odd issue - it's definitely unrelated to this one, but if you haven't managed to figure it out, can you copy it into a new issue and tag me? It looks like your model is coming from some external library in the second example though (our models don't have a `run_train` method), so if that's the case we probably can't do much about the problem."
] | 1,660
| 1,662
| 1,662
|
NONE
| null |
### System Info
transformers version: 4.21.1
Platform: MacOS BigSur 11.6.7
Python version: 3.8.13
Huggingface_hub version: 0.8.1
Tensorflow version (GPU?): 2.7.3 (False)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: no
Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is the script run to attempt to fit the model to the example data. It is verbatim from the 4.21.1 docs with the addition of `model.fit`. The same error arose when working with my own project. The loss is always `None`, as are `y` and `y_pred`, somewhere in the logic of https://github.com/huggingface/transformers/blob/132402d752044301b37e54405832738b16f49df6/src/transformers/modeling_tf_utils.py#L1116.
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, TFCLIPModel
import tensorflow as tf
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=[image, image], return_tensors="tf", padding=True
)
outputs = model(**inputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001))
model.fit(dict(inputs))
```
Running this results in a `zero gradient` error because the gradients are all 0s, which I expect is caused by `y` and `y_pred` both being empty dicts.
### Expected behavior
Model.fit() on inputs from preprocessor completes a training step without error.
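Based on the fix discussed in the comments above, a sketch of the workaround (reusing `model` and `inputs` from the reproduction script; untested here) is to request the loss explicitly:

```python
# Explicitly ask the model to compute and return its built-in contrastive loss.
outputs = model(**inputs, return_loss=True)
print(outputs.loss)
```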
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18670/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18669
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18669/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18669/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18669/events
|
https://github.com/huggingface/transformers/pull/18669
| 1,342,016,363
|
PR_kwDOCUB6oc49Vj-4
| 18,669
|
[LongT5] Correct docs long t5
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
MEMBER
| null |
Corrects the LongT5 docs according to the discussion in https://github.com/huggingface/transformers/issues/18502.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18669/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18669",
"html_url": "https://github.com/huggingface/transformers/pull/18669",
"diff_url": "https://github.com/huggingface/transformers/pull/18669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18669.patch",
"merged_at": 1660809830000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18668
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18668/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18668/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18668/events
|
https://github.com/huggingface/transformers/pull/18668
| 1,341,928,492
|
PR_kwDOCUB6oc49VRLb
| 18,668
|
Warn on TPUs when the custom optimizer and model device are not the same
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2107554019,
"node_id": "MDU6TGFiZWwyMTA3NTU0MDE5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models",
"name": "Distributed Training / Models",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR raises an error when a user creates a custom optimizer on a TPU without first moving the model to the TPU device. Not doing so will cause issues such as the ones described in https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988 and https://github.com/huggingface/transformers/issues/18635. This check is performed similarly to the one in Accelerate.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/18635
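A minimal sketch of the ordering this check enforces (assuming `torch_xla` is installed; the checkpoint name is just an example):
```python
import torch
import torch_xla.core.xla_model as xm
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
device = xm.xla_device()
model.to(device)  # move the model to the XLA device *before* building the optimizer

# creating the optimizer first would capture references to the CPU parameters,
# which is the failure mode described above
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
```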
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18668/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18668",
"html_url": "https://github.com/huggingface/transformers/pull/18668",
"diff_url": "https://github.com/huggingface/transformers/pull/18668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18668.patch",
"merged_at": 1661949992000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18667
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18667/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18667/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18667/events
|
https://github.com/huggingface/transformers/pull/18667
| 1,341,861,019
|
PR_kwDOCUB6oc49VCqQ
| 18,667
|
Remove `_create_and_check_torch_fx_tracing` in specific test files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The common case test for TorchScript, and if I recall correctly there was an issue for those models on that aspect?\r\n\r\nYou suggest to add a flag called `torch_script_compatible`? If so, [that is what I suggested back then](https://github.com/huggingface/transformers/pull/17206#discussion_r875989932), pinging @sgugger here.\r\n\r\nAlso, I think that some of those models can actually be torchscripted with torch 1.12, but the issue was that we are (were?) testing in torch 1.11.",
"> The common case test for TorchScript, and if I recall correctly there was an issue for those models on that aspect?\r\n\r\nThere might be before. But as far as I can tell, the issue probably came from the input and label names preparation. As the tests pass after I remove their re-definitions from the specific model test files, I think it's fine and better to clean them up. (The only failure is from Hubert).\r\n\r\n> You suggest to add a flag called torch_script_compatible? \r\n\r\nThis is to allow torch trace test still run while skip the torch script test, as currently Hubert test will fail on torchscript.\r\nBut I would prefer to add this flag (if the idea is approved) in a separate PR (and where we can enable the test for Wav2Vec2 too, for example)\r\n\r\n> Also, I think that some of those models can actually be torchscripted with torch 1.12, but the issue was that we are (were?) testing in torch 1.11.\r\n\r\nWe can re-evaluate this, but again, let's not to do changes regarding this part in this PR.\r\n\r\nThis PR is merely to avoid overwriting `_create_and_check_torch_fx_tracing` unnecessary :-)",
"@michaelbenayoun If this PR is OK on your side, I am going to merge. Regarding the flag, let's see what we can do in a separate PR."
] | 1,660
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
Remove `_create_and_check_torch_fx_tracing` from specific model test files, as the common implementation can handle them correctly.
The only exception is the `Hubert` model, but we can also remove it there and set `fx_compatible` to `False` (just as for `Wav2Vec2`).
It might be better to add a `torch_script_compatible` flag to handle `Hubert` and related models.
**Motivation**: Make the change in #18547 available to all tests.
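A hypothetical sketch of how such a flag could be consumed in a model test class; `torch_script_compatible` is only the name proposed above, not an existing attribute in the common tests:
```python
import unittest


class HubertModelTest(unittest.TestCase):
    fx_compatible = True             # existing flag gating the torch.fx tracing tests
    torch_script_compatible = False  # hypothetical flag: skip only the TorchScript checks

    def test_torchscript(self):
        if not self.torch_script_compatible:
            self.skipTest("TorchScript is known to fail for this model")
        # the actual torch.jit trace/script assertions would run here
```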
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18667/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18667",
"html_url": "https://github.com/huggingface/transformers/pull/18667",
"diff_url": "https://github.com/huggingface/transformers/pull/18667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18667.patch",
"merged_at": 1662560530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18666
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18666/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18666/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18666/events
|
https://github.com/huggingface/transformers/pull/18666
| 1,341,780,268
|
PR_kwDOCUB6oc49UxQm
| 18,666
|
Add evaluate to examples requirements
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `evaluate` to the requirements of all the example scripts that need it
Fixes # (issue)
Closes https://github.com/huggingface/transformers/issues/18663
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18666/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18666",
"html_url": "https://github.com/huggingface/transformers/pull/18666",
"diff_url": "https://github.com/huggingface/transformers/pull/18666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18666.patch",
"merged_at": 1660834660000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18665
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18665/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18665/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18665/events
|
https://github.com/huggingface/transformers/issues/18665
| 1,341,687,403
|
I_kwDOCUB6oc5P-IZr
| 18,665
|
Unexpected keyword argument 'trust_remote_code' when using `table-question-answering` pipeline
|
{
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"It is not only for local model broken\r\n\r\n```python\r\nfrom transformers import pipeline\r\ntq = pipeline(\"table-question-answering\",model=\"microsoft/tapex-base-finetuned-wtq\")\r\n```\r\ncreates the same error \r\n",
"cc @NielsRogge @Narsil ",
"Hi @philschmid, I tried your code and raised the same error, but after I did some debugging, I found some info that may be useful to you.Here we take the code `pipeline(\"table-question-answering\",model=\"microsoft/tapex-base-finetuned-wtq\")`for example. \r\n\r\nUnder the hood, the pipeline for table question answering will infer the config type base on your model name which is `microsoft/tapex-base-finetuned-wtq` here. \r\nhttps://github.com/huggingface/transformers/blob/f0d496828d3da3bf1e3c8fbed394d7847e839fa6/src/transformers/pipelines/__init__.py#L574\r\nBut unfortunately, the config type of this model is `BartConfig`. To initialize the pipeline, we need to provide the model which can infer the type of config is `TapasConfig`, for example, model `google/tapas-base`. I think you can try to initialize the pipeline with the following code:\r\n\r\n`tq = pipeline(\"table-question-answering\", model=\"google/tapas-base\")`",
"@aRyBernAlTEglOTRO it works in `4.20.1` with `tapex` and the issue comes from `trust_remote_code`, which might be missing somewhere in the files.",
"Hi @philschmid, After I downgrade the transformers to 4.20.1, although I can run the code with `microsoft/tapex-base-finetuned-wtq`. but I will raise the info below:\r\n\r\n`The model 'BartForConditionalGeneration' is not supported for table-question-answering. Supported models are ['TapasForQuestionAnswering'].`\r\n\r\nTherefore, I think even if you can initialize the pipeline, the pipeline may work in the wrong way. I still think we should initialize the pipeline with the model which can infer the `tapasConfig`.",
"Hi @philschmid, if you insist to initialize the pipeline with `microsoft/tapex-base-finetuned-wtq`. I found some info that may be useful. When `AutoModelForTableQuestionAnswering` try to init the model, it will remove the `trust_remote_code` from `kwargs`. you can check the following code:\r\nhttps://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/models/auto/auto_factory.py#L420\r\nTherefore, If you want to initialize the pipeline with `microsoft/tapex-base-finetuned-wtq`, which will build a model `BartForConditionalGeneration` under the hood, thus you need to add some modifications to the `from_pretrained` method of `BartForConditionalGeneration`, which is code mentioned below:\r\nhttps://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/modeling_utils.py#L1606\r\nand you may already found that they add this update in version `4.22.0.dev0`\r\nhttps://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/modeling_utils.py#L1829\r\n\r\nAfter you upgrade the transformers to version `4.22.0.dev0`, everything should work fine, but you still get the info that:\r\n\r\n`The model 'BartForConditionalGeneration' is not supported for table-question-answering. Supported models are ['TapasForQuestionAnswering'].`\r\n",
"@philschmid I can't be able to reproduce on `@main` branch, so the issues seems to have been fixed. \r\n\r\nI confirm the error exists in `4.21.1` though, I am not sure how to backport things or workaround this.\r\n\r\n`trust_remote_code` was added recently @sgugger so he might know more.\r\n\r\nAs for the warning, the warnings is a bit outdated as the pipeline does support BartGeneration.\r\nAdded a PR to fix the warningS: https://github.com/huggingface/transformers/pull/18711",
"Hi @Narsil, thank you for your reply. I found the warning raised from the code below:\r\nhttps://github.com/huggingface/transformers/blob/0f257a87749e0a72bda260c6f319a45dae1e7c4d/src/transformers/pipelines/table_question_answering.py#L102\r\nwhich means we need to update the `OrderDict` in the following code to remove warning:\r\nhttps://github.com/huggingface/transformers/blob/0f257a87749e0a72bda260c6f319a45dae1e7c4d/src/transformers/models/auto/modeling_auto.py#L590\r\nI don't know except for the `BartForConditionalGeneration` model, what else model should be added to this `OrderDict`?",
"TAPEX wasn't added to the MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES because that mapping defines which models are supported by the `AutoModelForTableQuestionAnswering` class.\r\n\r\nHowever, the table QA pipeline does support both TAPAS and TAPEX, so we may want to suppress this warning.",
"@aRyBernAlTEglOTRO @NielsRogge The PR to fix the warnings is here. https://github.com/huggingface/transformers/pull/18711\r\n \r\nBasically any Seq2seq model should work (Well only the trained models will actually provide good results, but the pipeline WILL work.)\r\n\r\nIn general pipelines tries not to look at individual models, but only type for model (`ForXXX`)",
"The error was fixed in https://github.com/huggingface/transformers/pull/18428 @philschmid.\r\n\r\nI'll likely do a patch PR later today containing this fix (v4.21.2).",
"Closing as solved by https://github.com/huggingface/transformers/pull/18428"
] | 1,660
| 1,661
| 1,661
|
MEMBER
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. save the model and tokenizer locally
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wtq").save_pretrained("test")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-base-finetuned-wtq").save_pretrained("test")
```
2. create pipeline object
```python
from transformers import pipeline
tq = pipeline("table-question-answering",model="test")
```
3. receive error
```python
pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
655 task=task,
656 **hub_kwargs,
--> 657 **model_kwargs,
658 )
659
[/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
255
256 try:
--> 257 model = model_class.from_pretrained(model, **kwargs)
258 if hasattr(model, "eval"):
259 model = model.eval()
[/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2104
2105 with ContextManagers(init_contexts):
-> 2106 model = cls(config, *model_args, **model_kwargs)
2107
2108 if device_map == "auto":
TypeError: __init__() got an unexpected keyword argument 'trust_remote_code'
```
### Expected behavior
The model should load normally.
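One possible workaround until a patched release is out (an untested sketch): load the model and tokenizer yourself and pass the instances to `pipeline`, so that hub kwargs such as `trust_remote_code` are not forwarded to `from_pretrained` a second time:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model = AutoModelForSeq2SeqLM.from_pretrained("test")
tokenizer = AutoTokenizer.from_pretrained("test")
tq = pipeline("table-question-answering", model=model, tokenizer=tokenizer)
```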
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18665/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18664
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18664/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18664/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18664/events
|
https://github.com/huggingface/transformers/issues/18664
| 1,341,630,129
|
I_kwDOCUB6oc5P96ax
| 18,664
|
Cannot import pipelines from transformers
|
{
"login": "balachander1964",
"id": 68279037,
"node_id": "MDQ6VXNlcjY4Mjc5MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/68279037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balachander1964",
"html_url": "https://github.com/balachander1964",
"followers_url": "https://api.github.com/users/balachander1964/followers",
"following_url": "https://api.github.com/users/balachander1964/following{/other_user}",
"gists_url": "https://api.github.com/users/balachander1964/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balachander1964/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balachander1964/subscriptions",
"organizations_url": "https://api.github.com/users/balachander1964/orgs",
"repos_url": "https://api.github.com/users/balachander1964/repos",
"events_url": "https://api.github.com/users/balachander1964/events{/privacy}",
"received_events_url": "https://api.github.com/users/balachander1964/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @balachander1964 -- if you check the error log you shared, the error does not come from the `transformers` library. From a quick google search, the error seems to be from the environment set up (try googling `DLL load failed: The specified module could not be found.`)\r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
NONE
| null |
Hi, I use a Windows 10 Professional laptop for development, with Python 3.7.6 in a conda virtual env. I get the following run-time error when running my code (shown below, after the error details).
```
bertqa interactive window [PTVS 17.0.22089.1-17.0]
Type $help for a list of commands.
The interactive window has not yet started.
Running D:\Projects2017\bertqa\bertqa\aaa_scratch.py
Traceback (most recent call last):
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\utils\import_utils.py", line 905, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "d:\python\Anaconda3\envs\transformers_qa\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\pipelines\__init__.py", line 50, in <module>
from .image_classification import ImageClassificationPipeline
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\pipelines\image_classification.py", line 15, in <module>
from PIL import Image
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\PIL\Image.py", line 69, in <module>
from . import _imaging as core
ImportError: DLL load failed: The specified module could not be found.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Projects2017\bertqa\bertqa\aaa_scratch.py", line 5, in <module>
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipelines
File "<frozen importlib._bootstrap>", line 1032, in _handle_fromlist
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\utils\import_utils.py", line 893, in __getattr__
value = self._get_module(name)
File "d:\python\Anaconda3\envs\transformers_qa\lib\site-packages\transformers\utils\import_utils.py", line 910, in _get_module
) from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
DLL load failed: The specified module could not be found.
>>>
```
The following is the code I am trying to execute:
```python
# imports
import os, sys
import pandas as pd
import numpy as np
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

_text = 'I bought a Toyota Corolla last January. It works well.'
_questions = [
    'What did the person buy?',
    'What is working?',
    'When did he buy?',
]
_model_name = "deepset/tinyroberta-squad2"

def main():
    '''Main entry point'''
    nlp = pipeline('question-answering', model=_model_name, tokenizer=_model_name)
    qa_input = {}
    qa_input['question'] = _questions[0]
    qa_input['context'] = _text
    answer = nlp(qa_input)
    print(answer)

'''
Required for all python programs.
'''
if __name__ == '__main__':
    print('Starting the QA tool.')
    main()
    print('Done')
```
I would appreciate it if you could let me know how to overcome this issue. Awaiting your reply.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18664/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18663
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18663/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18663/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18663/events
|
https://github.com/huggingface/transformers/issues/18663
| 1,341,562,012
|
I_kwDOCUB6oc5P9pyc
| 18,663
|
No module named 'evaluate'
|
{
"login": "skye95git",
"id": 41561936,
"node_id": "MDQ6VXNlcjQxNTYxOTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/41561936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skye95git",
"html_url": "https://github.com/skye95git",
"followers_url": "https://api.github.com/users/skye95git/followers",
"following_url": "https://api.github.com/users/skye95git/following{/other_user}",
"gists_url": "https://api.github.com/users/skye95git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skye95git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skye95git/subscriptions",
"organizations_url": "https://api.github.com/users/skye95git/orgs",
"repos_url": "https://api.github.com/users/skye95git/repos",
"events_url": "https://api.github.com/users/skye95git/events{/privacy}",
"received_events_url": "https://api.github.com/users/skye95git/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi! You need to do `pip install evaluate`. #18666 will also add it to each of the `examples` internal `requirements.txt` file, so it will be installed when you do `pip install -r requirements.txt`",
"> Hi! You need to do `pip install evaluate`. #18666 will also add it to each of the `examples` internal `requirements.txt` file, so it will be installed when you do `pip install -r requirements.txt`\r\n\r\nThanks! It work."
] | 1,660
| 1,660
| 1,660
|
NONE
| null |
https://github.com/huggingface/transformers/blob/c99e984657b64dd8f19de74405bbf13763ab4f2b/examples/pytorch/language-modeling/run_mlm.py#L35
When I pre-train RoBERTa from scratch using the latest version of `run_mlm.py`, there is an error:
```
import evaluate
ModuleNotFoundError: No module named 'evaluate'
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18663/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18662
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18662/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18662/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18662/events
|
https://github.com/huggingface/transformers/issues/18662
| 1,341,532,037
|
I_kwDOCUB6oc5P9ieF
| 18,662
|
BartTokenizer add_tokens feature.
|
{
"login": "charitharaghavaraju",
"id": 37924225,
"node_id": "MDQ6VXNlcjM3OTI0MjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/37924225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charitharaghavaraju",
"html_url": "https://github.com/charitharaghavaraju",
"followers_url": "https://api.github.com/users/charitharaghavaraju/followers",
"following_url": "https://api.github.com/users/charitharaghavaraju/following{/other_user}",
"gists_url": "https://api.github.com/users/charitharaghavaraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charitharaghavaraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charitharaghavaraju/subscriptions",
"organizations_url": "https://api.github.com/users/charitharaghavaraju/orgs",
"repos_url": "https://api.github.com/users/charitharaghavaraju/repos",
"events_url": "https://api.github.com/users/charitharaghavaraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/charitharaghavaraju/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for opening an issue @Charithavarma!\r\n\r\n@ydshieh, could you take a look here?",
"Could confirm the issue. Also occur for slow bart tokenizer. I can see the word is added to the tokenizers, but the output don't change.",
"```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer_slow = AutoTokenizer.from_pretrained(\"facebook/bart-base\", use_fast=False)\r\ntokenizer_fast = AutoTokenizer.from_pretrained(\"facebook/bart-base\", use_fast=True)\r\n\r\nprint(len(tokenizer_slow.get_vocab()))\r\nprint(len(tokenizer_fast.vocab))\r\n\r\nprint('Δ rumbling' in tokenizer_slow.get_vocab())\r\nprint('Δ rumbling' in tokenizer_fast.vocab)\r\n\r\ntokenizer_slow.add_tokens(['Δ rumbling'], special_tokens=True)\r\ntokenizer_fast.add_tokens(['Δ rumbling'], special_tokens=True)\r\n\r\nprint(len(tokenizer_slow.get_vocab()))\r\nprint(len(tokenizer_fast.vocab))\r\n\r\nprint('Δ rumbling' in tokenizer_slow.get_vocab())\r\nprint('Δ rumbling' in tokenizer_fast.vocab)\r\n\r\n\r\ntext = 'the rain falls down while someone is pounding a car passes by and the thunder is rumbling'\r\nseq = tokenizer_slow.tokenize(text)\r\nprint(seq)\r\n```\r\ngives\r\n```bash\r\n50265\r\n50265\r\nFalse\r\nFalse\r\n50266\r\n50266\r\nTrue\r\nTrue\r\n['the', 'Δ rain', 'Δ falls', 'Δ down', 'Δ while', 'Δ someone', 'Δ is', 'Δ pounding', 'Δ a', 'Δ car', 'Δ passes', 'Δ by', 'Δ and', 'Δ the', 'Δ thunder', 'Δ is', 'Δ r', 'umbling']\r\n```",
"Hi @Charithavarma,\r\n\r\nIn transformers, an added token will be a token that will be preserved before the tokenizer is applied. In this case, since in the initial sentence the spaces are `\" \"` and not `\"Δ \"`, the `'Δ rumbling'` token is not identified anywhere.\r\n\r\nTo achieve what you want to do I advise you to try to add the token `\" rumbling\"`.\r\n\r\nLet me know if it solves your issue! :hugs: ",
"Thank you, @SaulLu ! Is this documented somewhere (I believe so) π .",
"By searching a little I realize that the current documentation is not very explicit on this point. I propose to detail it a little in the PR https://github.com/huggingface/transformers/pull/18687 :relaxed: ",
"Hi @SaulLu,\r\n\r\nThank you. It solved my problem. But the performance of the BART in my model was reduced!\r\n\r\nIs it possible to use the manual tokenizer instead of this BART Tokenizer? Is it compatible?\r\n",
"@Charithavarma If you want to use the trained model `facebook/bart-base`, it's always good to use the corresponding tokenizer. If you change the tokenizer (for example, here you add a new token, where the tokenization of sentences may change too - for some examples), it is normal that the model performance is affected (as it never sees the word/token `rumbling` before).\r\n\r\nIf adding new tokens is really important in your task, you probably would consider finetuning the original model with this changed tokenizer."
] | 1,660
| 1,661
| 1,661
|
NONE
| null |
Hi @LysandreJik ,
I am working on audio captioning, and the ground-truth captions are tokenized using the BartTokenizer. I have observed that some of the words in the captions are not tokenized correctly; for instance, the word 'rumbling'. There is no such word in the tokenizer vocabulary, and it is tokenized as ['Δ r', 'umbling']. I have tried to add the token (the word 'Δ rumbling') and resize the model token embeddings, but instead of tokenizing the word correctly, it is still tokenized as ['Δ r', 'umbling']. Did I miss anything here? I have faced the same issue with some other words too!
Here is my code!
```
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base", use_fast=True)
tokenizer.add_tokens(['Δ rumbling'])
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.resize_token_embeddings(len(tokenizer))
print(tokenizer.is_fast)
ou_e = 'the rain falls down while someone is pounding a car passes by and the thunder is rumbling'
tok_e = tokenizer(ou_e, max_length=64, return_tensors='pt', padding='max_length')
seq = tokenizer.tokenize(ou_e)
print(seq)
summary_ids = model.generate(tok_e['input_ids'], num_beams=4, min_length=5, max_length=100)
summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(summary)
```
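A minimal sketch of the fix suggested in the comments above: add the token with a plain leading space rather than the byte-level `Δ ` marker, since added tokens are matched against the raw text before the byte-level pre-tokenizer runs:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base", use_fast=True)
tokenizer.add_tokens([" rumbling"])  # plain space, not "Δ "

print(tokenizer.tokenize("the thunder is rumbling"))
# " rumbling" should now survive as a single token
```
As in the original snippet, `model.resize_token_embeddings(len(tokenizer))` is still needed before running the model with the enlarged vocabulary.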
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18662/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18661
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18661/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18661/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18661/events
|
https://github.com/huggingface/transformers/issues/18661
| 1,341,453,262
|
I_kwDOCUB6oc5P9PPO
| 18,661
|
Refactor PyTorch `model.generate` method to work on TPU
|
{
"login": "mikcnt",
"id": 11929535,
"node_id": "MDQ6VXNlcjExOTI5NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11929535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikcnt",
"html_url": "https://github.com/mikcnt",
"followers_url": "https://api.github.com/users/mikcnt/followers",
"following_url": "https://api.github.com/users/mikcnt/following{/other_user}",
"gists_url": "https://api.github.com/users/mikcnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikcnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikcnt/subscriptions",
"organizations_url": "https://api.github.com/users/mikcnt/orgs",
"repos_url": "https://api.github.com/users/mikcnt/repos",
"events_url": "https://api.github.com/users/mikcnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikcnt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @patrickvonplaten",
"Hey @mikcnt, \r\n\r\nThis sounds like a very cool project and I think we should sooner or later focus on it. Currently I won't have the time to take a closer look here, but my advice would be:\r\n\r\n- I think you're totally right in that PyTorch/XLA often falls back on CPU which is why it is very slow. We're luckier here with Jax and TF because if things fall back on CPU the code fails\r\n- It'll take some time to get this fully working so we should start with the easiest example -> see what code changes are necessary to make PyTorch/XLA work with `greedy(...)`\r\n- To set expectations: PyTorch's generate method is one of Transformers most used functions - it's extremely important and we're trying very hard to keep the code readable, easy to understand. If making PyTorch XLA-compatible requires too many changes or makes the code too unreadable we might come to the conclusion that it's just not worth it and maybe just add it as a \"experimental\" additional function but not in \"main\" generate. Also @michaelbenayoun @mfuntowicz is that maybe something we want to have only in optimum maybe but not in Transformers? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, \r\n\r\nAny updates on this? When can we expect to generate a function to work on TPUs? Also, will it be part of transformers or optimum? as mentioned by @patrickvonplaten above?",
"I won't have time to look into this sadly anytime soon. @gante maybe? ",
"Added to my `generate` task queue π \r\n\r\n@divyanshuaggarwal it would be part of `transformers`!",
"Thanks @gante!",
"Hi, @gante just noticed it had been marked WIP, any ETAs on when can we expect this feature?",
"This is not a prioritized feature as you can already use TPUs for generation in Flax and TensorFlow. Since you can easily convert a model from one framework to the other, there is an easy workaround :-)",
"Is there any update on this PR?",
"@deveworld we are atm exploring PT-level optimizations, which include the static shapes needed for XLA (TPU). A significant upgrade in this direction is likely in the next releases (keep an eye there :) )",
"@gante folks from Meta were able to do llama inference on TPU using pytorch XLA. Might be helpful for this issue. \r\n\r\nhttps://pytorch.org/blog/path-achieve-low-inference-latency/?utm_content=254892693&utm_medium=social&utm_source=linkedin&hss_channel=lcp-78618366",
"Has there been any update on this? When is the next release likely to be released?",
"We have some code ready, which makes the generation loop friendly with compiled forward passes (e.g. with `torch.compile`). Pretty much the same algorithm we use with TF/FLAX + XLA. \r\n\r\nHowever, there are performance regressions on some devices, and the PyTorch team is having a look. We will include these changes when the performance bump is consistent across devices. \r\n\r\nMeanwhile, feel free to adapt code from this [repo/PR](https://github.com/fxmarty/accelerated-pytorch-transformers-generation/pull/10).",
"I see. Will this work on TPU then / are TPUs one of the device that are experiencing performance regressions?\r\n\r\nI also looked into the Optimum Neuron greedy decode implementation. While it no longer requires moving computations to CPU, running inference on TPU with it seems significantly slower than on GPU.",
"@verityw I can't confirm. We are aiming at having models that are fully compatible and efficient to use with `torch.compile()`, there may be additional issues when selecting the XLA backend :)",
"Any update on this? I'm trying to work with `trl` and `peft` on a TPU slice (to run tests on yet another [HF-aspiring lib](https://github.com/paulbricman/autocurricula/)), but these newer parts of the ecosystem seem to currently only support torch, which is not supported in an XLA-friendly way in the underlying `transformers`.\r\n\r\nI looked into it a bit and it seems that both mostly wrap the `transformers` `generate()`, so maybe an XLA-friendly version of that would help throughout? I also expect to encounter other issues of XLA-awkwardness in the backward step of `trl`, but I don't have a good intuition of that. Would love any pointers to learn about what it takes to make them XLA-friendly and how far the stack is from that.",
"Not far from seeing the light, actually!\r\n\r\nOur current major endeavor in `generate` is possibility of using different types of caches. By default, caches grow with the input length, but XLA needs a fixed-size cache -- we will be adding it as part of this task. In turn, this should make the forward pass of most models XLA-compatible (or close to it).",
"Any updates on this @gante ?",
"Yes: https://github.com/huggingface/transformers/pull/27931 (it is a pre requisite :) )"
] | 1,660
| 1,705
| null |
CONTRIBUTOR
| null |
### Feature request
Refactor the PT version of the `model.generate` method for text-generating models to make it compatible with XLA and speed up inference on TPU.
### Motivation
Right now, `model.generate` on PT is extremely slow on TPU compared to CPU and GPU. This is likely because some operations in the PT version of `model.generate` are not XLA-compatible, so the generation process falls back on CPU, making inference on TPU infeasible. A major refactoring effort has already been done on its TF counterpart, so it would be nice to have the PT version working as well.
A more in-depth discussion with @gante took place in #12322 and on this [huggingface discussion](https://huggingface.co/spaces/joaogante/tf_xla_generate_benchmarks/discussions/1).
### Your contribution
If there is some interest from the HF team, I can definitely assist during the work.
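Until the PyTorch path exists, a minimal sketch of the workaround mentioned in the comments above, converting the checkpoint to TensorFlow and using its XLA-compiled `generate` (`gpt2` is just an example checkpoint):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = TFAutoModelForCausalLM.from_pretrained("gpt2")  # add from_pt=True for PT-only checkpoints

xla_generate = tf.function(model.generate, jit_compile=True)  # compile once, reuse for same shapes
inputs = tokenizer(["TPU inference is"], return_tensors="tf", padding=True)
outputs = xla_generate(**inputs, max_new_tokens=16)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```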
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18661/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18661/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18660
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18660/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18660/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18660/events
|
https://github.com/huggingface/transformers/issues/18660
| 1,341,285,032
|
I_kwDOCUB6oc5P8mKo
| 18,660
|
`_no_load_in_8bit` module list to allow custom ignored layers
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @ArthurZucker \r\nThanks a lot for the feature request! \r\nI have addressed a commit in https://github.com/huggingface/transformers/pull/18646 that should support adding an argument `no_load_in_8bit_modules` in `from_pretrained` function. Could you try it in your usecase and let me know if this helped?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing as completed in #18646"
] | 1,660
| 1,663
| 1,663
|
COLLABORATOR
| null |
### Feature request
It would be awesome to have a `_no_load_in_8bit = []` property for 8-bit quantization, allowing custom layers to be excluded from conversion.
CC @younesbelkada
### Motivation
Would increase flexibility
### Your contribution
Would be testing on `Jukebox`
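A hypothetical usage sketch: the kwarg name below comes from the PR discussion in the comments above and is not guaranteed to be the final API:
```python
from transformers import AutoModelForCausalLM

# hypothetical: quantize everything to int8 except lm_head
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",
    load_in_8bit=True,
    device_map="auto",
    no_load_in_8bit_modules=["lm_head"],  # name taken from the comment above; may change
)
```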
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18660/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18660/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18659
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18659/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18659/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18659/events
|
https://github.com/huggingface/transformers/issues/18659
| 1,341,096,689
|
I_kwDOCUB6oc5P74Lx
| 18,659
|
DeBERTa can't load some parameters
|
{
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"#18674 should fix this. Thanks for reporting!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- Reproduction
```python
from transformers import pipeline
text = "The capital of France is [MASK]"
mlm_pipeline = pipeline('fill-mask', model='microsoft/deberta-base', tokenizer='microsoft/deberta-base')
print(mlm_pipeline(text))
```
- Warning Message
```
Some weights of the model checkpoint at microsoft/deberta-base were not used when initializing DebertaForMaskedLM: ['lm_predictions.lm_head.LayerNorm.bias', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.dense.weight', 'deberta.embeddings.position_embeddings.weight', 'lm_predictions.lm_head.LayerNorm.weight']
- This IS expected if you are initializing DebertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DebertaForMaskedLM were not initialized from the model checkpoint at microsoft/deberta-base and are newly initialized: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
- Output
```
The capital of France isumption
The capital of France isοΏ½
The capital of France iszag
The capital of France isreply
The capital of France isnerg
```
### Expected behavior
When the DeBERTa model is loaded using transformers, it seems that the weights needed for the MLM head (plus the positional embedding weights) are not loaded.
There are some issues similar to mine.
- https://github.com/huggingface/transformers/issues/15216
- https://github.com/huggingface/transformers/issues/15673
- https://github.com/microsoft/DeBERTa/issues/74
But none of them seem to have been resolved yet.
Can you check it?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18659/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18658
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18658/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18658/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18658/events
|
https://github.com/huggingface/transformers/pull/18658
| 1,341,026,082
|
PR_kwDOCUB6oc49SR8H
| 18,658
|
Update run_clm_flax.py from single TPU worker to multiple TPU workers
|
{
"login": "congyingxia",
"id": 26128195,
"node_id": "MDQ6VXNlcjI2MTI4MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/26128195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/congyingxia",
"html_url": "https://github.com/congyingxia",
"followers_url": "https://api.github.com/users/congyingxia/followers",
"following_url": "https://api.github.com/users/congyingxia/following{/other_user}",
"gists_url": "https://api.github.com/users/congyingxia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/congyingxia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/congyingxia/subscriptions",
"organizations_url": "https://api.github.com/users/congyingxia/orgs",
"repos_url": "https://api.github.com/users/congyingxia/repos",
"events_url": "https://api.github.com/users/congyingxia/events{/privacy}",
"received_events_url": "https://api.github.com/users/congyingxia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18658). All of your documentation changes will be reflected on that endpoint.",
"WDYT @sanchit-gandhi?",
"Hey @congyingxia and sorry for the delay! \r\n\r\nWe try to keep these examples as simple as possible. In that spirit, we have limited the example scripts to single host training (v3-8). Unfortunately, scaling up to multi-host training is non-trivial: it requires another driver VM to keep the TPU hosts synced and execute commands across TPUs in parallel. For this reason, we have currently omitted multi-host training/inference, and instead focussed on single host training. If you're interested in running training/inference on a pod, I would suggest looking at the repo https://github.com/huggingface/bloom-jax-inference, which details how inference for LLM's can be scaled up to an arbitrary number of TPU devices with MP + DP.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
NONE
| null |
The current code only works on a single TPU worker. If there are multiple TPU workers, the data needs to be split across the workers first and then sharded to the local devices. The same issue exists for T5 language modeling with Flax: https://github.com/google/flax/discussions/2017
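A minimal sketch of that splitting logic (assuming a JAX multi-host setup; the function name and batch keys are illustrative):
```python
import jax
import numpy as np

def shard_across_hosts_and_devices(global_batch):
    # Illustrative only: slice the global batch for this TPU host first,
    # then reshape the per-host slice for the local devices (pmap layout).
    num_hosts = jax.process_count()
    host_id = jax.process_index()
    per_host = len(global_batch["input_ids"]) // num_hosts
    host_batch = {
        k: v[host_id * per_host : (host_id + 1) * per_host]
        for k, v in global_batch.items()
    }
    n_local = jax.local_device_count()
    return {
        k: np.asarray(v).reshape((n_local, -1) + np.asarray(v).shape[1:])
        for k, v in host_batch.items()
    }
```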
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18658/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18658",
"html_url": "https://github.com/huggingface/transformers/pull/18658",
"diff_url": "https://github.com/huggingface/transformers/pull/18658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18658.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18657
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18657/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18657/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18657/events
|
https://github.com/huggingface/transformers/pull/18657
| 1,340,891,666
|
PR_kwDOCUB6oc49R17i
| 18,657
|
Fix for issue #12182 to ensure that the tutorial for zero shot distillation works
|
{
"login": "pramodith",
"id": 16939722,
"node_id": "MDQ6VXNlcjE2OTM5NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramodith",
"html_url": "https://github.com/pramodith",
"followers_url": "https://api.github.com/users/pramodith/followers",
"following_url": "https://api.github.com/users/pramodith/following{/other_user}",
"gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pramodith/subscriptions",
"organizations_url": "https://api.github.com/users/pramodith/orgs",
"repos_url": "https://api.github.com/users/pramodith/repos",
"events_url": "https://api.github.com/users/pramodith/events{/privacy}",
"received_events_url": "https://api.github.com/users/pramodith/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18657). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
NONE
| null |
# What does this PR do?
The code for training models via zero-shot distillation was breaking because the `.map()` function removed the _labels_ field from the dataset object. This PR fixes the issue by changing the way the tokenizer is called via `.map()`.
Fixes [https://github.com/huggingface/transformers/issues/12182](https://github.com/huggingface/transformers/issues/12182)
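A hypothetical sketch of the kind of change described above (the checkpoint, column names, and helper name are illustrative, not the exact diff in this PR):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # illustrative checkpoint

def tokenize_with_labels(batch):
    # Return the labels explicitly so .map() keeps them when it replaces
    # the dataset columns with the tokenized features.
    encoded = tokenizer(batch["text"], truncation=True, padding="max_length")
    encoded["labels"] = batch["labels"]  # teacher-generated labels (assumed field name)
    return encoded

# `dataset` is assumed to be a datasets.Dataset with `text` and `labels` columns
dataset = dataset.map(tokenize_with_labels, batched=True)
```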
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patil-suraj
@VictorSanh
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18657/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18657",
"html_url": "https://github.com/huggingface/transformers/pull/18657",
"diff_url": "https://github.com/huggingface/transformers/pull/18657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18657.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18656
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18656/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18656/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18656/events
|
https://github.com/huggingface/transformers/issues/18656
| 1,340,584,091
|
I_kwDOCUB6oc5P57Cb
| 18,656
|
BigBird inference: same input data gives different outputs
|
{
"login": "NautiyalAmit",
"id": 34062684,
"node_id": "MDQ6VXNlcjM0MDYyNjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/34062684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NautiyalAmit",
"html_url": "https://github.com/NautiyalAmit",
"followers_url": "https://api.github.com/users/NautiyalAmit/followers",
"following_url": "https://api.github.com/users/NautiyalAmit/following{/other_user}",
"gists_url": "https://api.github.com/users/NautiyalAmit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NautiyalAmit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NautiyalAmit/subscriptions",
"organizations_url": "https://api.github.com/users/NautiyalAmit/orgs",
"repos_url": "https://api.github.com/users/NautiyalAmit/repos",
"events_url": "https://api.github.com/users/NautiyalAmit/events{/privacy}",
"received_events_url": "https://api.github.com/users/NautiyalAmit/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@NautiyalAmit Sure!",
"@NautiyalAmit The provided code snippet has several issues that it fails to run. \r\n\r\nThe \"model path \" is not a valid model name on the Hub, and I don't know which one you tried.\r\nThe call `print(question_answer(chunk, question,handle_impossible_answer=False))` has missing arguments which gives\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/home/yih_dar_huggingface_co/transformers/run_bigbird.py\", line 31, in <module>\r\n print(question_answer(chunk, question,handle_impossible_answer=False))\r\nTypeError: question_answer() missing 2 required positional arguments: 'text' and 'question'\r\n```\r\n\r\nCould you fix the code snippet, please π . Thank you.",
"Please find the code with the correct args:\r\n```\r\nfrom transformers import (BigBirdForQuestionAnswering, BigBirdTokenizer)\r\nimport torch \r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntokenizer = BigBirdTokenizer.from_pretrained(\"model path \")\r\nmodel = BigBirdForQuestionAnswering.from_pretrained(\"model path\")\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\nmodel = model.to((device), non_blocking=True)\r\ndef question_answer(text, question, handle_impossible_answer=False):\r\n encoded_inputs = tokenizer(question, text, return_tensors=\"pt\").to(device)\r\n start_positions = torch.tensor([1]).to((device), non_blocking=True)\r\n end_positions = torch.tensor([3]).to((device), non_blocking=True)\r\n # self.model.eval()\r\n bbmodel = model.to((device), non_blocking=True)\r\n with torch.no_grad(): # reduce memory consumption\r\n outputs = bbmodel(\r\n **encoded_inputs,\r\n start_positions=start_positions,\r\n end_positions=end_positions,\r\n output_attentions=True)\r\n print(outputs)\r\n##note this is a sample input chunks:\r\nchunks = [\"Scikit-learn is a free software machine learning library for the Python programming language.\",\r\n \"It features various classification, regression and clustering algorithms including support-vector machines\", \"sklearn is a lib\"]\r\nquestion = \"what is sklearn?\"\r\n\r\n\r\nfor chunk in chunks:\r\n print(question_answer(chunk, question,handle_impossible_answer=False))\r\n print(\"-----------------------------------------------------\")\r\n print(question_answer(chunk, question,handle_impossible_answer=False))\r\n `print(\"------xxxxxxx-----------------------------------------------\")`\r\n",
"Thanks @NautiyalAmit ! The following 2 lines would still fail . Could you specify the exact checkpoint name you used? Thanks.\r\n\r\n```python\r\ntokenizer = BigBirdTokenizer.from_pretrained(\"model path \")\r\nmodel = BigBirdForQuestionAnswering.from_pretrained(\"model path\")\r\n```",
"Hi @ydshieh , you can check on base checkpoint 0 from: https://console.cloud.google.com/storage/browser/bigbird-transformer/pretrain/bigbr_base?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false",
"@NautiyalAmit I would like to help if the code snippet is self-contained, which means it should be able to run directly. (In some special case, I agree some manual actions might be necessary).\r\n\r\nHere the code snippet is incomplete (missing `model path `). Even the provided GCS link contains TF checkpoint files, which could not be loaded with the `.from_pretrained` method in `transformers` models.\r\n\r\nAs you found the issue, you must have something that could run on your side. Please try to help us debug more easily in order to investigate the issue you encountered.\r\n\r\nI would guess you have used a model from [HuggingFace Hub](https://huggingface.co/models), with probably an official bigbird model checkpoint. But it would still be very nice if you can specify explicitly in the code snippet. Thank you.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
NONE
| null |
### System Info
- pytorch==1.10.2
- transformers==4.20.1
- python=3.9.7
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import (BigBirdForQuestionAnswering, BigBirdTokenizer)
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = BigBirdTokenizer.from_pretrained("model path ")
model = BigBirdForQuestionAnswering.from_pretrained("model path")
def question_answer(model,tokenizer,text, question, handle_impossible_answer=False):
encoded_inputs = tokenizer(question, text, return_tensors="pt").to(device)
start_positions = torch.tensor([1]).to((device), non_blocking=True)
end_positions = torch.tensor([3]).to((device), non_blocking=True)
bb_model = model.to((device), non_blocking=True)
with torch.no_grad():
outputs = bb_model(
**encoded_inputs,
start_positions=start_positions,
end_positions=end_positions,
output_attentions=True)
print(outputs)
## Note: these are sample input chunks
chunks = ["Scikit-learn is a free software machine learning library for the Python programming language.",
"It features various classification, regression and clustering algorithms including support-vector machines", "sklearn is a lib"]
question = "what is sklearn?"
for chunk in chunks:
    question_answer(model, tokenizer, chunk, question, handle_impossible_answer=False)
    print("-----------------------------------------------------")
    question_answer(model, tokenizer, chunk, question, handle_impossible_answer=False)
    print("------xxxxxxx-----------------------------------------------")
```
### Expected behavior
For the exact same input data, the output tensors are identical on the first run.
However, on the second and subsequent runs the tensors differ from the first run.
run1 :
```
BigBirdForQuestionAnsweringModelOutput(loss=tensor(1000012.9375), start_logits=tensor([[-1.3296e+00, -1.0000e+06, -1.0000e+06, ..., -1.1439e+01,
-1.0714e+01, -1.0375e+01]]), end_logits=tensor([[-3.0393e+00, -1.0000e+06, -1.0000e+06, ..., -8.3824e+00,
-7.1354e+00, -5.7355e+00]]), pooler_output=None, hidden_states=None, attentions=(tensor([[[[1.7520e-01, 3.3716e-03, 1.7215e-03, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[3.4175e-01, 4.7108e-02, 2.2092e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[6.5068e-02, 6.3478e-01, 1.1994e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...,
[3.0409e-03, 6.9234e-05, 1.2098e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[6.5139e-03, 5.8579e-05, 4.8735e-05, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.9445e-02, 2.0696e-04, 2.2188e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
[[2.0401e-02, 9.3263e-04, 1.9891e-03, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.7520e-02, 1.4839e-02, 1.2475e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.2214e-02, 3.1718e-02, 1.0765e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...,
[2.3348e-03, 6.1570e-04, 8.4744e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...
[9.8621e-02, 1.7792e-06, 4.8031e-06, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]]]])))
```
run 2:
```
BigBirdForQuestionAnsweringModelOutput(loss=tensor(1000012.9375), start_logits=tensor([[-1.3296e+00, -1.0000e+06, -1.0000e+06, ..., -1.1439e+01,
-1.0714e+01, -1.0375e+01]]), end_logits=tensor([[-3.0393e+00, -1.0000e+06, -1.0000e+06, ..., -8.3824e+00,
-7.1354e+00, -5.7355e+00]]), pooler_output=None, hidden_states=None, attentions=(tensor([[[[1.7520e-01, 3.3716e-03, 1.7215e-03, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[3.4175e-01, 4.7108e-02, 2.2092e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[6.5068e-02, 6.3478e-01, 1.1994e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...,
[3.0409e-03, 6.9234e-05, 1.2098e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[6.5139e-03, 5.8579e-05, 4.8735e-05, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.9445e-02, 2.0696e-04, 2.2188e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
[[2.0401e-02, 9.3263e-04, 1.9891e-03, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.7520e-02, 1.4839e-02, 1.2475e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.2214e-02, 3.1718e-02, 1.0765e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...,
[2.3348e-03, 6.1570e-04, 8.4744e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...
[1.1374e-05, 2.7021e-06, 4.4213e-07, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]]]])))
```
run 3:
```
BigBirdForQuestionAnsweringModelOutput(loss=tensor(1000012.9375), start_logits=tensor([[-1.3296e+00, -1.0000e+06, -1.0000e+06, ..., -1.1439e+01,
-1.0714e+01, -1.0375e+01]]), end_logits=tensor([[-3.0393e+00, -1.0000e+06, -1.0000e+06, ..., -8.3824e+00,
-7.1354e+00, -5.7355e+00]]), pooler_output=None, hidden_states=None, attentions=(tensor([[[[1.7520e-01, 3.3716e-03, 1.7215e-03, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[3.4175e-01, 4.7108e-02, 2.2092e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[6.5068e-02, 6.3478e-01, 1.1994e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...,
[3.0409e-03, 6.9234e-05, 1.2098e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[6.5139e-03, 5.8579e-05, 4.8735e-05, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.9445e-02, 2.0696e-04, 2.2188e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
[[2.0401e-02, 9.3263e-04, 1.9891e-03, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.7520e-02, 1.4839e-02, 1.2475e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.2214e-02, 3.1718e-02, 1.0765e-02, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...,
[2.3348e-03, 6.1570e-04, 8.4744e-04, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
...
[1.1374e-05, 2.7021e-06, 4.4213e-07, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]]]])))
```
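A quick sanity check that may help narrow this down (an assumption on my part, not a confirmed diagnosis): BigBird's block-sparse attention samples random attention blocks, and dropout is only disabled in eval mode, so fixing the seeds and calling `model.eval()` before each run should rule out the usual sources of run-to-run variation. This reuses `model` and `encoded_inputs` from the reproduction above:
```python
import random

import numpy as np
import torch

# Fix all relevant RNGs before each run; random-block selection and any
# active dropout are common sources of run-to-run differences.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
model.eval()  # disable dropout

with torch.no_grad():
    outputs = model(**encoded_inputs, output_attentions=True)
```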
Can you please look into this @ydshieh?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18656/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18655
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18655/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18655/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18655/events
|
https://github.com/huggingface/transformers/issues/18655
| 1,340,534,392
|
I_kwDOCUB6oc5P5u54
| 18,655
|
Generate: deprecate the use of model `config` as a source of defaults
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @patrickvonplaten ",
"I like the idea of using a `use_config_defaults` a lot - think that's a great additional safety mechanism to ensure it's possible to keep backward compatibility. \r\n\r\nAlso we were thinking about the idea of having a `generation_config.json` file that can optionally be passed to `generate` by the user and that includes all the default values that are set in the config at the moment. This would also make it easier to possible have multiple different generation configs.\r\nSome models like `bart-large`: https://huggingface.co/facebook/bart-large/blob/main/config.json#L45 always have certain generation parameters enabled by default and IMO it would be a smoother transition to help the user extract a `generation_config.json` from `config.json` and then always pass this config if present in the repo to `generate(...)` **instead** of forcing the user to always pass all those arguments to generate.\r\n\r\nWith the config, we could do something like the following automatically:\r\n- User runs model repo with `generate`. We detect that no `generation_config.json` is present and that default generation params are used from `config.json`\r\n- We throw a warning that states \"no generation config detected, we strongly advise you to run the following code snippet on your repo to create a `generate_config.json` file\r\n- We keep all the generation params in `config.json` though to keep backwards compatibility with `use_config_defaults`\r\n- However if a `generation_config.py` is present we always use this and do not look into the config\r\n- We have to make an exception with `max_length=20` because it's always set and we don't want to create a `generation_config.py` for all models \r\n\r\nAlso happy to jump on a call to brainstorm about this a bit!",
"Fair point! π\r\n\r\nFrom the comment above, let's consider the updated requirements:\r\n1. Until `v5`, the default behavior canβt change, i.e., we will use the model `config.json` as a source of defaults;\r\n2. From `v5` onwards, the default behavior is to use `generate_config.json` as a source of defaults;\r\n3. The transition should be as smooth as possible β the users should be able to anticipate this transition, so nothing changes when we release the new major version;\r\n4. We want to use defaults (many models are designed to do a certain thing) while also enabling power users to have full control over `generate`.\r\n\r\n______________________\r\nA solution that fits all requirements is the ability to specify where the defaults should be loaded from, with default paths controlled by us. With the aid of a script to create the new generation config file from the existing model config file, the transition should be smooth and users can anticipate any change.\r\n\r\nE.g. if we have a `generation_config_file` flag, defaulting to `None` and where a path in the model repo can be specified, then we could:\r\n- Set `generation_config_file=\"config.json\"`, which would mimic the existing default behavior (and would be the default behavior for now);\r\n- Set `generation_config_file=\"generation_config.json\"`, which would use the new config file for generation (which would be the default behavior in the future);\r\n- Set `generation_config_file` to ANY generation config path, so that power users can have multiple configurations for the same model;\r\n- Set `generation_config_file=False` (or other non-`None` falsy value) to not use any configuration at all.\r\n\r\nWe seem to need two warnings β οΈ :\r\n1. [Needed because in `v5` we will be defaulting to a new config file, which may not exist in a user's model repo, and the model may have generation parameters in its config] If the configuration file does not exist, fall back to `config.json` and warn about it. We can quickly scan `config.json` to avoid raising a warning if it doesn't contain any generation argument;\r\n2. [Needed because the default behavior will still be to use values from a config, and many users are not aware of it] If `generation_config_file` is not specifically set by the user, a warning should be raised if the config replaces any argument. Many configs don't replace any value.\r\n\r\nBoth warnings can be avoided by specifying the `generation_config_file` argument. They may be a bit verbose, but I think verbosity (which can be shut down easily) is preferable to magic confusing behavior.\r\n\r\nThe `max_length=20` default (and other similar defaults) can be easily added -- `max_length = max_length if max_length is not None else 20` after attempting to load the configs. We can add them to the argument's documentation (see below).\r\n\r\n__________________________________\r\n\r\nπ€ The only issue I have with this approach is that it is hell to document (similar to the current approach). Having \"this argument defaults to X or to `config.argument`\" for all arguments' documentation line is verbose and confusing, and users need to be aware that the configuration files play an important role. \r\n\r\nMy suggestion here would be to make `generation_config_file` the second argument of `generate` (after `input_ids`), so that it becomes immediately clear that `generate` argument defaults can be set through a file. 
Then, I would remove further references to the config in the docs, relying on the warnings to remind the user of what's going on. I think it is clear by now that long docs don't avoid simple issues :(\r\n\r\nWDYT?\r\n\r\n(P.S.: will edit the issue after we settle on an approach :) )",
"Cool, I think this is going into a very nice direction! A couple more questions to think about:\r\n\r\n- Do we really want a `generate_config_file` keyword argument for `generate(...)` ? For me it would be much more intuitive to just have `config: Optional[Dict]` as an argument. This would then mean it requires the user to do one more step for a specific config:\r\n\r\n```python\r\ngenerate_config = # load generation config from path\r\nmodel.generate(input_ids, config=generate_config)\r\n```\r\n\r\n- We could add a `config` argument to the init of `GenerationMixin` which would make backwards compatibility then very easy:\r\n - `from_pretrained(...)` would load either a `generation_config.json` or if not present a `config.json` and then set it as `self.generation_config = config` => then every generation model would have access to `self.generation_config` . In `generate` would could then add a `self.generate_config = config if config is not None else self.generate_config (the default one)` and then overwrite `self.generate_config` once more with if the user passes generate args into `generate` directly (e.g. model.generate(top_k=top_k)`\r\n - Overall I think we cannot really come around the fact the we need to store a config inside the model somewhere because it'd be a bit to me to load a config **upon calling generate**. E.g. `model.generate(..., generate_config=\"config.json\")` would have to load a config which opens too many problems with internet connection etc....\r\n\r\n- What type should `generation_config` be? Just a `dict` or let's maybe create a class for it (similar to `BloomConfig`). Creating its own class probably also helps with documentation\r\n\r\n-> What do you think? ",
"@patrickvonplaten Agreed, the argument name is a bit too long π
However, if we decide to go the `GenerationMixin.__init__` route, we can't pick `config` -- `PreTrainedModel`, which inherits from `GenerationMixin`, uses a `config` argument for the model config. Perhaps `generation_config`? We could then do `.from_pretrained(foo, generation_config=bar)`.\r\n\r\nI love the ideas you gave around the config:\r\n1. if it is part of the `__init__` and if we always attempt to load the new file format before falling back to the original config, it actually means we don't need to do a major release to build the final version of this updated configuration handling! No need to change defaults with a new release at all β€οΈ ;\r\n2. The idea of \"arguments write into a config that is always used\" as opposed to \"config is used when no arguments are passed\" is much clearer to explain. We gain the ability to pass config files around (as opposed to tens of arguments), and it also opens the door to exporting generation configurations;\r\n3. Despite the above, we need to be careful with the overwrites: if a user calls `model.generate(top_k=top_k)` and then `model.generate(temperature=temperature)`, `top_k` should be the original config's `top_k`. Copies of objects are needed;\r\n4. Agreed, having all downloads/file paths in the same place is helpful.\r\n\r\nRegarding `dict` vs `class` -- I'd go with `class` (or perhaps a simpler `dataclass`). Much easier to document and enforce correctness, e.g. check if the right arguments are being used with a certain generation type.\r\n\r\n__________________________\r\n\r\nIt seems like we are in agreement. Are there more issues we can anticipate?",
"Very nice summary @gante thanks for writing this all down - I agree with all the above point! \r\n\r\n@LysandreJik @sgugger and maybe @thomwolf could you take a quick look here? I think @gante and I have now an actionable plan for `generate()` and would be ready to open a PR.\r\n\r\nBefore starting the PR, it would be nice if you could check if you generally agree with our comments here so that we're not totally on a different page before opening such a big PR. The PR will then take some time and require discussion, but I think we have a clear vision of what we want now",
"@patrickvonplaten @LysandreJik @sgugger @thomwolf -- I took the liberty of updating the issue at the top with the plan that originated from the discussion here (and also to structure the whole thing in my head better) :)",
"Thanks for the write-up! I think this is a much welcome change that will tremendously improve the way we use `generate`.\r\n\r\nWriting down some thoughts below.\r\n\r\n- Very personal, but I think `generation_config` sounds more explicit. `generate` is very understandable by us because we know what is the \"generate method\", but \"generation config\" sounds so much clearer to me than \"generate config\".\r\n- Would the generate config class be able to load from other models? i.e., could we load a generation config specific to `bart-large-cnn` in `t5-large`? Would we enforce model-specificity, or would we allow this to work? How would we handle model-specific attributes (maybe there aren't any, there seems to be only RAG that has its own `generate` method)?\r\n- Could we store multiple generation configs in the same repo? How would you handle a model that can have several generation configuration, for example a model such as a prefix-LM that could do both translation and summarization with the same checkpoint?\r\n\r\nThe biggest work here will likely be education & documentation. I think this will already make things much clearer, but I suppose the much awaited generate method doc rework will be an absolute requirement after this refactor!",
"Agreed, the biggest issue is and will be education and documentation. Hopefully, this will make the process easier π \r\n\r\n- Regarding one or multiple generation config classes: there are two arguments in `generate` that are used with a limited number of (encoder-decoder) models, `forced_bos_token_id` and `forced_eos_token_id`. Additionally, there is one argument, `encoder_no_repeat_ngram_size`, that is only used in encoder-decoder models (and have no effect on decoder-only). The remaining **36** arguments are usable by all models. IMO, having a single class would make documentation (the key issue) much simpler, and model<>arguments verification can be done in the function (as it is done in the present).\r\n- Regarding multiple configs in the same repo: Yes that would be doable. According to the plan above, through the specification of a different `generation_config` files. But @LysandreJik raised a good point, as the name of the files containing the defaults for different tasks may not be immediately obvious to the users, which implies more documentation pain. Perhaps we can take the chance here to approximate `generate` to `pipeline`, which we know is user-friendly -- in the `pipeline`, [specific config parameters are loaded for each task](https://github.com/huggingface/transformers/blob/051311ff66e7b23bfcfc42bc514c969517323ce9/src/transformers/pipelines/base.py#L783) (here's an [example of a config with task-specific parameters](https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L55)). We could use the exact same structure with the new `generation_config` files, where all task-specific arguments can be set this way, and `generate` could gain a new `task` argument. That way, there would be a higher incentive to set task-specific parameters that would work across the HF ecosystem (`generate` and `pipeline` for now, perhaps more in the future).\r\n\r\n```python\r\n# with the `task` parameter, it is trivial to share the parameters for some desired behavior\r\n\r\n# When loading the model, the existence of task-specific options would be logged to the user.\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"...\")\r\ninput_prompt = ...\r\ntask_tokens = model.generate(**input_prompt, task=\"my_task\")\r\n# There would be an exception if `my_task` is not specified in the generation config file. \r\n```",
"The plan looks good to me, but the devil will be in the details ;-) Looking forward to the PRs actioning this!",
"Closing -- `generation_config` is now the source of defaults for `.generate()`"
] | 1,660
| 1,677
| 1,677
|
MEMBER
| null |
EDIT: Updated with the discussion up to [2022/08/20](https://github.com/huggingface/transformers/issues/18655#issuecomment-1221047772)
## Why?
A confusing part of `generate` is how the defaults are set. When a certain argument is not specified, we attempt to fetch it from the model `config` file. This makes `generate` unpredictable and hard to fully document (the default values change for each model), as well as a major source of issues :hocho:
## How?
We have the following requirements:
1οΈβ£ The existing behavior can't be removed, i.e., we must be able to use the model `config.json` as a source of generation parameters by default;
2οΈβ£ We do need per-model defaults -- some models are designed to do a certain thing (e.g. summarization), which requires a specific generation configuration.
3οΈβ£ Users must have full control over generate, with minimal hidden behavior.
Ideally, we also want to:
4οΈβ£ Have separation of concerns and use a new `generate_config.json` to parameterize generation;
A TL;DR of the plan consists of changing the paradigm from "non-specified `generate` arguments are overridden by the [model] configuration file" to "`generate` arguments will override the [generate] configuration file, which is always used". With proper documentation changes and logging/warnings, the user will be aware of what's being set for `generate`.
### Step 1: Define a new generate config file and class
Similar to the model config, we want a `.json` file to store the generation defaults. The class itself can be a very simplified version of `PretrainedConfig`, also with functionality to load/store from the hub.
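A minimal sketch of what such a class could look like (illustrative fields only; the real class would mirror `PretrainedConfig` and add the hub upload/download utilities):
```python
import json
from dataclasses import asdict, dataclass

@dataclass
class GenerateConfig:
    # A few representative generation defaults -- the real class would
    # cover all ~40 generate arguments.
    max_length: int = 20
    do_sample: bool = False
    top_k: int = 50
    temperature: float = 1.0

    def save_json(self, path):
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @classmethod
    def from_json(cls, path):
        with open(path) as f:
            return cls(**json.load(f))
```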
### Step 2: Integrate loading generate config file in `.from_pretrained()`
The generation configuration file should be loaded when initializing the model with a `from_pretrained()` method. A couple of things to keep in mind:
1. There will be a new `kwarg` in `from_pretrained`, `generate_config` (or `generation_config`? Leaning toward the former as it has the same name as the function);
2. It will default to `generate_config.json` (unlike the model `config`, which defaults to `None`). This will allow users to set this argument to `None`, to load a model with an empty generate config. Some users have requested a feature like this;
3. Because the argument can take a path, it means that users can store/load multiple generate configs if they wish to do so (e.g. to use the same model for summarization, creative generation, factual question-answering, etc) π
4. Only models that can run `generate` will attempt to load it;
5. If there is no `generate_config.json` in the repo, it will attempt to initialize the generate configuration from the model `config.json`. This means that this solution will not change any `generate` behavior and will NOT need a major release πΌ
6. To keep the user in the loop, log ALL parameters set when loading the generation config file. Something like the snippet below.
7. Because this happens at `from_pretrained()` time, logging will only happen at most once and will not be verbose.
```
`facebook/opt-1.3b` generate configuration loaded from `generate_config.json`. The following generation defaults were set:
- max_length: 20
- foo: bar
- baz: qux
```
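A rough sketch of the fallback logic in point 5 (function and file names are assumptions; it reuses the illustrative `GenerateConfig` class from Step 1):
```python
import json
import os

def load_generate_config(repo_path):
    # Prefer generate_config.json; otherwise pull any known generation
    # keys out of the model's config.json so behavior does not change.
    candidate = os.path.join(repo_path, "generate_config.json")
    if os.path.exists(candidate):
        return GenerateConfig.from_json(candidate)
    with open(os.path.join(repo_path, "config.json")) as f:
        model_config = json.load(f)
    known = {
        k: v for k, v in model_config.items()
        if k in GenerateConfig.__dataclass_fields__ and v is not None
    }
    return GenerateConfig(**known)
```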
### Step 3: Generate uses the generate config class internally
Instead of using the configuration to override arguments when they are not set, overwrite a copy of the generation config at `generate` time. I.e. instead of:
```
arg = arg if arg is not None else self.config.arg
...
```
do
```
generate_config = self.generate_config.copy()
generate_config.arg = arg if arg is not None else generate_config.arg
...
```
This change has three main benefits:
1. We can improve the readability of the code, as we gain the ability to pass configs around. E.g. [this function](https://github.com/huggingface/transformers/blob/30992ef0d911bdeca425969d210771118a5cd1ac/src/transformers/generation_utils.py#L674) won't need to take a large list of arguments nor to bother with their initialization.
2. `generate` argument validation *for each type of generation* can be implemented in simple functions that don't need ~30 arguments as input π
3. The three frameworks (PT/TF/FLAX) can share functionality like argument validation, decreasing maintenance burden.
### Step 4: Document and open PRs with the generation config file
Rewrite part of the documentation to explain that a generation config is ALWAYS used (regardless of having defaults loaded from the hub or not). Open Hub PRs to pull generate-specific parameters from `config.json` to `generate_config.json`
## Pros/Cons
Pros:
- Better awareness -- any `generate` default will be logged to the screen when loading a generate-compatible model;
- Full control -- the users can choose NOT to load generation parameters or easily load a set of options from an arbitrary file;
- Enables more readable `generate` code;
- Enables sharing `generate`-related code across frameworks;
- Doesn't need a major release.
Cons:
- Pulling the generate parameters into their own files won't happen everywhere, as merging the changes described in step 4 is not feasible for all models (e.g. due to unresponsive model owners);
- Logging loaded defaults may not be enough to stop issues related to the default values, as the logs can be ignored;
- Another config file (and related code) to maintain.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18655/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18655/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18654
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18654/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18654/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18654/events
|
https://github.com/huggingface/transformers/pull/18654
| 1,340,513,231
|
PR_kwDOCUB6oc49QmrK
| 18,654
|
Update TF fine-tuning docs
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@stevhliu is there a better way to format a link to the `prepare_tf_dataset` docs [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18654/en/training#loading-data-as-a-tfdatadataset) than the way I did it here? It prints the whole `TFPreTrainedModel.prepare_tf_dataset` text, which looks a bit ugly on the page!",
"Yes, I believe you can just squiggly it!\r\n\r\n[`~TFPreTrainedModel.prepare_tf_dataset`]",
"@stevhliu Thank you for the suggestions! I'll make more edits to incorporate the other bits and ping you again for a final look.",
"@sgugger @stevhliu I finished incorporating your edits and did some other cleanup. I also replaced `to_tf_dataset` in the other fine-tuning pages. I didn't touch the translations, though - should I edit those too?",
"@sgugger I tried those edits but it looked a little odd because there was no separate header for the PyTorch section. I added a `Train` header to the whole thing with a brief intro, and then a `Train with PyTorch Trainer` header inside that block, which I think works a little better and makes it easier for people to find what they want in the sidebar. Let me know what you think!",
"@sgugger :facepalm: I knew I'd regret pinging you before waiting for the job to finish and checking it myself. Fixed!"
] | 1,660
| 1,662
| 1,662
|
MEMBER
| null |
This PR updates the fine-tuning sidebar tutorial with modern TF methods that were added in the most recent release.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18654/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18654",
"html_url": "https://github.com/huggingface/transformers/pull/18654",
"diff_url": "https://github.com/huggingface/transformers/pull/18654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18654.patch",
"merged_at": 1662553808000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18653
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18653/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18653/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18653/events
|
https://github.com/huggingface/transformers/pull/18653
| 1,340,465,066
|
PR_kwDOCUB6oc49QcaE
| 18,653
|
Generate: validate `model_kwargs` on FLAX (and catch typos in generate arguments)
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
MEMBER
| null |
# What does this PR do?
FLAX version of https://github.com/huggingface/transformers/pull/18261
Adds model_kwargs validation to FLAX generate, which also catches typos in the arguments. See the PR above for more details and an example of the error message users will see.
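A rough sketch of the validation idea (the actual implementation in the PR may differ; names here are illustrative):
```python
import inspect

def validate_model_kwargs(model, model_kwargs):
    # Flag any leftover kwarg that the model call will never consume --
    # typically a typo in a generate argument.
    accepted = set(inspect.signature(model.__call__).parameters)
    unused = [k for k in model_kwargs if k not in accepted]
    if unused:
        raise ValueError(f"The following `model_kwargs` are not used by the model: {unused}")
```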
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18653/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18653",
"html_url": "https://github.com/huggingface/transformers/pull/18653",
"diff_url": "https://github.com/huggingface/transformers/pull/18653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18653.patch",
"merged_at": 1660816582000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18652
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18652/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18652/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18652/events
|
https://github.com/huggingface/transformers/pull/18652
| 1,340,450,405
|
PR_kwDOCUB6oc49QZTv
| 18,652
|
Add cross entropy loss with stable custom gradient
|
{
"login": "yhavinga",
"id": 3098618,
"node_id": "MDQ6VXNlcjMwOTg2MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3098618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhavinga",
"html_url": "https://github.com/yhavinga",
"followers_url": "https://api.github.com/users/yhavinga/followers",
"following_url": "https://api.github.com/users/yhavinga/following{/other_user}",
"gists_url": "https://api.github.com/users/yhavinga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhavinga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhavinga/subscriptions",
"organizations_url": "https://api.github.com/users/yhavinga/orgs",
"repos_url": "https://api.github.com/users/yhavinga/repos",
"events_url": "https://api.github.com/users/yhavinga/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhavinga/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"float32 compared with bfloat16 without and with z_loss\r\n\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18652). All of your documentation changes will be reflected on that endpoint.",
"Hey @yhavinga,\r\n\r\nThanks a lot for your PR. Could we maybe add this to the `examples/research_folder` instead of the official examples?\r\nThe reason is that we won't have time to maintain this example and we would have need to check those loss curves on more than just Norwegian. \r\n\r\nWould that be ok for you? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Apologies for the long delay.\r\nIn the meantime I've noticed that term added to the loss (z_loss * jax.lax.square(log_z)) might in fact be (similar to) L2 regularization, and that this kind of regularization might in fact already be available through Optax Adafactors weight_decay_rate parameter. I currently do not have access to TRC so cannot test this, but thought it might be interesting to others training with bfloat16 and run_t5_mlm_flax.py if they might hit this page. "
] | 1,660
| 1,675
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds Flax code for a cross entropy loss calculation with an additional term to stabilize gradients for bfloat16 training.
The loss function is authored by the T5X Authors (https://github.com/google-research/t5x/blob/90d74fa703075d8b9808ae572602bc48759f8bcc/t5x/losses.py#L25)
This PR also adds 'z_loss' as a training argument to the T5 Flax pre-training script.
If z_loss > 0, then an auxiliary loss equal to z_loss*log(z)^2
will be added to the cross entropy loss (z = softmax normalization constant).
The two uses of z_loss are:
1. To keep the logits from drifting too far from zero, which can cause
unacceptable roundoff errors in bfloat16.
2. To encourage the logits to be normalized log-probabilities.
While the z_loss function is only added to the t5 flax pretraining script, this loss function might be interesting for
other flax pre-training scripts. I did not test this.
Finally, there is a (currently unused) function `compute_weighted_cross_entropy` with z_loss and label smoothing,
which might be useful for other flax training scripts as well.
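For readers skimming this PR, here is a minimal sketch of the idea, following the T5X formulation linked above. Names and shapes are illustrative, not the PR's actual code:
```python
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def cross_entropy_with_z_loss(logits, onehot_targets, z_loss=1e-4):
    # log_z is the log of the softmax normalization constant z, per example.
    log_z = logsumexp(logits, axis=-1)
    log_softmax = logits - log_z[..., None]
    cross_entropy = -jnp.sum(onehot_targets * log_softmax, axis=-1)
    # The auxiliary term z_loss * log(z)^2 penalizes drifting logits and
    # keeps them close to normalized log-probabilities (stable in bfloat16).
    return cross_entropy + z_loss * jnp.square(log_z)
```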
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [*] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
No, but I tested with the T5 Flax Norwegian script, bfloat16, and z_loss set to 1e-4 (the setting used in the T5 gin configs).
In my own tests I've seen the following consistently:
With bfloat16 and without z_loss, the loss either diverges or converges on a higher plateau than training with float32.
With bfloat16 and z_loss set to 1e-4, the loss curves almost match those of float32 training.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18652/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18652",
"html_url": "https://github.com/huggingface/transformers/pull/18652",
"diff_url": "https://github.com/huggingface/transformers/pull/18652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18652.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18651
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18651/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18651/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18651/events
|
https://github.com/huggingface/transformers/pull/18651
| 1,340,358,875
|
PR_kwDOCUB6oc49QFt2
| 18,651
|
Generate: validate `model_kwargs` on TF (and catch typos in generate arguments)
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
TF version of https://github.com/huggingface/transformers/pull/18261
Adds `model_kwargs` validation to TF `generate`, which also catches typos in the arguments. See the PR above for more details and an example of the error message users will see.
Since TF had no dedicated file for `generate` tests, I took the liberty of creating one and moving some existing tests there (>70% of the diff is due to moving things around :) ). The test for this new check was also added there.
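As a rough illustration of the kind of typo this validation catches (the exact error message lives in the PR; this snippet is just a sketch):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="tf")

# "do_samples" is a typo for "do_sample"; before this PR it was silently
# ignored, after this PR generate() raises a ValueError naming the bad kwarg.
model.generate(**inputs, do_samples=True)
```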
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18651/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18651",
"html_url": "https://github.com/huggingface/transformers/pull/18651",
"diff_url": "https://github.com/huggingface/transformers/pull/18651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18651.patch",
"merged_at": 1662132326000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18650
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18650/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18650/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18650/events
|
https://github.com/huggingface/transformers/pull/18650
| 1,340,279,516
|
PR_kwDOCUB6oc49P0sR
| 18,650
|
Allow users to force TF availability
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah, additional nit: I'd document this somewhere.",
"@LysandreJik I think a lot of these envvars aren't documented anywhere - I can't see any documentation for USE_TF or USE_TORCH! Maybe we should make a separate docs PR with a list of envvars that `transformers` uses?",
"That would be fantastic :) \r\n\r\nThanks for your contribution, merging!"
] | 1,660
| 1,660
| 1,660
|
MEMBER
| null |
We have a user report that, with custom TensorFlow builds and package names, `_tf_available` can return `False` even if `import tensorflow` succeeds, because the user's package name isn't in the [allowed list](https://github.com/huggingface/transformers/blob/02b176c4ce14340d26d42825523f406959c6c202/src/transformers/utils/import_utils.py#L63L75).
This is quite niche, so I don't want to do anything that could affect other users and workflows, but I added a `FORCE_TF_AVAILABLE` envvar that will skip version checks and just make sure TF is treated as available. @sgugger WDYT, or is there a better solution?
Fixes #18642
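For context, this is roughly how a user with a custom TF build would opt in (a sketch of the intended usage, not the diff itself):
```python
import os

# Must be set before transformers is imported, so the import-time
# availability checks can see it.
os.environ["FORCE_TF_AVAILABLE"] = "1"

import transformers  # the custom TF package is now treated as available
```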
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18650/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18650",
"html_url": "https://github.com/huggingface/transformers/pull/18650",
"diff_url": "https://github.com/huggingface/transformers/pull/18650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18650.patch",
"merged_at": 1660806550000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18649
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18649/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18649/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18649/events
|
https://github.com/huggingface/transformers/issues/18649
| 1,340,163,025
|
I_kwDOCUB6oc5P4UPR
| 18,649
|
When resuming from checkpoint with Trainer using a streamed dataset, use the Datasets API to skip
|
{
"login": "sinking-point",
"id": 17532243,
"node_id": "MDQ6VXNlcjE3NTMyMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/17532243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinking-point",
"html_url": "https://github.com/sinking-point",
"followers_url": "https://api.github.com/users/sinking-point/followers",
"following_url": "https://api.github.com/users/sinking-point/following{/other_user}",
"gists_url": "https://api.github.com/users/sinking-point/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinking-point/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinking-point/subscriptions",
"organizations_url": "https://api.github.com/users/sinking-point/orgs",
"repos_url": "https://api.github.com/users/sinking-point/repos",
"events_url": "https://api.github.com/users/sinking-point/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinking-point/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"WDYT @lhoestq?",
"Unfortunately the `skip` function does the same. Though if you call `skip` before `map` it won't apply the preprocessing on the examples to skip and save time.\r\n\r\nTherefore I don't think it using `skip` would have a big impact here",
"Thanks for your insight @lhoestq . Is there a technical reason the function is implemented that way? I assume random access is supported, given that shuffling of enormous datasets is allowed and fast.",
"Random access is not supported for streaming datasets. Shuffling is approximate by shuffling the dataset shards order, and using a shuffle buffer, see the documentation here: https://huggingface.co/docs/datasets/v2.4.0/en/stream#shuffle",
"Would it be possible to skip whole shards when a large number of samples need to be skipped?",
"In the general case we don't know in advance how many examples there are per shard (it depends on the data file format). In many cases we would need some extra metadata somewhere that says how many examples each shard contain.\r\n\r\nFor example the C4 dataset is made out gzipped JSON Lines files - you don't know in advance how many examples each shard contain, because you need to uncompress the data and count the EOL.\r\n\r\nFor certain file formats like Parquet or Arrow however, the number of examples is known for free, as metadata included in the file itself. So maybe for those specific formats we could do something",
"Ah, this needs deeper changes than I thought then. Still, the metadata would be nice to have. It could even be generated lazily when people stream the dataset, to avoid a bit upfront cost. Alternatively, the streamed dataset client could locally keep a cache of how many samples each shard it's iterated through contains. I'd prefer the former, so it doesn't matter if the cache is cleared or you switch machines.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
### Feature request
Hugging Face Datasets has a feature where you can instantiate datasets in streaming mode, so you don't have to download the whole thing onto your machine. The API has a `skip` function. The Transformers Trainer doesn't use this; it just iterates through all the batches to be skipped.
I propose that Trainer checks whether the given dataset is a Datasets one in streaming mode, and if so, uses the `skip` function.
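A sketch of what this could look like, assuming a π€ Datasets streaming dataset (note that, as the comments above point out, `skip` itself currently still iterates under the hood):
```python
from datasets import load_dataset

# Streamed dataset: nothing is downloaded up front.
dataset = load_dataset("c4", "en", split="train", streaming=True)

# Proposal: when resuming from a checkpoint, let Trainer delegate to the
# Datasets API instead of manually iterating over every batch to be skipped.
resumed_dataset = dataset.skip(200_000)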
### Motivation
I've been using the C4 dataset in streaming mode because of its size. Whenever I resume from a checkpoint, it takes a long time to skip: around an hour for 200k batches. With this change, it should be effectively instant, which would save me a lot of time.
### Your contribution
I can make the change if no one else wants to. In which case, I'd like to be assured that the change will be reviewed and merged in a reasonable timeframe rather than being lost in the sea of pull requests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18649/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18648
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18648/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18648/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18648/events
|
https://github.com/huggingface/transformers/pull/18648
| 1,340,148,814
|
PR_kwDOCUB6oc49PYee
| 18,648
|
TF: Fix generation repetition penalty with XLA
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
MEMBER
| null |
# What does this PR do?
There was a dynamic shape being fetched as a static shape, causing issues from the 2nd generation iteration onwards.
Fixes #18630
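For readers unfamiliar with this failure mode, a minimal sketch of the pattern involved (illustrative only, not the actual diff):
```python
import tensorflow as tf

def current_length(input_ids: tf.Tensor) -> tf.Tensor:
    # Inside an XLA-compiled generation loop the sequence length grows each
    # iteration, so the static attribute input_ids.shape[1] can be None or
    # stale; tf.shape reads the dynamic shape at run time instead.
    return tf.shape(input_ids)[1]
```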
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18648/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18648",
"html_url": "https://github.com/huggingface/transformers/pull/18648",
"diff_url": "https://github.com/huggingface/transformers/pull/18648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18648.patch",
"merged_at": 1660653052000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18647
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18647/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18647/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18647/events
|
https://github.com/huggingface/transformers/pull/18647
| 1,340,084,589
|
PR_kwDOCUB6oc49PKeG
| 18,647
|
Fix cost condition in DetrHungarianMatcher and YolosHungarianMatcher to allow zero-cost
|
{
"login": "kongzii",
"id": 15619339,
"node_id": "MDQ6VXNlcjE1NjE5MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/15619339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kongzii",
"html_url": "https://github.com/kongzii",
"followers_url": "https://api.github.com/users/kongzii/followers",
"following_url": "https://api.github.com/users/kongzii/following{/other_user}",
"gists_url": "https://api.github.com/users/kongzii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kongzii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kongzii/subscriptions",
"organizations_url": "https://api.github.com/users/kongzii/orgs",
"repos_url": "https://api.github.com/users/kongzii/repos",
"events_url": "https://api.github.com/users/kongzii/events{/privacy}",
"received_events_url": "https://api.github.com/users/kongzii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks reasonable given the error message! cc @NielsRogge "
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the cost condition in DetrHungarianMatcher and YolosHungarianMatcher. In https://github.com/huggingface/transformers/pull/16720 a bug was introduced while switching from asserts to explicit conditions. Currently, any single zero cost results in a ValueError:
```python
if class_cost == 0 or bbox_cost == 0 or giou_cost == 0:
raise ValueError("All costs of the Matcher can't be 0")
```
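For reference, a minimal sketch of the corrected check, restoring the behavior of the original asserts (the authoritative change is in this PR's diff):
```python
# Only the degenerate case where *all* costs are zero should be rejected;
# any single zero cost (e.g. giou_cost == 0) is a valid configuration.
if class_cost == 0 and bbox_cost == 0 and giou_cost == 0:
    raise ValueError("All costs of the Matcher can't be 0")
```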
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @sgugger based on reviewers of previous PR https://github.com/huggingface/transformers/pull/16720.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18647/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18647",
"html_url": "https://github.com/huggingface/transformers/pull/18647",
"diff_url": "https://github.com/huggingface/transformers/pull/18647.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18647.patch",
"merged_at": 1661948939000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18646
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18646/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18646/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18646/events
|
https://github.com/huggingface/transformers/pull/18646
| 1,340,042,860
|
PR_kwDOCUB6oc49PBiB
| 18,646
|
[bnb] Small improvements on utils
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can confirm the tests pass!",
"so will there always be just one module not to convert?\r\n\r\nwon't it be safer to have modules instead and work with the list?",
"I have proposed a small refactoring that includes:\r\n\r\n- checking the list of modules to not convert instead of a single value. \r\n- changing an error message as it confused some user. Check: https://github.com/TimDettmers/bitsandbytes/issues/10 \r\n\r\nThe bnb slow tests are passing with this fix!",
"From https://github.com/huggingface/transformers/issues/18660 I also just added a commit to support having a custom list of the keys to ignore ",
"Thanks a lot @stas00 !\r\nThere is no rush at all for this PR, we can definitely wait for @sgugger before moving forward with it ",
"Can confirm the bnb slow tests are passing with the proposed fixes! Would love to have a final round of review πͺ \r\ncc @sgugger @stas00 ",
"Can confirm the slow tests pass after rebasing on `main`, will merge once it's green! π’ "
] | 1,660
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a small typo in `bitsandbytes.py`; it should address https://github.com/huggingface/blog/pull/463#discussion_r946067141
I will test it first before marking it as ready for review!
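A rough sketch of the refactor discussed in the comments above: accept a list of module names to keep in full precision rather than a single hard-coded value. The function name and signature here are assumed from the thread, not copied from the diff:
```python
import torch.nn as nn
import bitsandbytes as bnb

def replace_8bit_linear(model, threshold=6.0, modules_to_not_convert=("lm_head",)):
    for name, module in model.named_children():
        if list(module.children()):  # recurse into submodules
            replace_8bit_linear(module, threshold, modules_to_not_convert)
        if isinstance(module, nn.Linear) and name not in modules_to_not_convert:
            # Swap the fp16/fp32 linear layer for its int8 counterpart.
            setattr(
                model,
                name,
                bnb.nn.Linear8bitLt(
                    module.in_features,
                    module.out_features,
                    module.bias is not None,
                    has_fp16_weights=False,
                    threshold=threshold,
                ),
            )
    return model
```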
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18646/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18646",
"html_url": "https://github.com/huggingface/transformers/pull/18646",
"diff_url": "https://github.com/huggingface/transformers/pull/18646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18646.patch",
"merged_at": 1663239680000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18645
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18645/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18645/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18645/events
|
https://github.com/huggingface/transformers/pull/18645
| 1,340,039,928
|
PR_kwDOCUB6oc49PA6U
| 18,645
|
[BLOOM] Update doc with more details
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @cakiki ",
"_The documentation is not available anymore as the PR was closed or merged._",
"I would remove it, as it doesn't matter for the architecture, which these docs explain afaict? It's just an artefact of the data the models are trained on, hence it's arldy on the pretrained model & dataset READMEs. If someone were to release a BLOOM architecture model trained on different languages, this would be confusing imo.\r\n\r\nAlso should probably say `Pre-trained BLOOM models were officially released in the following sizes:`, as theoretically it's available in whatever version/size someone wants, just need to train it from scratch\r\n\r\n",
"Not sure if the model doc is restricted to the architecture.\r\n\r\n- `T5` has `## Example scripts` section\r\n- Some models include `training` and `generation` sections, for example, `t5` or `m2m_100`\r\n- `Marian` has a `## Naming` section\r\n\r\nBut I agree that we can probably just have something similar to `Marian` doc:\r\n\r\n```\r\nNote: The list of languages used in training can be found [here](link to model card page).\r\n```",
"Okay I agree with both points!\r\nIMO we can just specify that this list of languages is only relevant for bloom models stated on the doc, and for custom bloom models one should refer to the corresponding model card.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Addressing https://huggingface.co/bigscience/bloom/discussions/86
I think we should add the full list of training languages to the documentation, so that we can refer to it whenever a user has a question about the languages the model was trained on.
cc @ydshieh @Muennighoff
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18645/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18645/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18645",
"html_url": "https://github.com/huggingface/transformers/pull/18645",
"diff_url": "https://github.com/huggingface/transformers/pull/18645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18645.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18644
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18644/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18644/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18644/events
|
https://github.com/huggingface/transformers/pull/18644
| 1,339,963,673
|
PR_kwDOCUB6oc49Owve
| 18,644
|
Change scheduled CIs to use torch 1.12.1
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
To align with CircleCI tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18644/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18644",
"html_url": "https://github.com/huggingface/transformers/pull/18644",
"diff_url": "https://github.com/huggingface/transformers/pull/18644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18644.patch",
"merged_at": 1660650097000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18643
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18643/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18643/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18643/events
|
https://github.com/huggingface/transformers/issues/18643
| 1,339,936,376
|
I_kwDOCUB6oc5P3c54
| 18,643
|
`AttributeError: 'BertTokenizer' object has no attribute 'tokens_trie'`
|
{
"login": "roetezadi",
"id": 19832911,
"node_id": "MDQ6VXNlcjE5ODMyOTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/19832911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roetezadi",
"html_url": "https://github.com/roetezadi",
"followers_url": "https://api.github.com/users/roetezadi/followers",
"following_url": "https://api.github.com/users/roetezadi/following{/other_user}",
"gists_url": "https://api.github.com/users/roetezadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roetezadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roetezadi/subscriptions",
"organizations_url": "https://api.github.com/users/roetezadi/orgs",
"repos_url": "https://api.github.com/users/roetezadi/repos",
"events_url": "https://api.github.com/users/roetezadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/roetezadi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@Narsil is the one knows the best.\r\n\r\nThis is added in the PR #13220",
"@roetezadi How did you solve this ?"
] | 1,660
| 1,661
| 1,661
|
NONE
| null |
### System Info
Google colab
### Who can help?
Anyone
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi,
I have trained a model with `transformers==4.9.1`.
Now I want to run this saved model with the new version of transformers and I get this error: `AttributeError: 'BertTokenizer' object has no attribute 'tokens_trie'`.
Looking at the transformers code, `tokens_trie` has been added, which was not present in previous versions, if I am correct. How can I solve this problem from my side? And is there any possibility that this compatibility issue can be handled in newer versions of transformers?
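One possible workaround (an assumption on my side, not confirmed in this thread): rather than unpickling a tokenizer object serialized under 4.9.1, reload it through `from_pretrained` so that `__init__` runs under the new version and builds the missing attribute:
```python
from transformers import BertTokenizer

# "path/to/saved_tokenizer" is a placeholder for the directory produced by
# tokenizer.save_pretrained() under the old transformers version.
tokenizer = BertTokenizer.from_pretrained("path/to/saved_tokenizer")
```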
### Expected behavior
None
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18643/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18642
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18642/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18642/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18642/events
|
https://github.com/huggingface/transformers/issues/18642
| 1,339,933,934
|
I_kwDOCUB6oc5P3cTu
| 18,642
|
_tf_available for customized built tensorflow
|
{
"login": "kevint324",
"id": 8800468,
"node_id": "MDQ6VXNlcjg4MDA0Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8800468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevint324",
"html_url": "https://github.com/kevint324",
"followers_url": "https://api.github.com/users/kevint324/followers",
"following_url": "https://api.github.com/users/kevint324/following{/other_user}",
"gists_url": "https://api.github.com/users/kevint324/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevint324/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevint324/subscriptions",
"organizations_url": "https://api.github.com/users/kevint324/orgs",
"repos_url": "https://api.github.com/users/kevint324/repos",
"events_url": "https://api.github.com/users/kevint324/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevint324/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false
| null |
[] |
[
"Hi @kevint324 I think it (extending the list ) is fine if you would work with a specific `transformers` version. But it would be a bit tedious if you want to use newer versions constantly.\r\n\r\ncc @Rocketknight1 for the idea regarding `adding some runtime flexibility`.",
"Changed the tag to `Feature request` instead :-)",
"Hi @kevint324, I filed a PR that might resolve this, but I want to check with other maintainers that it's okay before I merge it. In the meantime, can you try it out? Just run `pip install git+https://github.com/huggingface/transformers.git@allow_force_tf_availability`, then set the environment variable `FORCE_TF_AVAILABLE=1` before running your code, and it should skip those checks now.",
"Yes, it works.\r\nThanks for the quick fix."
] | 1,660
| 1,660
| 1,660
|
NONE
| null |
### System Info
n/a
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
File "virtualenv_mlu/lib/python3.8/site-packages/transformers/pipelines/base.py", line 212, in infer_framework_load_model
raise RuntimeError(
RuntimeError: At least one of TensorFlow 2.0 or PyTorch should be installed. To install TensorFlow 2.0, read the instructions at https://www.tensorflow.org/install/ To install PyTorch, read the instructions at https://pytorch.or
```
### Expected behavior
https://github.com/huggingface/transformers/blob/02b176c4ce14340d26d42825523f406959c6c202/src/transformers/utils/import_utils.py#L63L75
I built a tensorflow-xxu package for our in-house accelerator and tried to run the transformers example.
I got a RuntimeError indicating TF is not available.
Currently `_tf_available` relies on a hard-coded candidate list.
I'm not sure if extending the existing list is a good idea.
Maybe adding some runtime flexibility would be better?
Thanks
Kevin
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18642/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18641
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18641/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18641/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18641/events
|
https://github.com/huggingface/transformers/issues/18641
| 1,339,707,251
|
I_kwDOCUB6oc5P2k9z
| 18,641
|
Add TF VideoMAE
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Please reopen the issue as I am working on it. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"It's taking more time as I am in a job switch. Please reopen it. I apologise for the inconvenience. "
] | 1,660
| 1,666
| null |
MEMBER
| null |
### Feature request
Add the [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) model in TensorFlow.
### Motivation
There's an evident scarcity of SoTA and easy-to-use video models in TensorFlow. I believe having VideoMAE in TensorFlow would greatly benefit the community.
### Your contribution
I am willing to contribute the model. Please assign it to me.
@amyeroberts possible to assign this to me?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18641/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18641/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18640
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18640/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18640/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18640/events
|
https://github.com/huggingface/transformers/pull/18640
| 1,339,691,022
|
PR_kwDOCUB6oc49N2_5
| 18,640
|
Finetune guide for semantic segmentation
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,662
| 1,662
|
MEMBER
| null |
This PR creates a finetune guide for semantic segmentation in the docs. Unlike previous finetune guides, this one will include:
* metrics for evaluation
* a section on how to use the model for inference (see the sketch after this list)
* an embedded Gradio demo
π§ To do:
- [x] create section for inference
- [ ] create Gradio demo
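As a rough illustration of the inference section (the checkpoint and the exact snippet in the guide may differ; this is a sketch):
```python
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
predicted_segmentation = logits.argmax(dim=1)[0]  # per-pixel class ids
```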
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18640/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18640",
"html_url": "https://github.com/huggingface/transformers/pull/18640",
"diff_url": "https://github.com/huggingface/transformers/pull/18640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18640.patch",
"merged_at": 1662146991000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18639
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18639/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18639/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18639/events
|
https://github.com/huggingface/transformers/issues/18639
| 1,339,632,343
|
I_kwDOCUB6oc5P2SrX
| 18,639
|
CLIP output doesn't match the official weight
|
{
"login": "xvjiarui",
"id": 18479688,
"node_id": "MDQ6VXNlcjE4NDc5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/18479688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xvjiarui",
"html_url": "https://github.com/xvjiarui",
"followers_url": "https://api.github.com/users/xvjiarui/followers",
"following_url": "https://api.github.com/users/xvjiarui/following{/other_user}",
"gists_url": "https://api.github.com/users/xvjiarui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xvjiarui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xvjiarui/subscriptions",
"organizations_url": "https://api.github.com/users/xvjiarui/orgs",
"repos_url": "https://api.github.com/users/xvjiarui/repos",
"events_url": "https://api.github.com/users/xvjiarui/events{/privacy}",
"received_events_url": "https://api.github.com/users/xvjiarui/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @xvjiarui, actually the output between original and hf is the same, but you compare with the wrong tensor. Firstly, we can compare the code between the original and hf (here we focus on the text features part).\r\n\r\n```\r\n# Original CLIP\r\ndef encode_image(self, image):\r\n return self.visual(image.type(self.dtype))\r\n\r\ndef encode_text(self, text):\r\n x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]\r\n\r\n x = x + self.positional_embedding.type(self.dtype)\r\n x = x.permute(1, 0, 2) # NLD -> LND\r\n x = self.transformer(x)\r\n x = x.permute(1, 0, 2) # LND -> NLD\r\n x = self.ln_final(x).type(self.dtype)\r\n\r\n # x.shape = [batch_size, n_ctx, transformer.width]\r\n # take features from the eot embedding (eot_token is the highest number in each sequence)\r\n x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection\r\n\r\n return x\r\n\r\ndef forward(self, image, text):\r\n image_features = self.encode_image(image)\r\n text_features = self.encode_text(text)\r\n\r\n # normalized features\r\n image_features = image_features / image_features.norm(dim=1, keepdim=True)\r\n text_features = text_features / text_features.norm(dim=1, keepdim=True)\r\n\r\n # cosine similarity as logits\r\n logit_scale = self.logit_scale.exp()\r\n logits_per_image = logit_scale * image_features @ text_features.t()\r\n logits_per_text = logits_per_image.t()\r\n\r\n # shape = [global_batch_size, global_batch_size]\r\n return logits_per_image, logits_per_text\r\n```\r\n```\r\n# HF CLIP\r\ndef forward(\r\n self,\r\n input_ids: Optional[torch.LongTensor] = None,\r\n pixel_values: Optional[torch.FloatTensor] = None,\r\n attention_mask: Optional[torch.Tensor] = None,\r\n position_ids: Optional[torch.LongTensor] = None,\r\n return_loss: Optional[bool] = None,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n) -> Union[Tuple, CLIPOutput]:\r\n # Use CLIP model's config for some fields (if specified) instead of those of vision & text components.\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n\r\n vision_outputs = self.vision_model(\r\n pixel_values=pixel_values,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n text_outputs = self.text_model(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n position_ids=position_ids,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n image_embeds = vision_outputs[1]\r\n image_embeds = self.visual_projection(image_embeds)\r\n\r\n text_embeds = text_outputs[1]\r\n text_embeds = self.text_projection(text_embeds)\r\n\r\n # normalized features\r\n image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)\r\n text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)\r\n\r\n # cosine similarity as logits\r\n logit_scale = self.logit_scale.exp()\r\n logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale\r\n logits_per_image = logits_per_text.T\r\n\r\n loss = None\r\n if return_loss:\r\n loss = clip_loss(logits_per_text)\r\n\r\n if not 
return_dict:\r\n output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs)\r\n return ((loss,) + output) if loss is not None else output\r\n\r\n return CLIPOutput(\r\n loss=loss,\r\n logits_per_image=logits_per_image,\r\n logits_per_text=logits_per_text,\r\n text_embeds=text_embeds,\r\n image_embeds=image_embeds,\r\n text_model_output=text_outputs,\r\n vision_model_output=vision_outputs,\r\n )\r\n```\r\nyou may found out that the output of original clip `encode_text` method should equal to `text_embeds = self.text_projection(text_embeds)` in the hf repo. You can modify your script like this one to check it.\r\n```\r\nimport torch\r\nimport clip\r\nfrom PIL import Image\r\nimport requests\r\nimport numpy as np\r\n\r\ndevice = \"cpu\"\r\nmodel, preprocess = clip.load(\"ViT-B/32\", device=device)\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nimage = preprocess(image).unsqueeze(0).to(device)\r\ntext = clip.tokenize([\"a photo of a cat\", \"a photo of a dog\"]).to(device)\r\n\r\nwith torch.no_grad():\r\n clip_image_features = model.encode_image(image)\r\n clip_text_features = model.encode_text(text)\r\n\r\n logits_per_image, logits_per_text = model(image, text)\r\n org_probs = logits_per_image.softmax(dim=-1).cpu().numpy()\r\n\r\nprint(\"CLIP Label probs:\", org_probs) # prints: [[0.9927937 0.00421068 0.00299572]]\r\n\r\nfrom transformers import CLIPProcessor, CLIPModel\r\n\r\nmodel = CLIPModel.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nprocessor = CLIPProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\n\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninputs = processor(\r\n text=[\"a photo of a cat\", \"a photo of a dog\"],\r\n images=image,\r\n return_tensors=\"pt\",\r\n padding=True,\r\n)\r\noutputs = model(**inputs)\r\nlogits_per_image = outputs.logits_per_image\r\nhf_probs = logits_per_image.softmax(dim=1).detach().cpu().numpy()\r\n\r\nprint(\"HF Label probs:\", hf_probs) # prints: [[0.9927937 0.00421068 0.00299572]]\r\n\r\nprint(np.allclose(org_probs, hf_probs))\r\nprint(\r\n torch.allclose(\r\n clip_text_features,\r\n model.text_projection(outputs.text_model_output.pooler_output).detach(),\r\n atol=1e-5,\r\n )\r\n)\r\n\r\n```",
"Hi @aRyBernAlTEglOTRO thanks for your reply. \r\n\r\nI see. That makes sense."
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
### System Info
I am using transformers 4.20.1.
I found that the official CLIP model outputs don't match the Hugging Face ones.
### Who can help?
@NielsRogge @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To install clip, run `pip install git+https://github.com/openai/CLIP.git`
```python
import torch
import clip
from PIL import Image
import requests
device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image = preprocess(image).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)
with torch.no_grad():
clip_image_features = model.encode_image(image)
clip_text_features = model.encode_text(text)
logits_per_image, logits_per_text = model(image, text)
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print("CLIP Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]]
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
print("HF Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]]
print(clip_text_features.shape)
print(outputs.text_model_output.pooler_output.shape)
print((clip_text_features - outputs.text_model_output.pooler_output).abs().max())
assert torch.allclose(clip_text_features, outputs.text_model_output.pooler_output, atol=1e-5)
```
### Expected behavior
The features and probabilities should be the same.
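As worked out in the comments above, the mismatch comes from comparing against the wrong tensor: the original CLIP's `encode_text` already applies the final text projection, so the comparable Hugging Face tensor is the projected pooled output rather than the raw `pooler_output`. A minimal check, reusing the variables from the script above:
```python
# The projected pooled output matches the original CLIP text features; the raw
# pooler_output does not, because it lacks the text projection.
projected_text_features = model.text_projection(outputs.text_model_output.pooler_output)
assert torch.allclose(clip_text_features, projected_text_features.detach(), atol=1e-5)
```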
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18639/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18638
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18638/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18638/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18638/events
|
https://github.com/huggingface/transformers/pull/18638
| 1,339,552,041
|
PR_kwDOCUB6oc49NaCs
| 18,638
|
Update run_clm_no_trainer.py and run_mlm_no_trainer.py
|
{
"login": "zhoutang776",
"id": 47708118,
"node_id": "MDQ6VXNlcjQ3NzA4MTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/47708118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhoutang776",
"html_url": "https://github.com/zhoutang776",
"followers_url": "https://api.github.com/users/zhoutang776/followers",
"following_url": "https://api.github.com/users/zhoutang776/following{/other_user}",
"gists_url": "https://api.github.com/users/zhoutang776/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhoutang776/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhoutang776/subscriptions",
"organizations_url": "https://api.github.com/users/zhoutang776/orgs",
"repos_url": "https://api.github.com/users/zhoutang776/repos",
"events_url": "https://api.github.com/users/zhoutang776/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhoutang776/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @muellerzr, would you like to take a look at this while Sylvain is away?"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an issue in selecting the `no_decay` parameters: we need to exclude `"layer_norm.weight"`, not `"LayerNorm.weight"` (see the sketch below).
Fixes an issue where `resume_step` is not constructed properly when the user resumes training from a checkpoint with `gradient_accumulation_steps != 1`.
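A minimal sketch of the first fix, assuming the `model` and parsed `args` of the no-trainer example scripts (illustrative, not the exact diff): the exclusion list must use the parameter names the model actually exposes.
```python
# "LayerNorm.weight" never matches models whose LayerNorm parameters are named
# "layer_norm.weight", so those parameters wrongly received weight decay.
no_decay = ["bias", "layer_norm.weight"]  # was ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
```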
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18638/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18638",
"html_url": "https://github.com/huggingface/transformers/pull/18638",
"diff_url": "https://github.com/huggingface/transformers/pull/18638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18638.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18637
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18637/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18637/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18637/events
|
https://github.com/huggingface/transformers/pull/18637
| 1,339,526,811
|
PR_kwDOCUB6oc49NUkt
| 18,637
|
Update run_translation_no_trainer.py
|
{
"login": "zhoutang776",
"id": 47708118,
"node_id": "MDQ6VXNlcjQ3NzA4MTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/47708118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhoutang776",
"html_url": "https://github.com/zhoutang776",
"followers_url": "https://api.github.com/users/zhoutang776/followers",
"following_url": "https://api.github.com/users/zhoutang776/following{/other_user}",
"gists_url": "https://api.github.com/users/zhoutang776/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhoutang776/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhoutang776/subscriptions",
"organizations_url": "https://api.github.com/users/zhoutang776/orgs",
"repos_url": "https://api.github.com/users/zhoutang776/repos",
"events_url": "https://api.github.com/users/zhoutang776/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhoutang776/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @muellerzr would you like to take a look at this while Sylvain is away?",
"@zhoutang776 I believe these two are the same, no? https://github.com/huggingface/transformers/pull/18638",
"> @zhoutang776 I believe these two are the same, no? #18638\r\n\r\nYes, I guess other examples files have the same problem but I haven't checked other codes besides these three files. ",
"Okay! Will just merge this one then π Thanks for the bugfix and nice find!"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an issue in selecting the `no_decay` parameters: we need to exclude `"layer_norm.weight"`, not `"LayerNorm.weight"`.
Fixes an issue where `resume_step` is not constructed properly when the user resumes training from a checkpoint with `gradient_accumulation_steps != 1` (see the sketch below).
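A minimal sketch of the `resume_step` fix (hypothetical variable names, not the exact diff): checkpoints named `step_{N}` record `N` optimizer updates, and each update consumes `gradient_accumulation_steps` batches, so the resume position must be scaled accordingly.
```python
# N counts optimizer updates, not batches; scale by the accumulation factor to
# recover how many batches of the dataloader were already consumed.
completed_steps = int(checkpoint_name.replace("step_", ""))
resume_step = completed_steps * args.gradient_accumulation_steps
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
```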
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18637/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18637",
"html_url": "https://github.com/huggingface/transformers/pull/18637",
"diff_url": "https://github.com/huggingface/transformers/pull/18637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18637.patch",
"merged_at": 1660670757000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18636
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18636/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18636/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18636/events
|
https://github.com/huggingface/transformers/issues/18636
| 1,339,409,931
|
I_kwDOCUB6oc5P1cYL
| 18,636
|
Support output_scores in XLA TF generate
|
{
"login": "gyin94",
"id": 67664443,
"node_id": "MDQ6VXNlcjY3NjY0NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyin94",
"html_url": "https://github.com/gyin94",
"followers_url": "https://api.github.com/users/gyin94/followers",
"following_url": "https://api.github.com/users/gyin94/following{/other_user}",
"gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyin94/subscriptions",
"organizations_url": "https://api.github.com/users/gyin94/orgs",
"repos_url": "https://api.github.com/users/gyin94/repos",
"events_url": "https://api.github.com/users/gyin94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyin94/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante @sgugger ",
"Hey @rossbucky π The support for returning scores with XLA is in our plans. Like other XLA-related changes, it requires rewriting that part of the code to work with static shapes, as opposed to lists of tensors that grow as generate runs. \r\n\r\nThe next round of `generate` changes will focus on correctness (e.g. adding informative error messages). This feature will come right after that -- I'm keeping this issue open to track it publicly. ",
"Hi, just curious if there is any update on this issue? Or, are there any known alternatives for easily getting confidence scores w/ XLA enabled, particularly for greedy decoding? It seems beam_search outputs a `sequences_score` when `return_dict_in_generate` is `True`, but don't see anything similar for greedy decoding.",
"No updates :) I have yet to build the appropriate piping to update and return those variables"
] | 1,660
| 1,680
| null |
NONE
| null |
### Feature request
Support output_scores in XLA TF generate.
### Motivation
Scores are critical and widely used with generation models in downstream applications, serving as a confidence signal for thresholding or ranking. However, they are not yet supported by XLA TF `generate`.
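For reference, a minimal sketch of what already works in eager mode; the request is for the same options to work when `generate` is XLA-compiled (e.g. wrapped in `tf.function(jit_compile=True)`):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer(["The quick brown"], return_tensors="tf")

# Eager-mode generate can already return per-step scores.
outputs = model.generate(
    **inputs, max_new_tokens=8, output_scores=True, return_dict_in_generate=True
)
print(len(outputs.scores))  # one logits tensor per generated token
```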
### Your contribution
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18636/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18635
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18635/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18635/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18635/events
|
https://github.com/huggingface/transformers/issues/18635
| 1,339,386,290
|
I_kwDOCUB6oc5P1Wmy
| 18,635
|
Passing optimizer to Trainer constructor does not work
|
{
"login": "quantitative-technologies",
"id": 29150871,
"node_id": "MDQ6VXNlcjI5MTUwODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29150871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quantitative-technologies",
"html_url": "https://github.com/quantitative-technologies",
"followers_url": "https://api.github.com/users/quantitative-technologies/followers",
"following_url": "https://api.github.com/users/quantitative-technologies/following{/other_user}",
"gists_url": "https://api.github.com/users/quantitative-technologies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quantitative-technologies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quantitative-technologies/subscriptions",
"organizations_url": "https://api.github.com/users/quantitative-technologies/orgs",
"repos_url": "https://api.github.com/users/quantitative-technologies/repos",
"events_url": "https://api.github.com/users/quantitative-technologies/events{/privacy}",
"received_events_url": "https://api.github.com/users/quantitative-technologies/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @muellerzr, would you like to take a look at this while Sylvain is on leave?",
"@quantitative-technologies your issue here is how you have your if/else setup. I believe by reinstantiating an optimizer always even if we don't use it in the trainer, the model gets linked to that optimizer instead (or too?), and as a result you aren't training well. \r\n\r\nHere's my refactor of your code, and below shows that when doing `PASS_OPTIMIZER_TO_TRAINER` = `True` or `False`, I get the exact same results (bar the timing ever so slightly)\r\n\r\n```python\r\nimport numpy as np\r\nimport site\r\nimport torch\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments, set_seed\r\nfrom transformers.trainer_pt_utils import get_parameter_names\r\n\r\nPASS_OPTIMIZER_TO_TRAINER = True\r\n\r\nMODEL_NAME = 'albert-large-v2'\r\nTASK = 'rte'\r\nMAX_SEQ_LENGTH = 128\r\nEPOCHS = 1\r\nLEARNING_RATE = 2e-5\r\nSEED = 10000\r\nOPTIMIZER = 'adamw_torch'\r\nOUTPUT_DIR = 'output'\r\n\r\n\r\ntrain_args = TrainingArguments(\r\n num_train_epochs=EPOCHS, \r\n learning_rate=LEARNING_RATE,\r\n seed=SEED,\r\n optim=OPTIMIZER,\r\n output_dir=OUTPUT_DIR,\r\n overwrite_output_dir=True,\r\n evaluation_strategy='epoch',\r\n do_eval=True,\r\n full_determinism=True\r\n)\r\n\r\nset_seed(SEED)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\r\nraw_datasets = load_dataset(\"glue\", TASK)\r\nmetric = load_metric(\"glue\", TASK)\r\n\r\n\r\ndef compute_metrics(p):\r\n preds = p.predictions\r\n preds = np.argmax(p.predictions, axis=1)\r\n return metric.compute(predictions=preds, references=p.label_ids)\r\n \r\ndef preprocess_function(examples):\r\n # Tokenize the texts\r\n args = (\r\n (examples['sentence1'], examples['sentence2'])\r\n )\r\n return tokenizer(*args, padding=\"max_length\", max_length=MAX_SEQ_LENGTH, truncation=True)\r\n\r\nraw_datasets = raw_datasets.map(\r\n preprocess_function,\r\n batched=True\r\n)\r\n\r\n\r\ntrain_dataset = raw_datasets[\"train\"]\r\neval_dataset = raw_datasets[\"validation\"]\r\n\r\nif not PASS_OPTIMIZER_TO_TRAINER:\r\n trainer = Trainer(\r\n model=model, \r\n args=train_args, \r\n train_dataset=train_dataset, \r\n eval_dataset=eval_dataset, \r\n compute_metrics=compute_metrics, \r\n tokenizer=tokenizer\r\n)\r\n\r\nelse:\r\n # Create adamw_torch optimizer manually\r\n decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm])\r\n decay_parameters = [name for name in decay_parameters if \"bias\" not in name]\r\n optimizer_grouped_parameters = [\r\n {\r\n \"params\": [p for n, p in model.named_parameters() if n in decay_parameters],\r\n \"weight_decay\": train_args.weight_decay,\r\n },\r\n {\r\n \"params\": [p for n, p in model.named_parameters() if n not in decay_parameters],\r\n \"weight_decay\": 0.0,\r\n },\r\n ]\r\n optimizer = torch.optim.AdamW(\r\n optimizer_grouped_parameters,\r\n lr=train_args.learning_rate,\r\n betas=(train_args.adam_beta1, train_args.adam_beta2),\r\n eps=train_args.adam_epsilon\r\n )\r\n\r\n trainer = Trainer(\r\n model=model, \r\n args=train_args, \r\n train_dataset=train_dataset, \r\n eval_dataset=eval_dataset, \r\n compute_metrics=compute_metrics, \r\n tokenizer=tokenizer,\r\n optimizers=(optimizer, None)\r\n )\r\n\r\ntrainer.train()\r\n```\r\n\r\nWith passing: \r\n```python\r\n{'epoch': 1.0,\r\n 'train_loss': 0.5830265436417017,\r\n 'train_runtime': 206.1904,\r\n 'train_samples_per_second': 12.076,\r\n 'train_steps_per_second': 1.513}\r\n```\r\nWithout 
passing:\r\n```python\r\n{'epoch': 1.0,\r\n 'train_loss': 0.5830265436417017,\r\n 'train_runtime': 205.4122,\r\n 'train_samples_per_second': 12.122,\r\n 'train_steps_per_second': 1.519}\r\n```",
"@muellerzr I don't believe that was the issue. My if/else structure was convoluted because I was trying to make an additional point besides the bug. \r\n\r\nNow when I run the exact code you sent, I am still seeing the original problem:\r\n\r\n```\r\n{'train_runtime': 50.4719, \r\n 'train_samples_per_second': 49.334, \r\n 'train_steps_per_second': 6.182, \r\n 'train_loss': 0.7272015596047426, \r\n 'epoch': 1.0}\r\n```\r\n\r\nSo it is not learning. Also mine is running ~4x faster than yours, though we may be using different hardware.\r\n\r\nI am using a Colab instance with a TPU (using a single-core, i.e. not distributed for the purpose of the bug report). \r\n\r\nCould the TPU be an issue here? Can you test it out on a TPU?",
"@quantitative-technologies I could recreate your issue once I had torch-xla installed. Looking into it",
"@quantitative-technologies the issue is described in this xla issue: https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988\r\n\r\nEssentially you need to place the model on the device yourself first, then create the optimizer. After doing this I got the exact same results. \r\n\r\n*Super* subtle bug here. \r\n\r\nTo do so, add the following lines to your code:\r\n\r\n```python\r\nimport torch_xla.core.xla_model as xm\r\n\r\n# In the if/else of to create the optimizer yourself\r\n # Create adamw_torch optimizer manually\r\n model = model.to(xm.xla_device()) # <- We need to move the model to the device *before* creating the optimizer\r\n decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm])\r\n decay_parameters = [name for name in decay_parameters if \"bias\" not in name]\r\n ...\r\n```\r\n",
"> @quantitative-technologies I could recreate your issue once I had torch-xla installed. Looking into it\r\n\r\nRight, I should have mentioned that `torch-xla` was installed in the report.",
"> @quantitative-technologies the issue is described in this xla issue: [pytorch/xla#3675 (comment)](https://github.com/pytorch/xla/issues/3675#issuecomment-1171702988)\r\n> \r\n> Essentially you need to place the model on the device yourself first, then create the optimizer. After doing this I got the exact same results.\r\n\r\nOK, I see. \r\n\r\nI was impatient and made my own solution, by subclassing `Trainer` with an `optimizers_init` function argument that creates the optimizer from the `Trainer`'s model and the `args`. This resolved the issue, since `Trainer` places the model for you.\r\n\r\nI'm not sure if this is perhaps an improved design. Actually, having an `lr_scheduler_init` definitely makes more sense -- I didn't implement it yet though. This is because the `Trainer` does the calculation for the number of training steps which is needed to build the `lr_regularizer` anyhow. \r\n\r\nIf there is interest updating the `Trainer` I can submit a push request. Here is my code for my subclass:\r\n\r\n```\r\nclass TrainerOptimizerInit(Trainer):\r\n \"\"\"\r\n Args:\r\n optimizers_init (`Tuple[Callable[[Union[PreTrainedModel, nn.Module], TrainingArguments], torch.optim.Optimizer], \r\n torch.optim.lr_scheduler.LambdaLR]`, *optional*): A tuple containing (1) a function that is\r\n used to create an optimizer from the `model` and `args`, and (2) the scheduler to use. Will default to an \r\n instance of [`AdamW`] on your model and a scheduler given by [`get_linear_schedule_with_warmup`] controlled \r\n by `args`.\r\n \"\"\"\r\n def __init__(\r\n self,\r\n model: Union[PreTrainedModel, nn.Module] = None,\r\n args: TrainingArguments = None,\r\n data_collator: Optional[DataCollator] = None,\r\n train_dataset: Optional[Dataset] = None,\r\n eval_dataset: Optional[Dataset] = None,\r\n tokenizer: Optional[PreTrainedTokenizerBase] = None,\r\n model_init: Callable[[], PreTrainedModel] = None,\r\n compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,\r\n callbacks: Optional[List[TrainerCallback]] = None,\r\n optimizers_init: Tuple[Callable[[Union[PreTrainedModel, nn.Module], TrainingArguments], torch.optim.Optimizer], \r\n torch.optim.lr_scheduler.LambdaLR] = (None, None),\r\n preprocess_logits_for_metrics: Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = None,\r\n ):\r\n super().__init__(model=model, \r\n args=args, \r\n data_collator=data_collator, \r\n train_dataset=train_dataset, \r\n eval_dataset=eval_dataset, \r\n tokenizer=tokenizer, \r\n model_init=model_init, \r\n compute_metrics=compute_metrics, \r\n callbacks=callbacks,\r\n preprocess_logits_for_metrics=preprocess_logits_for_metrics)\r\n\r\n self.optimizer_init, self.lr_scheduler = optimizers_init\r\n\r\n def create_optimizer(self):\r\n \"\"\"\r\n Setup the optimizer.\r\n\r\n We provide a reasonable default that works well. If you want to use something else, you can subclass and override \r\n this method in a subclass.\r\n \"\"\"\r\n opt_model = self.model_wrapped if is_sagemaker_mp_enabled() else self.model\r\n\r\n if self.optimizer is None:\r\n if self.optimizer_init is None:\r\n # fall back to original behaviour\r\n return super().create_optimizer()\r\n\r\n self.optimizer = self.optimizer_init(opt_model, self.args)\r\n\r\n if is_sagemaker_mp_enabled():\r\n self.optimizer = smp.DistributedOptimizer(self.optimizer)\r\n\r\n return self.optimizer\r\n```\r\n",
"That'd be up to @sgugger, who is OOF until the 30th on holiday π I'll make sure he sees this though when he's back!",
"We don't really want to add other init functions. For any specific behavior, users should subclass the `create_optimizer` function directly.",
"I am confused. Should the optimizer be passed to TrainingArguments or to the Trainer?",
"@sgugger @muellerzr \r\nis there any documentation available around it, I have requirement to use Torch optim scheduler such as cosine with restarts i am unable to find a way to pass it, we have been given gette fucntions but how to pass to the trainer..\r\nany eg. will be help ful.\r\n",
"You can pass the `optimiser` as an argument to the trainer. See the documentation [here](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.optimizers) as well as the following: \r\n```python \r\n def create_optimizer_and_scheduler(self, num_training_steps: int):\r\n \"\"\"\r\n Setup the optimizer and the learning rate scheduler.\r\n\r\n We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the\r\n Trainer's init through `optimizers`, or subclass and override this method (or `create_optimizer` and/or\r\n `create_scheduler`) in a subclass.\r\n \"\"\"\r\n self.create_optimizer()\r\n if IS_SAGEMAKER_MP_POST_1_10 and smp.state.cfg.fp16:\r\n # If smp >= 1.10 and fp16 is enabled, we unwrap the optimizer\r\n optimizer = self.optimizer.optimizer\r\n else:\r\n optimizer = self.optimizer\r\n self.create_scheduler(num_training_steps=num_training_steps, optimizer=optimizer)\r\n```\r\n Thus subclassing `create_optimizer` will help you create which ever optimizer. Does that make sense for you? "
] | 1,660
| 1,695
| 1,661
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: TPU (used on the platform, nothing specific in the script)
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following script, which passes an optimizer object to `Trainer`.
```
import numpy as np
import site
import torch
from datasets import load_dataset, load_metric
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments, set_seed
from transformers.trainer_pt_utils import get_parameter_names
PASS_OPTIMIZER_TO_TRAINER = True
MODEL_NAME = 'albert-large-v2'
TASK = 'rte'
MAX_SEQ_LENGTH = 128
EPOCHS = 1
LEARNING_RATE = 2e-5
SEED = 10000
OPTIMIZER = 'adamw_torch'
OUTPUT_DIR = 'output'
train_args = TrainingArguments(num_train_epochs=EPOCHS,
learning_rate=LEARNING_RATE,
seed=SEED,
optim=OPTIMIZER,
output_dir=OUTPUT_DIR,
overwrite_output_dir=True,
evaluation_strategy='epoch',
do_eval=True,
full_determinism=True)
set_seed(SEED)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
raw_datasets = load_dataset("glue", TASK)
metric = load_metric("glue", TASK)
def compute_metrics(p):
preds = p.predictions
preds = np.argmax(p.predictions, axis=1)
return metric.compute(predictions=preds, references=p.label_ids)
def preprocess_function(examples):
# Tokenize the texts
args = (
(examples['sentence1'], examples['sentence2'])
)
return tokenizer(*args, padding="max_length", max_length=MAX_SEQ_LENGTH, truncation=True)
raw_datasets = raw_datasets.map(
preprocess_function,
batched=True)
train_dataset = raw_datasets["train"]
eval_dataset = raw_datasets["validation"]
if not PASS_OPTIMIZER_TO_TRAINER:
trainer = Trainer(
model=model,
args=train_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_metrics,
tokenizer=tokenizer)
# Create adamw_torch optimizer manually
decay_parameters = get_parameter_names(model, [torch.nn.LayerNorm])
decay_parameters = [name for name in decay_parameters if "bias" not in name]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if n in decay_parameters],
"weight_decay": train_args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if n not in decay_parameters],
"weight_decay": 0.0,
},
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters,
lr=train_args.learning_rate,
betas=(train_args.adam_beta1, train_args.adam_beta2),
eps=train_args.adam_epsilon)
if PASS_OPTIMIZER_TO_TRAINER:
trainer = Trainer(
model=model,
args=train_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
optimizers=(optimizer, None))
else:
#trainer.optimizer = optimizer
pass
trainer.train()
```
The model fails to train. Also, training runs at about 2x the normal speed.
If the variable `PASS_OPTIMIZER_TO_TRAINER` is set to `False` instead, the `Trainer` creates its own optimizer based on `train_args`, which should be identical to the manually created one; in that case, training succeeds.
I'm guessing that after `model` is passed into the `Trainer` constructor it gets modified, so the parameters held by `optimizer` are no longer valid. This is supported by the fact that, in the script (with `PASS_OPTIMIZER_TO_TRAINER = False`), uncommenting the `trainer.optimizer = optimizer` line near the end has no effect, indicating that `optimizer` is already the same as `trainer.optimizer`.
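One way to test this guess, reusing the script's `optimizer` and `trainer` (a hypothetical sanity check, not part of the original script):
```python
# If the Trainer moved the model to another device (e.g. the TPU) after the
# optimizer captured its parameters, the optimizer is updating stale tensors.
opt_param = optimizer.param_groups[0]["params"][0]
model_param = next(trainer.model.parameters())
print(opt_param.device, model_param.device)
```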
### Expected behavior
The script should work properly as is. It should have identical results whether `PASS_OPTIMIZER_TO_TRAINER` is `True` or `False`.
If my guess is correct, then I don't see how the `optimizers` argument of `Trainer` can accept a pre-built optimizer object. But that creates issues for anyone wanting to use `Trainer` with a custom optimizer.
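The workaround identified in the comments above is to move the model to the XLA device before constructing the optimizer, so the optimizer captures the on-device parameters (sketch, assuming `torch_xla` is installed):
```python
import torch
import torch_xla.core.xla_model as xm

# Move the model first; only then build the optimizer from its parameters.
model = model.to(xm.xla_device())
optimizer = torch.optim.AdamW(model.parameters(), lr=train_args.learning_rate)
```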
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18635/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18635/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18634
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18634/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18634/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18634/events
|
https://github.com/huggingface/transformers/pull/18634
| 1,339,292,319
|
PR_kwDOCUB6oc49MmTk
| 18,634
|
Longt5 fix link in docs
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18634/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18634",
"html_url": "https://github.com/huggingface/transformers/pull/18634",
"diff_url": "https://github.com/huggingface/transformers/pull/18634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18634.patch",
"merged_at": 1660663246000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18633
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18633/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18633/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18633/events
|
https://github.com/huggingface/transformers/pull/18633
| 1,339,226,306
|
PR_kwDOCUB6oc49MYTZ
| 18,633
|
Fix typo in add_new_model_like.py
|
{
"login": "mathemakitten",
"id": 31600291,
"node_id": "MDQ6VXNlcjMxNjAwMjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/31600291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathemakitten",
"html_url": "https://github.com/mathemakitten",
"followers_url": "https://api.github.com/users/mathemakitten/followers",
"following_url": "https://api.github.com/users/mathemakitten/following{/other_user}",
"gists_url": "https://api.github.com/users/mathemakitten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathemakitten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathemakitten/subscriptions",
"organizations_url": "https://api.github.com/users/mathemakitten/orgs",
"repos_url": "https://api.github.com/users/mathemakitten/repos",
"events_url": "https://api.github.com/users/mathemakitten/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathemakitten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @mathemakitten, let's try to solve the CircleCI issue! Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"Hi @LysandreJik, thanks for the link β I refreshed the user permissions in the CircleCI web interface and tests seem to trigger as expected now for my PRs!",
"Glad to hear it! Would you mind pushing a new commit (empty or not) to this branch so that it runs the check here and we can merge?"
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
(feel free to ignore; trying to see if this triggers CI successfully as it's not working for me on other HF repos)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18633/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18633",
"html_url": "https://github.com/huggingface/transformers/pull/18633",
"diff_url": "https://github.com/huggingface/transformers/pull/18633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18633.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18632
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18632/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18632/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18632/events
|
https://github.com/huggingface/transformers/pull/18632
| 1,339,212,222
|
PR_kwDOCUB6oc49MVXk
| 18,632
|
Examples: add Bloom support for token classification
|
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @younesbelkada ",
"Hi @younesbelkada ,\r\n\r\nsure no problem, here are the steps (fresh environment):\r\n\r\n```bash\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers/\r\npip3 install -e .\r\npip3 install seqeval evaluate\r\ncd examples/pytorch/token-classification\r\npython3 run_ner.py --model_name_or_path bigscience/bloom-560m --dataset_name conll2003 --output_dir ./output-bloom-560m --do_train --do_eval --do_predict\r\n```\r\n\r\nThen the following error message is thrown:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"run_ner.py\", line 630, in <module>\r\n main()\r\n File \"run_ner.py\", line 464, in main\r\n train_dataset = train_dataset.map(\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2387, in map\r\n return self._map_single(\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 524, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py\", line 480, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2775, in _map_single\r\n batch = apply_function_on_filtered_inputs(\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2655, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2347, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"run_ner.py\", line 422, in tokenize_and_align_labels\r\n tokenized_inputs = tokenizer(\r\n File \"/workspace/transformers/src/transformers/tokenization_utils_base.py\", line 2475, in __call__\r\n encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)\r\n File \"/workspace/transformers/src/transformers/tokenization_utils_base.py\", line 2561, in _call_one\r\n return self.batch_encode_plus(\r\n File \"/workspace/transformers/src/transformers/tokenization_utils_base.py\", line 2752, in batch_encode_plus\r\n return self._batch_encode_plus(\r\n File \"/workspace/transformers/src/transformers/models/bloom/tokenization_bloom_fast.py\", line 140, in _batch_encode_plus\r\n raise Exception(\r\nException: You need to instantiate BloomTokenizerFast with add_prefix_space=True to use it with pretokenized inputs.\r\n```"
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
Hi,
this PR adds support for fine-tuning Bloom models for token classification tasks.
After trying out the current example, I got an error from the tokenizer saying that `add_prefix_space=True` needs to be set. With this PR, Bloom can be used with the token classification example for PyTorch (there is no TensorFlow/Flax support yet); see the sketch below.
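A minimal sketch of the required tokenizer setup, assuming the pre-tokenized (word-split) inputs that `run_ner.py` feeds in:
```python
from transformers import AutoTokenizer

# Bloom's fast tokenizer rejects pre-tokenized inputs unless it is instantiated
# with add_prefix_space=True.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m", add_prefix_space=True)
encoding = tokenizer(["EU", "rejects", "German", "call"], is_split_into_words=True)
print(encoding.word_ids())
```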
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18632/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18632",
"html_url": "https://github.com/huggingface/transformers/pull/18632",
"diff_url": "https://github.com/huggingface/transformers/pull/18632.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18632.patch",
"merged_at": 1660722658000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18631
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18631/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18631/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18631/events
|
https://github.com/huggingface/transformers/pull/18631
| 1,338,998,539
|
PR_kwDOCUB6oc49Ln3X
| 18,631
|
[bnb] Minor modifications
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You could also read the rendered doc (not GitHub page) below\r\n\r\nhttps://moon-ci-docs.huggingface.co/docs/transformers/pr_18631/en/perf_train_gpu_one#efficient-training-on-a-single-gpu\r\n(toward the end)",
"@ydshieh do you think we should keep the version `0.31.8` on the Dockerfile since we know it's the version that works π€ ",
"> @ydshieh do you think we should keep the version `0.31.8` on the Dockerfile since we know it's the version that works π€\r\n\r\nSo far let's not add this. If the newer versions constantly break things, we can ping a particular versions.",
"Great works for me!",
"Great thanks! \r\nDo you think that we can merge it for now? I am especially interested in seeing if the changes in the `Dockerfile` would change anything (theoretically no, but we are using `bitsandbytes==0.31.5` on the Dockerfile and the latest verison is the `0.31.8`). For the rest since it's related to documentation I think we can always re-iterate.\r\nPerhaps I can open a separate PR just to change the Dockerfile?",
"If you feel it's complete and want to merge it, go for it. We can always improve it later.",
"Thanks a lot!\r\nJust went through a final pass, looks good to me!",
"@younesbelkada, I have just realized that this PR added to the wrong perf doc. This new feature is inference only and thus ideally should go into the inference doc and not training. Probably the many-gpu one. https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_many.mdx\r\n\r\nThanks.",
"No problem at all and sorry for the inconvenience, will re-open a PR for that!",
"I have addressed a PR at: https://github.com/huggingface/transformers/pull/18671 ! "
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
Addresses the request from https://github.com/huggingface/transformers/pull/17901 to refactor the documentation a little and fixes small details in the bnb PR:
- Fixed the documentation
- Added a useful troubleshooting section for more efficient debugging
- Updated the `Dockerfile` with the correct `bitsandbytes` version
cc @stas00 @ydshieh
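For context, a minimal sketch of the feature the updated docs cover (mixed 8-bit loading; assumes `bitsandbytes` and `accelerate` are installed and a supported GPU is available):
```python
from transformers import AutoModelForCausalLM

# Weights are loaded in 8-bit via LLM.int8(); device_map="auto" lets accelerate
# place the modules on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m", device_map="auto", load_in_8bit=True
)
```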
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18631/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18631",
"html_url": "https://github.com/huggingface/transformers/pull/18631",
"diff_url": "https://github.com/huggingface/transformers/pull/18631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18631.patch",
"merged_at": 1660690090000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18630
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18630/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18630/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18630/events
|
https://github.com/huggingface/transformers/issues/18630
| 1,338,968,725
|
I_kwDOCUB6oc5PzwqV
| 18,630
|
XLA generation error with repetition_penalty
|
{
"login": "AlekseyKorshuk",
"id": 48794610,
"node_id": "MDQ6VXNlcjQ4Nzk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/48794610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlekseyKorshuk",
"html_url": "https://github.com/AlekseyKorshuk",
"followers_url": "https://api.github.com/users/AlekseyKorshuk/followers",
"following_url": "https://api.github.com/users/AlekseyKorshuk/following{/other_user}",
"gists_url": "https://api.github.com/users/AlekseyKorshuk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlekseyKorshuk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlekseyKorshuk/subscriptions",
"organizations_url": "https://api.github.com/users/AlekseyKorshuk/orgs",
"repos_url": "https://api.github.com/users/AlekseyKorshuk/repos",
"events_url": "https://api.github.com/users/AlekseyKorshuk/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlekseyKorshuk/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @AlekseyKorshuk π Thank you for the reproducible script! \r\n\r\nI have never seen this exception, so I'll have to dig into it. Expect further information soon :)",
"@AlekseyKorshuk The PR linked above fixes the issue :) After it is merged, you'll have to install `transformers` from `main` to get its benefits! (which should be no issue for you, since you're using the `dev0` version)",
"Thank you, @gante π€ "
] | 1,660
| 1,664
| 1,660
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce error (adapted code from https://huggingface.co/blog/tf-xla-generate):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
generation_kwargs = {
"max_new_tokens": 64,
'eos_token_id': 198,
'do_sample': True,
'temperature': 0.72,
'top_k': 0,
'top_p': 0.725,
'repetition_penalty': 1.13,
}
tokenizer = AutoTokenizer.from_pretrained(
"gpt2", padding_side="left", pad_token="</s>"
)
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
model.config.pad_token_id = model.config.eos_token_id
input_text = "repetition_penalty error"
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_text, return_tensors="tf")
print("model.generate")
model.generate(**tokenized_input, **generation_kwargs)
print("xla_generate")
xla_generate(**tokenized_input, **generation_kwargs) # error here
```
Error:
```
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 604, in generate *
seed=model_kwargs.pop("seed", None),
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 1651, in _generate *
input_ids,
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_utils.py", line 2475, in sample_body_fn *
next_tokens_scores = logits_processor(generated, next_token_logits, cur_len)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 94, in __call__ *
scores = processor(input_ids, scores, cur_len)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 278, in __call__ *
score_penalties = self._create_score_penalties(input_ids[:, :cur_len], scores)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation_tf_logits_process.py", line 265, in _create_score_penalties *
indexable_prev_input_ids = tf.concat(
ValueError: None values not supported.
```
By setting `repetition_penalty` to 1.0 or by removing this parameter everything works fine.
### Expected behavior
The expected result is the work of text generation using `repetition_penalty` without any errors, taking into account the use of XLA.
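Until the fix is merged, a minimal workaround sketch, continuing the reproduction script above; based on the observation that removing the parameter works, it simply drops `repetition_penalty`, trading away the penalty:
```python
# Hedged workaround sketch: remove repetition_penalty so XLA tracing succeeds.
generation_kwargs.pop("repetition_penalty", None)
xla_generate = tf.function(model.generate, jit_compile=True)
xla_generate(**tokenized_input, **generation_kwargs)  # no longer raises ValueError
```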
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18630/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18629
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18629/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18629/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18629/events
|
https://github.com/huggingface/transformers/issues/18629
| 1,338,963,483
|
I_kwDOCUB6oc5PzvYb
| 18,629
|
OWL-ViT memory usage grows linearly with each prediction
|
{
"login": "zduey",
"id": 3272567,
"node_id": "MDQ6VXNlcjMyNzI1Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3272567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zduey",
"html_url": "https://github.com/zduey",
"followers_url": "https://api.github.com/users/zduey/followers",
"following_url": "https://api.github.com/users/zduey/following{/other_user}",
"gists_url": "https://api.github.com/users/zduey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zduey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zduey/subscriptions",
"organizations_url": "https://api.github.com/users/zduey/orgs",
"repos_url": "https://api.github.com/users/zduey/repos",
"events_url": "https://api.github.com/users/zduey/events{/privacy}",
"received_events_url": "https://api.github.com/users/zduey/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @alaradirik as well :)",
"Thank you @zduey, I'm looking into this! ",
"@alaradirik - You are likely further along isolating the problem than I am, so please ignore if this is a distraction. The issue seems specific to repeated calls to the `OwlViTForObjectDetection` forward pass. \r\n\r\nThe following results in the same memory usage pattern as the initial snippet. It differs from the original mainly in that the processor is not called as part of the loop.\r\n\r\n```python\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import OwlViTForObjectDetection, OwlViTProcessor\r\nfrom tqdm import trange\r\n\r\ntext_prompts = [\"a photo of a cat\", \"a photo of a dog\"]\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\ninputs = processor(text=[text_prompts], images=image, return_tensors=\"pt\")\r\n\r\nfor _ in trange(15):\r\n _ = model(**inputs)\r\n```\r\n\r\n\r\nWhen I've stepped through the code more granularly, the jump in memory usage happens [here](https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/owlvit/modeling_owlvit.py#L1259), which is just a call to `OwlVitModel`. However, calls to `OwlVitModel` directly seem to have constant memory usage:\r\n\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import OwlViTProcessor, OwlViTModel\r\n\r\nmodel = OwlViTModel.from_pretrained(\"google/owlvit-base-patch32\")\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninputs = processor(\r\n text=[[\"a photo of a cat\", \"a photo of a dog\"]], images=image, return_tensors=\"pt\"\r\n)\r\n\r\nfor _ in range(50):\r\n _ = model(**inputs)\r\n```\r\n\r\n\r\nSince I've run that separately without the same memory increase, the issue seems to be with how it gets called from `OWLViTForObjectDetection`, but I haven't been able to track it down. My initial guess (based on [this issue](https://discuss.pytorch.org/t/why-my-memory-keeps-increasing-at-every-iteration/118137)) was to look for a place where a tensor (one that is still attached to the computation graph) is being placed into a standard python list.\r\n",
"Hi @zduey, thank you for the detailed analysis!\r\n\r\nI'm more or less at the same point as you are, I confirmed that `OwlViTModel` is not the cause of the memory leak. I tracked the leak down to the calls to `image_text_embedder` method of `OwlViTForObjectDetection` and I'm working on the fix. I'm aiming to open a PR to fix this by tomorrow.",
"Just an update on the issue - this should be fixed when this [PR](https://github.com/huggingface/transformers/pull/18734) is merged. The memory leak was due to `OwlViTForObjectDetection` model making calls to `OwlViTModel`s non-forward methods and hence keeping track of all computational graphs.\r\n\r\nHere is the code I used to confirm the fix:\r\n```\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\nfrom transformers import OwlViTModel, OwlViTForObjectDetection, OwlViTProcessor\r\n\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\n\r\ntext_prompts = [\"a photo of a cat\", \"a photo of a dog\"]\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninputs = processor(text=[text_prompts], images=image, return_tensors=\"pt\")\r\n\r\nfor i in range(50):\r\n with torch.no_grad():\r\n _ = model(**inputs)\r\n```\r\n\r\n\r\nI will close this issue once the PR is merged.\r\n",
"We all learn in the hard way, @alaradirik . Just a few weeks ago, @NielsRogge and me had the same issue π\r\nWhenever there is PyTorch memory issue -> check `with torch.no_grad()` first.\r\n ",
"@ydshieh so true! Took me a deep dive into PyTorch docs and a while to debug it",
"Thanks so much @alaradirik for getting together a fix for this so quickly!",
"Closing this issue as the fix PR is merged",
"I don't think this bug is fixed. \r\nI was trying to use the model without torch.no_grad and the memory still grows linearly with every prediction I make, so something is still leaking",
"Hi @AlonZolfi You should use `with` `torch.no_grad`, not `without`",
"I understand this is a workaround. However, I need the gradients during inference for another calculation so I cannot use it with torch.no_grad",
"In this case, you will have to manage the gradients by yourself: when to save them, when to delete some of them, etc. It's out of the scope of the the library's GitHub issue pages however."
] | 1,660
| 1,687
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.11
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from torchvision.datasets import FakeData
from torchvision.transforms.functional import pil_to_tensor
from transformers import OwlViTProcessor, OwlViTForObjectDetection
text_prompts = ["a photo of a cat", "a photo of a dog"]
dataset = FakeData(size=50, image_size=(3, 28, 28), transform=pil_to_tensor)
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
target_sizes = torch.Tensor([[28, 28]])
for image, _ in dataset:
inputs = processor(text=[text_prompts], images=image, return_tensors="pt")
outputs = model(**inputs)
_ = processor.post_process(outputs=outputs, target_sizes=target_sizes)[0]
```
### Expected behavior
I expect to be able to generate predictions from the OwlViTForObjectDetection model in a loop without memory usage increasing by ~1GB on each call to the model (line 15). Below, I've included a plot of memory usage over time. I profiled the code using `memory_profiler` to determine that it is the call to the model (not the processing or post processing) that seems to be the culprit.

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18629/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18629/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18628
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18628/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18628/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18628/events
|
https://github.com/huggingface/transformers/pull/18628
| 1,338,955,899
|
PR_kwDOCUB6oc49LenT
| 18,628
|
Adds GroupViT to models exportable with ONNX
|
{
"login": "unography",
"id": 5240449,
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unography",
"html_url": "https://github.com/unography",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"organizations_url": "https://api.github.com/users/unography/orgs",
"repos_url": "https://api.github.com/users/unography/repos",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"received_events_url": "https://api.github.com/users/unography/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@regisss hopefully you don't have to make any changes this time to make the tests pass!",
"Pinging @sgugger for final approval",
"Feel free to merge if you approve @lewtun",
"@lewtun Can you take a quick look at this PR and merge it when you approve? :slightly_smiling_face: "
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
Adds GroupViT to models exportable with ONNX
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18628/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18628",
"html_url": "https://github.com/huggingface/transformers/pull/18628",
"diff_url": "https://github.com/huggingface/transformers/pull/18628.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18628.patch",
"merged_at": 1661862695000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18627
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18627/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18627/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18627/events
|
https://github.com/huggingface/transformers/issues/18627
| 1,338,800,397
|
I_kwDOCUB6oc5PzHkN
| 18,627
|
can't run the wav2vec2-base-960 example
|
{
"login": "mehrdad78",
"id": 46048846,
"node_id": "MDQ6VXNlcjQ2MDQ4ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46048846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mehrdad78",
"html_url": "https://github.com/mehrdad78",
"followers_url": "https://api.github.com/users/mehrdad78/followers",
"following_url": "https://api.github.com/users/mehrdad78/following{/other_user}",
"gists_url": "https://api.github.com/users/mehrdad78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mehrdad78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mehrdad78/subscriptions",
"organizations_url": "https://api.github.com/users/mehrdad78/orgs",
"repos_url": "https://api.github.com/users/mehrdad78/repos",
"events_url": "https://api.github.com/users/mehrdad78/events{/privacy}",
"received_events_url": "https://api.github.com/users/mehrdad78/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi, @mehrdad78, I made some modifications to your script. But it should work the same, below is the modified script:\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import Wav2Vec2ForCTC, Wav2Vec2Processor\r\nimport torch\r\nfrom jiwer import wer\r\n\r\nlibrispeech_eval = load_dataset(\r\n \"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\"\r\n)\r\n\r\nmodel = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n\r\n\r\ndef map_to_pred(batch):\r\n arrays = [item[\"array\"] for item in batch[\"audio\"]]\r\n inputs = processor(arrays, return_tensors=\"pt\", padding=\"longest\")\r\n logits = model(**inputs).logits\r\n predicted_ids = torch.argmax(logits, dim=-1)\r\n transcription = processor.batch_decode(predicted_ids)\r\n batch[\"transcription\"] = transcription\r\n return batch\r\n\r\n\r\nresult = librispeech_eval.map(\r\n map_to_pred, batched=True, batch_size=4, remove_columns=[\"audio\"]\r\n)\r\n\r\nprint(\"WER:\", wer(result[\"text\"], result[\"transcription\"]))\r\n```\r\nThe output of this script should be:\r\n```\r\nWER: 0.059130434782608696\r\n```\r\nI think you need to change the dataset from\r\n```\r\nlibrispeech_eval = load_dataset(\r\n \"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\"\r\n)\r\n```\r\nto\r\n```\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\")\r\n```\r\nwhich aligns with your original script.\r\n",
"> Hi, @mehrdad78, I made some modifications to your script. But it should work the same, below is the modified script:\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor\r\n> import torch\r\n> from jiwer import wer\r\n> \r\n> librispeech_eval = load_dataset(\r\n> \"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\"\r\n> )\r\n> \r\n> model = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n> processor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n> \r\n> \r\n> def map_to_pred(batch):\r\n> arrays = [item[\"array\"] for item in batch[\"audio\"]]\r\n> inputs = processor(arrays, return_tensors=\"pt\", padding=\"longest\")\r\n> logits = model(**inputs).logits\r\n> predicted_ids = torch.argmax(logits, dim=-1)\r\n> transcription = processor.batch_decode(predicted_ids)\r\n> batch[\"transcription\"] = transcription\r\n> return batch\r\n> \r\n> \r\n> result = librispeech_eval.map(\r\n> map_to_pred, batched=True, batch_size=4, remove_columns=[\"audio\"]\r\n> )\r\n> \r\n> print(\"WER:\", wer(result[\"text\"], result[\"transcription\"]))\r\n> ```\r\n> \r\n> The output of this script should be:\r\n> \r\n> ```\r\n> WER: 0.059130434782608696\r\n> ```\r\n> \r\n> I think you need to change the dataset from\r\n> \r\n> ```\r\n> librispeech_eval = load_dataset(\r\n> \"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\"\r\n> )\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```\r\n> librispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\")\r\n> ```\r\n> \r\n> which aligns with your original script.\r\n\r\nThank you"
] | 1,660
| 1,661
| 1,661
|
NONE
| null |
### System Info
I use Google Colab and the newest version of transformers.
I wanted to run this example but got the error: `list indices must be integers or slices, not str`
raised by this line:
`**result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])**`
```
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
### Who can help?
@patrickvonplaten @anton-l @sanchi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
just run the example.
### Expected behavior
The script should calculate the WER without raising an error.
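For reference, the apparent cause is that with `batched=True` the `audio` column arrives as a list of dicts, so `batch["audio"]["array"]` indexes a list with a string key. A minimal sketch of the corrected mapping function, reusing the names from the script above:
```python
def map_to_pred(batch):
    # with batched=True, batch["audio"] is a list of dicts, one per example
    arrays = [item["array"] for item in batch["audio"]]
    input_values = processor(arrays, return_tensors="pt", padding="longest").input_values
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch
```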
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18627/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18626
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18626/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18626/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18626/events
|
https://github.com/huggingface/transformers/issues/18626
| 1,338,799,923
|
I_kwDOCUB6oc5PzHcz
| 18,626
|
Parallelization for OPT model
|
{
"login": "lifan-yuan",
"id": 68536405,
"node_id": "MDQ6VXNlcjY4NTM2NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/68536405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lifan-yuan",
"html_url": "https://github.com/lifan-yuan",
"followers_url": "https://api.github.com/users/lifan-yuan/followers",
"following_url": "https://api.github.com/users/lifan-yuan/following{/other_user}",
"gists_url": "https://api.github.com/users/lifan-yuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lifan-yuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lifan-yuan/subscriptions",
"organizations_url": "https://api.github.com/users/lifan-yuan/orgs",
"repos_url": "https://api.github.com/users/lifan-yuan/repos",
"events_url": "https://api.github.com/users/lifan-yuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lifan-yuan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I have the same question since directly tuning models with this size (OPT) on one single device is infeasible. It would be great to have the parallelization mechanism implemented for OPT models. Many thanks! ",
"See this conversation which might help out: https://huggingface.co/facebook/opt-66b/discussions/6",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Lyandrejik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import OPTForCausalLM
model = OPTForCausalLM.from_pretrained("facebook/opt-6.7b").cuda()
model.parallelize()
### Expected behavior
Hi,
I am loading the OPT model. Due to its large number of parameters, I cannot fit the whole model on one CUDA device. Therefore, I tried to parallelize OPT like T5, using `model.parallelize()`. However, I found that there is no parallelization mechanism for the OPT model yet.
Can you please implement the parallelization mechanism for OPT models? Or what can I do to load and run a large OPT model?
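For reference, `parallelize()` only ever existed for a handful of models (e.g. T5 and GPT-2). A minimal sketch of an alternative route, assuming `accelerate` is installed: `device_map="auto"` shards the weights across the available GPUs (and CPU RAM if needed) at load time.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires `pip install accelerate`; layers are dispatched across devices automatically.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(0)  # inputs on cuda:0
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```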
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18626/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18625
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18625/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18625/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18625/events
|
https://github.com/huggingface/transformers/pull/18625
| 1,338,773,496
|
PR_kwDOCUB6oc49K2jG
| 18,625
|
Create pipeline_tutorial.mdx german docs
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patrickvonplaten, can you take a look at this?"
] | 1,660
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
this PR is another step of progress toward https://github.com/huggingface/transformers/issues/18564
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18625/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18625",
"html_url": "https://github.com/huggingface/transformers/pull/18625",
"diff_url": "https://github.com/huggingface/transformers/pull/18625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18625.patch",
"merged_at": 1662019079000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18624
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18624/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18624/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18624/events
|
https://github.com/huggingface/transformers/issues/18624
| 1,338,750,640
|
I_kwDOCUB6oc5Py7aw
| 18,624
|
Community Integration: Colossal-AI for Large AI Models
|
{
"login": "binmakeswell",
"id": 61670638,
"node_id": "MDQ6VXNlcjYxNjcwNjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/61670638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binmakeswell",
"html_url": "https://github.com/binmakeswell",
"followers_url": "https://api.github.com/users/binmakeswell/followers",
"following_url": "https://api.github.com/users/binmakeswell/following{/other_user}",
"gists_url": "https://api.github.com/users/binmakeswell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binmakeswell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binmakeswell/subscriptions",
"organizations_url": "https://api.github.com/users/binmakeswell/orgs",
"repos_url": "https://api.github.com/users/binmakeswell/repos",
"events_url": "https://api.github.com/users/binmakeswell/events{/privacy}",
"received_events_url": "https://api.github.com/users/binmakeswell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If you have any difficulties or concerns, please let me know.\r\nWe can have further discussion about them, thanks. :-)",
"@stas00 \r\nseems much better than https://github.com/huggingface/transformers/issues/17392",
"I haven't had a change to read on Colossal-AI yet, why do you believe it's much better based on your research, @flozi00? I did notice that it suggests the integration of PatrickStar's functionality.\r\n\r\nCAI appears to be its own eco-system - not sure how easy it'd be to integrate with our eco-system.",
"https://github.com/hpcaitech/ColossalAI-Examples/blob/757514d2b1501d3530777cdf567f0a18063acf2d/image/resnet/train.py#L82-L111\r\n\r\nIn terms of code, it looks very similar to a normal pytorch training loop\r\nDid not had a deep look into the CAI code itself, focused on integration compitability to existing code\r\nto me it looks like you don't have to deal with the integration of patrickstar since everything is handled by CAI\r\nthe dependencies are also manageable\r\n\r\nI already noticed some time ago, that is was for a range of time in the trends of paperswithcode\r\n\r\nThe benchmarks looks pretty nice on the first take, but are a little bit confusing too.\r\nhttps://github.com/hpcaitech/ColossalAI#gpt-2\r\nFor RAM, Model size and throughput comparison are different techniques used (pytorch, deepspeed, megatron), did not checked if its only cherry picking or really does not matter which one to use\r\n\r\nIn any case, I think it's not bad to test alternatives to deepspeed.\r\nAt first glance, the integration into existing pytorch code looks feasible without major problems.\r\nAlso, with the expertise of both organizations, the integration could be done without much trouble for a single one, with CAI offering to help with the integration \"We are very appreciated if we could build the integration with you to benefit both of our users\".",
"Thank you for sharing your insights, @flozi00! \r\n\r\nI read their paper and I'm not quite sure of what type of integration is proposed here. Unlike Deepspeed which is meant to be integrated with the user code, CAI seems to be a standalone solution.\r\n\r\nOne of the biggest issues with any parallelism proposals (other than DDP) is that they all require rewriting the model's code, which with 100+ models and growing in our arsenal would be prohibitively expensive. Therefore we always welcome automated solutions like Deepspeed which require no changes whatsoever to most models and sometimes a small tweak for some peculiar models.\r\n\r\nIt's definitely worth exploring all the different versions of TP (2/2.5/3D) mentioned in the paper, but we need this automated and not manually rewritten.\r\n\r\nThe paper briefly mentions PP, but as we all know this one definitely requires a complete rewrite of the model for most frameworks.\r\n\r\nSo again let's ask a very concrete question - other than being part of the HF ecosystem what is the vision for the proposed integration? \r\n\r\nWe already have 2 trainer loop systems (HF Trainer and Accelerate) and we won't want to maintain a 3rd one.\r\n\r\nDo you need to inject something into the `modeling_utils.py` to better support CAI?\r\n\r\nDo you propose to rewrite the models to support?\r\n\r\nPerhaps let's take one HF Transformers model of your choice and tell us what would you like to do with it to have it run on CAI? This would be more practical.\r\n\r\nand specifically to your interest @flozi00 - yes, I hear you like the advanced memory utilization proposed in PatrickStar and CAI suggests to have integrated that functionality. \r\n\r\nI hope my commentary was constructive, we are definitely open for good improvements to our tools. It's just I'm weary to add yet another tool unless a clear advantage and ease of integration can be shown. ",
"Also, let's ping @hyunwoongko - Kevin, I know you have studied many frameworks while building https://github.com/tunib-ai/oslo - have you by chance researched [Colossal-AI](https://github.com/hpcaitech/ColossalAI) on your journey? If you did, would you kindly share a few insights if you have any? I know you were cherry picking the best parts from many systems in addition to your own innovations.",
"I'm sorry to admit that I didn't think of the backwards compatibility, totally forgot about that point, sorry.\r\n\r\nI focused mainly on the integration in the trainer and did not include the now very many architectures and weights.\r\n\r\nMaybe CAI has an idea to automate that ?\r\nWhat about the integration to lightning, did they had discussed that point too ?\r\n\r\nI have some ideas in mind but that would be more part of CAI itself or third party tools, about finding JIT methods to convert the required model parts, instead of the HF integration.",
"> I'm sorry to admit that I didn't think of the backwards compatibility, totally forgot about that point, sorry.\r\n> \r\n> I focused mainly on the integration in the trainer and did not include the now very many architectures and weights.\r\n\r\nNo harm done. This is totally understandable - the HF transformers eco-system has been becoming more and more complex so often it's far from trivial to add yet another component to it.\r\n\r\nWe are super welcoming solutions that can automate performance enhancements (like torchdynamo - see below).\r\n\r\n> Maybe CAI has an idea to automate that ? What about the integration to lightning, did they had discussed that point too ?\r\n\r\nPL is a training framework/loop, last I looked they didn't have the model library and were using transformers, so they don't need to deal with modeling.\r\n\r\n> I have some ideas in mind but that would be more part of CAI itself or third party tools, about finding JIT methods to convert the required model parts, instead of the HF integration.\r\n\r\nthere is already work being done on that with torchdynamo/nvfuser - it's not fully stable yet, but shows some impressive speed ups (and lower memory usage) for converting normal pytorch code to fused kernels - but this is a different dimension to parallelism and advanced memory management systems. It's definitely not a replacement for parallelism, as it can save 2x memory, or provide a 2x speed up, but it's far from enough for 100B+ models.\r\n\r\nPlease see the HF integration details here:\r\nhttps://huggingface.co/docs/transformers/main/en/perf_train_gpu_one#inference-with-torchdynamo\r\n",
"Hi, we drafted a [pull request](https://github.com/Lightning-AI/lightning/pull/14224) which intergrates ColossalAI to lightning. And here are exmaples and benchmark https://github.com/hpcaitech/ColossalAI-Pytorch-lightning. We have impletemented ZeRO-DP with chunk-based memory management and heterogeneous memory management. I think this is not hard to intergrate to HF. Besides, we are working on auto parallelism. I believe we can use TP/PP without modifying model in the future.",
"OK, so at the moment you're proposing to integrate CAI for:\r\n1. its ZeRO-DP with chunk-based memory management and heterogeneous memory management. This is something that Deepspeed is lacking at the moment (and if I understand correctly the technology comes from PatrickStar)\r\n2. down the road for auto-parallelism\r\n\r\n@sgugger, should this perhaps go straight into `accelerate`?\r\n\r\n(Sylvain is on vacation, so please let's wait a bit for him to be back and advise on how to best to proceed.)\r\n ",
"We'll probably need to duplicate the integration in the Trainer and Accelerate for now, since the Trainer does not depend on Accelerate.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
NONE
| null |
### Feature request
Dear Hugging Face Team,
My name is Yongbin Li. I am part of [Colossal-AI](https://github.com/hpcaitech/ColossalAI) Team.
Thanks for your previous [invitation](https://github.com/hpcaitech/ColossalAI/issues/396) for the Colossal-AI org to join Hugging Face. We are happy to share our founder's [blog](https://twitter.com/HPCAITech/status/1547041583337394176) about Hugging Face.
We are thinking about further collaboration, e.g. integrating Colossal-AI into Hugging Face to help your community members use large AI models in a more efficient and easier manner.
For example, we can democratize its access to all your users in the same way as you did with DeepSpeed.
https://huggingface.co/docs/transformers/v4.21.0/en/main_classes/deepspeed
### Motivation
We believe the democratization of large AI models is also very helpful for Hugging Face members. We would greatly appreciate building the integration with you to benefit both of our user bases.
Actually, we are working on similar integrations with Meta OPT ([done](https://github.com/facebookresearch/metaseq#using-opt-with-colossal-ai)), PyTorch Lightning ([in progress](https://github.com/hpcaitech/ColossalAI/issues/1330)), etc.
### Your contribution
We can provide any help you need in this cooperation for free. Actually, we have reached a preliminary agreement with your team members omar, lysandre, and julien via email (ybl@hpcaitech.com) and look forward to your further reply.
Feel free to reach out to me on Hugging Face Discord. My username is billy2022. We can discuss more details with other colleagues in a private group.
Thank you very much.
Best regards,
Yongbin Li, Chief Marketing Officer, HPC-AI Tech
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18624/reactions",
"total_count": 18,
"+1": 6,
"-1": 0,
"laugh": 3,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 3
}
|
https://api.github.com/repos/huggingface/transformers/issues/18624/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18623
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18623/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18623/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18623/events
|
https://github.com/huggingface/transformers/issues/18623
| 1,338,748,120
|
I_kwDOCUB6oc5Py6zY
| 18,623
|
`local_files_only=True` not work
|
{
"login": "huchinlp",
"id": 40781986,
"node_id": "MDQ6VXNlcjQwNzgxOTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/40781986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huchinlp",
"html_url": "https://github.com/huchinlp",
"followers_url": "https://api.github.com/users/huchinlp/followers",
"following_url": "https://api.github.com/users/huchinlp/following{/other_user}",
"gists_url": "https://api.github.com/users/huchinlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huchinlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huchinlp/subscriptions",
"organizations_url": "https://api.github.com/users/huchinlp/orgs",
"repos_url": "https://api.github.com/users/huchinlp/repos",
"events_url": "https://api.github.com/users/huchinlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/huchinlp/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"how did you solve it?"
] | 1,660
| 1,681
| 1,661
|
NONE
| null |
### System Info
torch 1.12.1
transformers 4.21.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have downloaded the pretrained weights and the model worked well.
I am confused by the long loading time (~25s on an SSD) when using the `from_pretrained` API, so I set `local_files_only=True` to disable connections.
But I found that the function still makes an HTTP request, which takes many seconds on my desktop without Internet access.
Here is my log file:
> File "/disk1/fewshot/anaconda3/envs/pet/lib/python3.9/site-packages/transformers/utils/hub.py", line 284, in cached_path
> output_path = get_from_cache(
> File "/disk1/fewshot/anaconda3/envs/pet/lib/python3.9/site-packages/transformers/utils/hub.py", line 501, in get_from_cache
> r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
### Expected behavior
When `local_files_only=True` is set, the function should not attempt any connections to the website.
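A minimal sketch of fully offline loading, assuming the weights are already cached: besides `local_files_only=True`, setting the `TRANSFORMERS_OFFLINE=1` environment variable before importing `transformers` disables outgoing requests entirely.
```python
import os

os.environ["TRANSFORMERS_OFFLINE"] = "1"  # must be set before the import below

from transformers import AutoModel, AutoTokenizer

# Both calls resolve from the local cache; no HEAD request should be issued.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", local_files_only=True)
model = AutoModel.from_pretrained("bert-base-cased", local_files_only=True)
```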
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18623/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18622
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18622/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18622/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18622/events
|
https://github.com/huggingface/transformers/issues/18622
| 1,338,675,194
|
I_kwDOCUB6oc5Pyo_6
| 18,622
|
variable name:_name = "label" if "label" in features[0].keys() else "labels" when training custom NER
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
I am trying to run custom NER on my data using offset values, following this link: << https://huggingface.co/course/chapter7/2 >>
I keep getting the error
**_name = "label" if "label" in features[0].keys() else "labels"
AttributeError: 'tokenizers.Encoding' object has no attribute 'keys'**
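For context, the traceback means the data collator received raw `tokenizers.Encoding` objects instead of dict-like features. A minimal sketch of a possible fix (names hypothetical), converting the batch encoding into one plain dict per example before handing it to `DataCollatorForTokenClassification`:
```python
# Hypothetical sketch: the collator calls features[0].keys(), so every
# feature must be a plain dict rather than a tokenizers.Encoding.
features = [
    {key: tokenized_inputs[key][i] for key in ("input_ids", "attention_mask", "labels")}
    for i in range(len(tokenized_inputs["input_ids"]))
]
```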
**DATA BEFORE tokenize_and_align_labels FUNCTIONS**
```
{'texts': ['WASHINGTON USA WA DRIVER LICENSE BESSETTE Lamma 4d DL 73235766 9 Class AM to Iss 22/03/2021 Ab Exp 07130/2021 DOB 2/28/21 1 BESSETTE 2 GERALD 8 6930 NE Grandview Blvd, keyport, WA 86494 073076 12 Restrictions A 9a End P 16 Hgt 5'-04" 15 Sex F 18 Eyes BLU 5 DD 73235766900000000000 Gerald Bessette', ],
 'tag_names': [
[
{'start': 281, 'end': 296, 'tag': 'PERSON_NAME', 'text': 'Gerald Bessette'},
{'start': 135, 'end': 141, 'tag': 'FIRST_NAME', 'text': 'GERALD'},
{'start': 124, 'end': 122, 'tag': 'LAST_NAME', 'text': 'BESSETTE'},
{'start': 81, 'end': 81, 'tag': 'ISSUE_DATE', 'text': '22/03/2021'},
{'start': 99, 'end': 109, 'tag': 'EXPIRY_DATE', 'text': '07130/2021'},
{'start': 114, 'end': 121, 'tag': 'DATE_OF_BIRTH', 'text': '2/28/21'},
{'start': 51, 'end': 59, 'tag': 'DRIVER_LICENSE_NUMBER', 'text': '73235766'},
{'start': 144, 'end': 185, 'tag': 'ADDRESS', 'text': '6930 NE Grandview Blvd, keyport, WA 86494'}
],
```
**DATA AFTER tokenize_and_align_labels FUNCTIONS**
```
{'input_ids':
[[0, 305, 8684, 2805, 9342, 10994, 26994, 42560, 39951, 163, 12147, 3935, 6433, 6887, 1916, 204, 417, 13925, 6521, 1922, 4390, 4280, 361,
4210, 3326, 7, 19285, 820, 73, 3933, 73, 844, 2146, 2060, 12806, 321, 5339, 541, 73, 844, 2146, 14010, 387, 132, 73, 2517, 73, 2146, 112,
163, 12147, 3935, 6433, 132, 272, 39243, 495, 290, 5913, 541, 12462, 2374, 5877, 12543, 6, 762, 3427, 6, 9342, 290, 4027, 6405, 13470, 541,
5067, 316, 40950, 2485, 83, 361, 102, 4680, 221, 545, 289, 19377, 195, 32269, 3387, 113, 379, 15516, 274, 504, 26945, 12413, 791, 195, 27932,
6521, 1922, 4390, 36400, 45947, 151, 14651, 163, 3361, 3398, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'attention_mask':
[[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'offset_mapping': [[(0, 0), (0, 1), (1, 10), (11, 14), (15, 17), (18, 20), (20, 24), (25, 28), (28, 32), (33, 34), (34, 37), (37, 39), (39, 41),
(42, 45), (45, 47), (48, 49), (49, 50), (51, 53), (54, 56), (56, 58), (58, 60), (60, 62), (63, 64), (65, 70), (71, 73),
(74, 76), (77, 80), (81, 83), (83, 84), (84, 86), (86, 87), (87, 89), (89, 91), (92, 94), (95, 98), (99, 100), (100, 102),
(102, 104), (104, 105), (105, 107), (107, 109), (110, 112), (112, 113), (114, 115), (115, 116), (116, 118), (118, 119),
(119, 121), (122, 123), (124, 125), (125, 128), (128, 130), (130, 132), (133, 134), (135, 136), (136, 140), (140, 141),
(142, 143), (144, 146), (146, 148), (149, 151), (152, 157), (157, 161), (162, 166), (166, 167), (168, 171), (171, 175),
(175, 176), (177, 179), (180, 181), (181, 183), (183, 185), (186, 188), (188, 190), (190, 192), (193, 195), (196, 204),
(204, 208), (209, 210), (211, 212), (212, 213), (214, 217), (218, 219), (220, 222), (223, 224), (224, 226), (227, 228),
(228, 230), (230, 232), (232, 233), (234, 236), (237, 240), (241, 242), (243, 245), (246, 250), (251, 253), (253, 254),
(255, 256), (257, 259), (260, 262), (262, 264), (264, 266), (266, 269), (269, 277), (277, 280), (281, 287), (288, 289),
(289, 292), (292, 296), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
'labels': [[24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 2, 10, 10, 18, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 3, 11, 11, 11, 11, 19, 24, 24, 1, 9, 9, 9, 17, 24, 24, 24, 24, 24, 24, 4, 12, 20, 24, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
16, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 7, 15, 15, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24],
```
My Code:
```
import transformers
from transformers import AutoTokenizer
from transformers import AutoTokenizer,BertModel,BertTokenizer
from transformers import RobertaModel,RobertaConfig,RobertaForTokenClassification
from transformers import TrainingArguments, Trainer
# from transformers.trainer import get_tpu_sampler
from transformers.trainer_pt_utils import get_tpu_sampler
from transformers.data.data_collator import DataCollator, InputDataClass
from transformers import DataCollatorForTokenClassification
from transformers import AutoModelForTokenClassification
import torch
from torch.nn import CrossEntropyLoss, MSELoss
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data.sampler import RandomSampler
from torchcrf import CRF
import dataclasses
import logging
import warnings
import tqdm
import os
import numpy as np
from typing import List, Union, Dict
os.environ["WANDB_DISABLED"] = "true"
print(transformers.__version__)
import evaluate
metric = evaluate.load("seqeval")
model_checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) #add_prefix_space=True
def isin(a, b):
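# True when the intervals a = (start, end) and b = (start, end) overlap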
return a[1] > b[0] and a[0] < b[1]
def tokenize_and_align_labels(examples, label2id, max_length=256):
tokenized_inputs = tokenizer(examples["texts"], truncation=True, padding='max_length', max_length=max_length,return_offsets_mapping=True)
print("tokenization done")
labels = []
for i, label_idx_for_single_input in enumerate(tqdm.tqdm(examples["tag_names"])):
# print(i,label_idx_for_single_input)
labels_for_single_input = ['O' for _ in range(max_length)]
# print(labels_for_single_input)
text_offsets = tokenized_inputs['offset_mapping'][i]
# print("text_offsets",text_offsets)
for entity in label_idx_for_single_input:
# print("entity",entity)
tag = entity['tag']
# print("tag",tag)
tag_offset = [entity['start'], entity['end']]
# print("tag_offset",tag_offset)
# text_offsets [(0, 0), (0, 1), (1, 10), (11, 14), (15, 17), (18, 20), (20, 24), (25, 28), (28, 32), (33, 34), (34, 37), (37, 39), (39, 41), (42, 45), (45, 47), (48, 49), (49, 50), (51, 53), (54, 56), (56, 58), (58, 60), (60, 62), (63, 64), (65, 70), (71, 73), (74, 76), (77, 80), (81, 83), (83, 84), (84, 86), (86, 87), (87, 89), (89, 91), (92, 94), (95, 98), (99, 100), (100, 102), (102, 104), (104, 105), (105, 107), (107, 109), (110, 112), (112, 113), (114, 115), (115, 116), (116, 118), (118, 119), (119, 121), (122, 123), (124, 125), (125, 128), (128, 130), (130, 132), (133, 134), (135, 136), (136, 140), (140, 141), (142, 143), (144, 146), (146, 148), (149, 151), (152, 157), (157, 161), (162, 166), (166, 167), (168, 171), (171, 175), (175, 176), (177, 179), (180, 181), (181, 183), (183, 185), (186, 188), (188, 190), (190, 192), (193, 195), (196, 204), (204, 208), (209, 210), (211, 212), (212, 213), (214, 217), (218, 219), (220, 222), (223, 224), (224, 226), (227, 228), (228, 230), (230, 232), (232, 233), (234, 236), (237, 240), (241, 242), (243, 245), (246, 250), (251, 253), (253, 254), (255, 256), (257, 259), (260, 262), (262, 264), (264, 266), (266, 269), (269, 277), (277, 280), (281, 287), (288, 289), (289, 292), (292, 296), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
# entity {'start': 281, 'end': 296, 'tag': 'PERSON_NAME', 'text': 'Gerald Bessette'}
# tag PERSON_NAME
# tag_offset [281, 296]
affected_token_ids = [j for j in range(max_length) if isin(tag_offset, text_offsets[j])]
# print("affected_token_ids",affected_token_ids)
if len(affected_token_ids) < 1:
# print('affected_token_ids)<1')
continue
if any(labels_for_single_input[j] != 'O' for j in affected_token_ids):
# print('entity overlap! skipping')
continue
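# BIL scheme: tag every affected token as I_, then relabel the last token as L_ and the first as B_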
for j in affected_token_ids:
labels_for_single_input[j] = 'I_' + tag
labels_for_single_input[affected_token_ids[-1]] = 'L_' + tag
labels_for_single_input[affected_token_ids[0]] = 'B_' + tag
label_ids = [label2id[x] for x in labels_for_single_input]
labels.append(label_ids)
tokenized_inputs["labels"] = labels
# print(tokenized_inputs.keys())
return tokenized_inputs
import json
data = []
with open('data.json', 'r') as f:
for line in f:
data.append(json.loads(line))
l = []
for k, v in data[0].items():
l.append({'text': k, 'spans': v})
train_set = [
[
x['text'],
[{'start': y["start"], 'end': y["end"], 'tag': y["label"], 'text': y["ngram"]} for y in x['spans']]
] for x in l
]
## count labels in dataset
from collections import Counter
e = []
for x in train_set:
for y in x[1]:
e.append(y['tag'])
Counter(e).most_common()
## get label list
ori_label_list = []
for line in train_set:
ori_label_list += [entity['tag'] for entity in line[1]]
ori_label_list = sorted(list(set(ori_label_list)))
label_list = []
for prefix in 'BIL':
label_list += [prefix + '_' + x for x in ori_label_list]
label_list += ['O']
label_list = sorted(list(set(label_list)))
print(label_list)
print(len(label_list))
label2id = {n:i for i,n in enumerate(label_list)}
id2label= {str(i):n for i,n in enumerate(label_list)}
# id2label = {str(i): label for i, label in enumerate(label_names)}
# label2id = {v: k for k, v in id2label.items()}
train_examples ={'texts':[x[0] for x in train_set],'tag_names':[x[1] for x in train_set]}
train_examples = tokenize_and_align_labels(train_examples,label2id)
# train_examples = train_examples.map(tokenize_and_align_labels(label2id),batched=True)
print("here")
print(train_examples.keys())
print(len(train_examples['labels']))
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping', 'labels'])
# 775
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
# collator=data_collator(train_examples)
# def compute_metrics(eval_preds):
# logits, labels = eval_preds
# predictions = np.argmax(logits, axis=-1)
#
# # Remove ignored index (special tokens) and convert to labels
# true_labels = [[label_list[l] for l in label if l != -100] for label in labels]
# true_predictions = [
# [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
# for prediction, label in zip(predictions, labels)
# ]
# all_metrics = metric.compute(predictions=true_predictions, references=true_labels)
# return {
# "precision": all_metrics["overall_precision"],
# "recall": all_metrics["overall_recall"],
# "f1": all_metrics["overall_f1"],
# "accuracy": all_metrics["overall_accuracy"],
# }
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint,id2label=id2label,label2id=label2id,)
print(model.config.num_labels)
args = TrainingArguments(
"bert-finetuned-ner",
# evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
num_train_epochs=2,
weight_decay=0.01,
# push_to_hub=True,
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_examples,
# eval_dataset=train_examples,
data_collator=data_collator,
# compute_metrics=compute_metrics,
tokenizer=tokenizer)
trainer.train()
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18622/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18621
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18621/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18621/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18621/events
|
https://github.com/huggingface/transformers/issues/18621
| 1,338,648,877
|
I_kwDOCUB6oc5Pyikt
| 18,621
|
How to load multiple TXT training files when pre-train RoBERTa from scratch
|
{
"login": "skye95git",
"id": 41561936,
"node_id": "MDQ6VXNlcjQxNTYxOTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/41561936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skye95git",
"html_url": "https://github.com/skye95git",
"followers_url": "https://api.github.com/users/skye95git/followers",
"following_url": "https://api.github.com/users/skye95git/following{/other_user}",
"gists_url": "https://api.github.com/users/skye95git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skye95git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skye95git/subscriptions",
"organizations_url": "https://api.github.com/users/skye95git/orgs",
"repos_url": "https://api.github.com/users/skye95git/repos",
"events_url": "https://api.github.com/users/skye95git/events{/privacy}",
"received_events_url": "https://api.github.com/users/skye95git/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I would recommend using `datasets` in order to store all of your data; you can then pass it either as a local dataset or as a dataset stored on the Hub to that same script.",
"> I would recommend using `datasets` in order to store all of your data; you can then pass it either as a local dataset or as a dataset stored on the Hub to that same script.\r\n\r\nThanks for your reply! How to build a dataset with our data? Is there a tutorial?",
"This part of the datasets documentation should likely help out: https://huggingface.co/docs/datasets/loading#local-and-remote-files",
"> This part of the datasets documentation should likely help out: https://huggingface.co/docs/datasets/loading#local-and-remote-files\r\n\r\nThanks! It works.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
https://github.com/huggingface/transformers/blob/d6eeb871706db0d64ab9ffd79f9545d95286b536/examples/pytorch/language-modeling/run_mlm.py#L308
Hi, I want to pre-train RoBERTa from scratch on my dataset, but in the example script run_mlm.py:
```python
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
extension = data_args.train_file.split(".")[-1]
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.validation_file.split(".")[-1]
if extension == "txt":
extension = "text"
raw_datasets = load_dataset(
extension,
data_files=data_files,
cache_dir=model_args.cache_dir,
use_auth_token=True if model_args.use_auth_token else None,
)
```
The `train_file` argument seems to support only one file:
```python
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
data_files["train"] = data_args.train_file
extension = data_args.train_file.split(".")[-1]
```
Because we have too much training data, it is inconvenient to store it all in a single file. Can `train_file` support multiple text files?
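For reference, a minimal sketch of the workaround suggested in the comments above, loading several local text files with the `datasets` library directly (file names are hypothetical):
```python
from datasets import load_dataset

# data_files accepts a list of files (or a glob pattern), so several
# local TXT files can be combined into a single split.
raw_datasets = load_dataset(
    "text",
    data_files={"train": ["shard-0.txt", "shard-1.txt", "shard-2.txt"]},
)
```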
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18621/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18620
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18620/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18620/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18620/events
|
https://github.com/huggingface/transformers/issues/18620
| 1,338,386,950
|
I_kwDOCUB6oc5PxioG
| 18,620
|
Big Bird cannot be converted to ONNX
|
{
"login": "cigrainger",
"id": 3984794,
"node_id": "MDQ6VXNlcjM5ODQ3OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3984794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cigrainger",
"html_url": "https://github.com/cigrainger",
"followers_url": "https://api.github.com/users/cigrainger/followers",
"following_url": "https://api.github.com/users/cigrainger/following{/other_user}",
"gists_url": "https://api.github.com/users/cigrainger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cigrainger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cigrainger/subscriptions",
"organizations_url": "https://api.github.com/users/cigrainger/orgs",
"repos_url": "https://api.github.com/users/cigrainger/repos",
"events_url": "https://api.github.com/users/cigrainger/events{/privacy}",
"received_events_url": "https://api.github.com/users/cigrainger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @vumichien, have you encountered this when contributing the BigBird ONNX exporter?",
"@LysandreJik I didn't encounter this problem when contributing the BigBird ONNX exporter. I do check again by running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k \"bigbird\"` , and all the tests are still passed. ",
"Maybe we could put a higher tolerance in that case. WDYT @lewtun ?",
"Hey @cigrainger I'm not able to reproduce this behaviour using either CPU or GPU. Could you please try running this again with the latest state of the `main` branch in `transformers` and report back if the issue persists?\r\n\r\nIf yes, we can certainly increase the tolerance as Lysandre suggested :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,665
| 1,665
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.17.5-x86_64-with-glibc2.33
- Python version: 3.9.6
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
python -m transformers.onnx --model=google/bigbird-roberta-base bigbird
```
Returns:
```bash
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 6.103515625e-05
```
### Expected behavior
I expect the export to work and return: `All good, model saved at: bigbird/model.onnx`.
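In the spirit of the tolerance suggestion in the comments above, the exporter CLI exposes an `--atol` flag; the value below is an assumption, not a recommendation:
```bash
python -m transformers.onnx --model=google/bigbird-roberta-base --atol=1e-4 bigbird
```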
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18620/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18619
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18619/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18619/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18619/events
|
https://github.com/huggingface/transformers/issues/18619
| 1,338,329,928
|
I_kwDOCUB6oc5PxUtI
| 18,619
|
Bug in DonutFeatureExtractor
|
{
"login": "gagan3012",
"id": 49101362,
"node_id": "MDQ6VXNlcjQ5MTAxMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gagan3012",
"html_url": "https://github.com/gagan3012",
"followers_url": "https://api.github.com/users/gagan3012/followers",
"following_url": "https://api.github.com/users/gagan3012/following{/other_user}",
"gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions",
"organizations_url": "https://api.github.com/users/gagan3012/orgs",
"repos_url": "https://api.github.com/users/gagan3012/repos",
"events_url": "https://api.github.com/users/gagan3012/events{/privacy}",
"received_events_url": "https://api.github.com/users/gagan3012/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThe [docstring](https://huggingface.co/docs/transformers/main/en/model_doc/donut#transformers.DonutFeatureExtractor.size) says that the size argument should be a tuple of (width, height). "
] | 1,660
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.6.4 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (gpu)
- Jax version: 0.3.14
- JaxLib version: 0.3.14
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = AutoTokenizer.from_pretrained(token)
feature_extractor = DonutFeatureExtractor.from_pretrained(encoder)
processor = DonutProcessor(feature_extractor,tokenizer)
class OCRDataset(Dataset):
def __init__(self, root_dir, df, processor, max_target_length=256):
self.root_dir = root_dir
self.df = df
self.processor = processor
self.max_target_length = max_target_length
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
# get file name + text
#file_name = self.df['file_name'][idx]
text = self.df['text'][idx]
# prepare image (i.e. resize + normalize)
image = self.df['image'][idx].convert("RGB")
print(type(image))
w, h = image.size
#image = Image.open(self.root_dir + file_name).convert("RGB")
pixel_values = self.processor([image], return_tensors="pt").pixel_values
# add labels (input_ids) by encoding the text
labels = self.processor.tokenizer(text,
padding="max_length",
max_length=self.max_target_length).input_ids
# important: make sure that PAD tokens are ignored by the loss function
labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels]
labels = torch.tensor(labels)
encoding = {"pixel_values": pixel_values.squeeze(), "labels": labels}
return encoding
train_dataset = OCRDataset(root_dir='',
df=dataset_train,
processor=processor)
encoding = train_dataset[1]
```
Error:
```
/opt/conda/lib/python3.7/site-packages/transformers/models/trocr/processing_trocr.py in __call__(self, *args, **kwargs)
65
66 if images is not None:
---> 67 inputs = self.feature_extractor(images, *args, **kwargs)
68 if text is not None:
69 encodings = self.tokenizer(text, **kwargs)
/opt/conda/lib/python3.7/site-packages/transformers/models/donut/feature_extraction_donut.py in __call__(self, images, return_tensors, random_padding, **kwargs)
193 images = [
194 self.resize(image=image, size=min(self.size), resample=self.resample, default_to_square=False)
--> 195 for image in images
196 ]
197 if self.do_thumbnail and self.size is not None:
/opt/conda/lib/python3.7/site-packages/transformers/models/donut/feature_extraction_donut.py in <listcomp>(.0)
193 images = [
194 self.resize(image=image, size=min(self.size), resample=self.resample, default_to_square=False)
--> 195 for image in images
196 ]
197 if self.do_thumbnail and self.size is not None:
TypeError: 'int' object is not iterable
```
### Expected behavior
It should work
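Following the docstring pointed out in the comments above, a hedged sketch of passing `size` as a `(width, height)` tuple explicitly (the checkpoint name and values below are placeholders):
```python
from transformers import DonutFeatureExtractor

# size passed explicitly as a (width, height) tuple, overriding any scalar
# value the checkpoint config may carry; name and values are placeholders.
feature_extractor = DonutFeatureExtractor.from_pretrained(
    "naver-clova-ix/donut-base", size=(1920, 2560)
)
```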
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18619/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18618
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18618/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18618/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18618/events
|
https://github.com/huggingface/transformers/pull/18618
| 1,338,321,289
|
PR_kwDOCUB6oc49JXEu
| 18,618
|
Add depth estimation pipeline
|
{
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @NielsRogge @Narsil I tried debugging to resolve above issue, I found that in `src/transformers/models/auto/auto_factory.py` in line 462 `model_class = _get_model_class(config, cls._model_mapping)` when I logged `cls._model_mapping` I recieved\r\n```\r\nOrderedDict([(<class 'transformers...PTConfig'>, <?>), (<class 'transformers...PNConfig'>, <?>)])\r\n<error>:\r\nTraceback (most recent call last):\r\n```\r\nCan you guide me to resolve this error.",
"> Great PR, you also need to add\r\n> \r\n> ```python\r\n> + def _sanitize_parameters(self, **kwargs):\r\n> + return {}, {}, {}\r\n> +\r\n> ```\r\n> \r\n> To the pipeline, it's not used but the base class expects this method to exist. (Here there's not parameters declared so it's quite easy.\r\n> \r\n> This weird method exist to allow parameters to be defined both at definition time or call tiime\r\n> \r\n> ```\r\n> pipe = pipeline(model=model, myargs=1)\r\n> data = pipe(image)\r\n> # or\r\n> pipe = pipeline(model=model)\r\n> data = pipe(image, myargs=1)\r\n> ```\r\n> \r\n> Cheers ! Otherwise LGTM.\r\n> \r\n> Why do you output 3 different images ? That sounds like a lot.\r\n> \r\n> The image I can understand, the predicted depth is defined in what unit ? Is it noisy hence the interpolation ? IMO that seems like something to be left to the user to decide what to do.\r\n> \r\n> In general I think a pure image would be nice (to be a bit more general) but I can understand that the loss of precision might be harmful, do you mind sharing how you use those numbers ? Maybe we could output an other time of image that doesn't loose information (keeping f32 pixel)\r\n> \r\n> Wdyt ?\r\n\r\nI just saw `DPT`'s depth estimation example and added these three outputs. I have removed the interpolation one and In the output I have kept only the `predicted_depth` (which is a `tensor`) and `depth` (which is the PIL `Image` object).\r\nLet me know if I should remove the `predicted_depth` also.\r\n",
"> ### Review required\r\n> At least 1 approving review is required by reviewers with write access. [Learn more.](https://docs.github.com/articles/about-pull-request-reviews/)\r\n> ** 1 pending reviewer **\r\n\r\nBy generic test did you mean `run_pipeline_test` ? Let me know if it's other than this, I have added `run_pipeline_test` for now.\r\nAlso In CI `test_pipelines_depth_estimation` is failing can you help me with that ? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @nandwalritik, could you revive this PR by rebasing with the main branch? ",
"> Hi @nandwalritik, could you revive this PR by rebasing with the main branch?\r\n\r\nDone.",
"@nandwalritik Do you want to help to get the PR green ?",
"> @nandwalritik Do you want to help to get the PR green ?\r\n\r\n@Narsil Yeah please , I tried but I was not able to make the test cases pass.",
"> Added some comments on how I fixed the CI for you.\r\n\r\nThanks I will look at them.",
"@sgugger for final review.",
"@sgugger the test failure seem unrelated to the PR, should we go ahead and merge ?",
"Yes, those are flaky tests."
] | 1,660
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
* I tried debugging
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18446
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Error While using the pipeline
```
pipe = pipeline("depth-estimation")
No model was supplied, defaulted to Intel/dpt-large and revision e93beec (https://huggingface.co/Intel/dpt-large).
Using a pipeline without specifying a model name and revision in production is not recommended.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/pipelines/__init__.py", line 670, in pipeline
framework, model = infer_framework_load_model(
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/pipelines/base.py", line 257, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 445, in from_pretrained
model_class = _get_model_class(config, cls._model_mapping)
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 359, in _get_model_class
supported_models = model_mapping[type(config)]
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 565, in __getitem__
return self._load_attr_from_module(model_type, model_name)
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 579, in _load_attr_from_module
return getattribute_from_module(self._modules[module_name], attr)
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
return getattribute_from_module(transformers_module, attr)
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
return getattribute_from_module(transformers_module, attr)
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 539, in getattribute_from_module
return getattribute_from_module(transformers_module, attr)
[Previous line repeated 982 more times]
File "/home/nandwalritik/nandwalritik/transformers/src/transformers/models/auto/auto_factory.py", line 538, in getattribute_from_module
transformers_module = importlib.import_module("transformers")
File "/home/nandwalritik/anaconda3/envs/hftfSwinDev/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load
File "<frozen importlib._bootstrap>", line 157, in __enter__
File "<frozen importlib._bootstrap>", line 183, in _get_module_lock
File "<frozen importlib._bootstrap>", line 59, in __init__
RecursionError: maximum recursion depth exceeded while calling a Python object
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18618/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18618",
"html_url": "https://github.com/huggingface/transformers/pull/18618",
"diff_url": "https://github.com/huggingface/transformers/pull/18618.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18618.patch",
"merged_at": 1665579261000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18617
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18617/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18617/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18617/events
|
https://github.com/huggingface/transformers/issues/18617
| 1,338,317,250
|
I_kwDOCUB6oc5PxRnC
| 18,617
|
Post-processing for HumanEval code generations not working properly
|
{
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Update: the case I mentioned would never occur with the stoppingcriteria in codeparrot as all generation should include an eof_string so no bug https://github.com/huggingface/transformers/blob/d6eeb871706db0d64ab9ffd79f9545d95286b536/examples/research_projects/codeparrot/scripts/human_eval.py#L59"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### System Info
Not system-dependent
### Who can help?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The post-processing function for HumanEval code generation in CodeParrot doesn't work as expected; specifically, this function:
https://github.com/huggingface/transformers/blob/d6eeb871706db0d64ab9ffd79f9545d95286b536/examples/research_projects/codeparrot/scripts/human_eval.py#L67
It returns an empty string when none of the `EOF_STRINGS` is present:
```python
EOF_STRINGS = ["\nprint"]
def remove_last_block(string):
"""Remove the last block of the code containing EOF_STRINGS"""
string_list = re.split("(%s)" % "|".join(EOF_STRINGS), string)
# last string should be ""
return "".join(string_list[:-2])
example = "def somme(x,y)\n return x+y"
print(f"example :\n{remove_last_block(example)}")
```
```
example :
```
Going back to an old version of the repo, I found we had this function instead, which works properly, so I'm wondering why we changed it. It would also be more practical in case no stopping criteria at `EOF_STRINGS` were used during generation, since it only keeps the first block:
```python
def first_block(string):
"""Split off first block of code by scanning for class, def etc. on newlines."""
return re.split("|".join(EOF_STRINGS), string)[0].rstrip()
print(f"example :\n{first_block(example)}")
```
```
example :
def somme(x,y)
return x+y
```
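A minimal sketch of a post-processing helper that degrades gracefully when no EOF string is present (the fallback behaviour is an assumption, not the current codeparrot implementation):
```python
import re

EOF_STRINGS = ["\nprint"]  # abbreviated, as in the example above

def remove_last_block(string):
    """Remove the last block after an EOF string; if none is matched,
    return the input unchanged instead of an empty string."""
    string_list = re.split("(%s)" % "|".join(EOF_STRINGS), string)
    if len(string_list) < 3:  # no EOF string matched
        return string
    return "".join(string_list[:-2])

example = "def somme(x,y)\n    return x+y"
print(remove_last_block(example))  # returned unchanged instead of ""
```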
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18617/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18616
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18616/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18616/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18616/events
|
https://github.com/huggingface/transformers/issues/18616
| 1,338,052,780
|
I_kwDOCUB6oc5PwRCs
| 18,616
|
BartForConditionalGeneration is erroneous either at .forward or at .generate
|
{
"login": "Aktsvigun",
"id": 36672861,
"node_id": "MDQ6VXNlcjM2NjcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aktsvigun",
"html_url": "https://github.com/Aktsvigun",
"followers_url": "https://api.github.com/users/Aktsvigun/followers",
"following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}",
"gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions",
"organizations_url": "https://api.github.com/users/Aktsvigun/orgs",
"repos_url": "https://api.github.com/users/Aktsvigun/repos",
"events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aktsvigun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @Aktsvigun π You are absolutely right, our documentation is not clear at the moment if you want to get the scores with beam search. In essence, the scores for a given index of `generate_output.sequences_scores` do not match the sequence with that index, because the sequence's internal index during beam search gets shuffled around (due to the beam search algorithm structure) :)\r\n\r\nWe do have a method to reverse this shuffling, but it is not yet documented:\r\nπ [`compute_transition_beam_scores`](https://github.com/huggingface/transformers/blob/e0b825a8d03f50ed9dbf9fbbbb3b4fcf0b4e4b22/src/transformers/generation_utils.py#L876)\r\nπ [Beam search output docs](https://huggingface.co/docs/transformers/v4.21.1/en/internal/generation_utils#transformers.generation_utils.BeamSearchDecoderOnlyOutput), to understand the inputs to this function\r\n\r\nGive it a go, and let us know if it worked as you expected. We will be updating the docs soon, suggestions are appreciated!\r\n\r\n(keeping this issue open to track the documentation updates for the sequence scores)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@patil-suraj @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
text = """
Phillip, Could you please do me a favor?\nI would like to read your current title policy to see what \
it says about easements.\nYou should have received a copy during your closing.\nI don't know how many \
pages it will be but let me know how you want to handle getting a copy made.\nI'll be happy to make the copy,\
or whatever makes it easy for you.\nThanks,\n
"""
checkpoint = "Aktsvigun/bart-base_aeslc_42"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).cuda()
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
input_ids = tokenizer(text, truncation=True, return_tensors="pt")["input_ids"].to(model.device)
generate_output = model.generate(
input_ids, num_return_sequences=4, length_penalty=1., return_dict_in_generate=True, output_scores=True, early_stopping=True
)
# Most probable labels according to the generate output. Taking from first since do not need initial generation token.
labels = generate_output.sequences[0][generate_output.sequences[0] != 1][None, 1:]
out = model(input_ids, labels=labels)
probas = torch.nn.functional.softmax(out.logits, dim=-1)
sequence_score = probas[0].log().gather(index=labels[0][:, None], dim=-1).sum() / len(labels[0])
assert torch.allclose(-sequence_score, out.loss)
assert torch.allclose(sequence_score, generate_output.sequences_scores[0])
```
### Expected behavior
The last assert should pass, yet the results differ (-0.8670 for the reconstructed score vs. -0.8581 from the generate output). What happens in the code: I first generate the sequence with BART, and then I try to reproduce the score by calling `.forward` (reconstructing the score as the average of log-probabilities of the label ids taken at each decoder iteration).
Why this matters: this is a "sub-bug" I found while verifying another bug. I wrote a function to restore the sequences and sequence scores from `transformers.generation_utils.BeamSearchEncoderDecoderOutput.scores` and got slightly different results from the ones output by `transformers.generation_utils.BeamSearchEncoderDecoderOutput`. Namely, I restore some sequences with scores higher than `transformers.generation_utils.BeamSearchEncoderDecoderOutput.sequences_scores`. I need to check which version (default / mine) is correct, hence I need to pass the sequence through `.forward` and calculate its "intrinsic" score. However, as this example shows, either `.forward` or `.generate` returns slightly erroneous results.
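Following up on the pointer to `compute_transition_beam_scores` in the comments above, a hedged sketch of un-shuffling the beam scores, continuing the reproduction code (the exact signature and normalization are assumptions based on the linked source, and `beam_indices` is only returned on recent versions):
```python
# generate_output comes from the reproduction above
# (return_dict_in_generate=True, output_scores=True).
transition_scores = model.compute_transition_beam_scores(
    sequences=generate_output.sequences,
    scores=generate_output.scores,
    beam_indices=generate_output.beam_indices,
)
# Summing the per-step scores and normalizing by the generated length
# should approximate sequences_scores when length_penalty == 1.0.
lengths = (transition_scores != 0).sum(dim=-1)
print(transition_scores.sum(dim=-1) / lengths)
print(generate_output.sequences_scores)
```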
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18616/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18615
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18615/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18615/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18615/events
|
https://github.com/huggingface/transformers/pull/18615
| 1,337,999,201
|
PR_kwDOCUB6oc49Ib6H
| 18,615
|
Determine framework automatically before ONNX export
|
{
"login": "rachthree",
"id": 46288912,
"node_id": "MDQ6VXNlcjQ2Mjg4OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/46288912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rachthree",
"html_url": "https://github.com/rachthree",
"followers_url": "https://api.github.com/users/rachthree/followers",
"following_url": "https://api.github.com/users/rachthree/following{/other_user}",
"gists_url": "https://api.github.com/users/rachthree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rachthree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rachthree/subscriptions",
"organizations_url": "https://api.github.com/users/rachthree/orgs",
"repos_url": "https://api.github.com/users/rachthree/repos",
"events_url": "https://api.github.com/users/rachthree/events{/privacy}",
"received_events_url": "https://api.github.com/users/rachthree/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thank you so much for greatly improving the framework selection in the ONNX exporter @rachthree (also, welcome as a first time contributor π₯³)!\r\n> \r\n> Overall, the logic looks great to me and I'd really like to see a unit test of the `determine_framework` function. This would give us some confidence that any future changes on the framework selection side won't accidentally break the desired behaviour.\r\n> \r\n> Regarding the failing unit tests, these will be fixed by:\r\n> \r\n> * #18587\r\n> * #18336\r\n> \r\n> so we can rebase your branch on `main` once they're approved / merged (should be soon)\r\n\r\nThank you for the review and welcoming me! I'm excited to contribute, especially since this is my first PR in the open source community :) Glad to see the 2 PRs will fix those unit tests. "
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
Determines whether to use `torch` or `tf2onnx` as the ONNX exporter automatically with the following priority:
1. User input via `framework` / `--framework`.
2. If local checkpoint is provided, use the same framework as the checkpoint.
3. Available framework in environment, with priority given to PyTorch.
Fixes issue https://github.com/huggingface/transformers/issues/18495, where a PyTorch load was still attempted for a local TF checkpoint even though PyTorch was not installed in the environment. This also avoids requiring users to pass `--framework=tf` when using the ONNX export driver script.
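A rough sketch of the priority logic described above (function body and weight-file names are illustrative, not the exact implementation in this PR):
```python
import os
from typing import Optional


def determine_framework(model: str, framework: Optional[str] = None) -> str:
    """Illustrative sketch only -- not the exact code in this PR."""
    if framework is not None:  # 1. explicit user input wins
        return framework
    if os.path.isdir(model):  # 2. local checkpoint: match its weight files
        if os.path.isfile(os.path.join(model, "pytorch_model.bin")):
            return "pt"
        if os.path.isfile(os.path.join(model, "tf_model.h5")):
            return "tf"
        raise FileNotFoundError(f"No PyTorch or TensorFlow weights found in {model}")
    try:  # 3. fall back to whatever is installed, PyTorch first
        import torch  # noqa: F401

        return "pt"
    except ImportError:
        return "tf"
```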
Misc:
* Adds `tf` to pip install for `run_tests_onnxruntime` and `run_tests_onnxruntime_all` in CI.
## Tests
* `python -m transformers.onnx` driver with and without `--framework` on local checkpoints and hub. Tested in containerized environments that had only PyTorch, only TensorFlow, or both.
* Successful.
* Unit tests: ran `RUN_SLOW=true pytest tests/onnx`
~~* Overall, tests **passed w.r.t `main`** since they share the same failing tests:~~
<strike>
```
FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError: 'module' object is not callable
FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_tf - TypeError: 'module' object is not callable
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_048_data2vec_vision_image_segmentation - ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation.
FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_048_data2vec_vision_image_segmentation - ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation.
```
</strike>
~~* Wrote up https://github.com/huggingface/transformers/issues/18614 for the `TypeError: 'module' object is not callable` errors.~~ **Fixed by https://github.com/huggingface/transformers/pull/18336**
~~* As for the `AutoModel` error, https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py#L363 says not to add new models, so is this failure acceptable?~~ **Fixed by https://github.com/huggingface/transformers/pull/18587**
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik and others who may be interested :)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18615/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18615/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18615",
"html_url": "https://github.com/huggingface/transformers/pull/18615",
"diff_url": "https://github.com/huggingface/transformers/pull/18615.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18615.patch",
"merged_at": 1661437894000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18614
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18614/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18614/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18614/events
|
https://github.com/huggingface/transformers/issues/18614
| 1,337,998,500
|
I_kwDOCUB6oc5PwDyk
| 18,614
|
`transformers.convert_graph_to_onnx.quantize` fails in unit tests
|
{
"login": "rachthree",
"id": 46288912,
"node_id": "MDQ6VXNlcjQ2Mjg4OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/46288912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rachthree",
"html_url": "https://github.com/rachthree",
"followers_url": "https://api.github.com/users/rachthree/followers",
"following_url": "https://api.github.com/users/rachthree/following{/other_user}",
"gists_url": "https://api.github.com/users/rachthree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rachthree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rachthree/subscriptions",
"organizations_url": "https://api.github.com/users/rachthree/orgs",
"repos_url": "https://api.github.com/users/rachthree/repos",
"events_url": "https://api.github.com/users/rachthree/events{/privacy}",
"received_events_url": "https://api.github.com/users/rachthree/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @lewtun ",
"I have fixed this in https://github.com/huggingface/transformers/pull/18336, but still waiting for a review.",
"Looks like this has been fixed! Closing this."
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.10.60.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
* Used `tensorflow/tensorflow:latest` Docker image for this environment, then used `pip install -e '.[dev,onnx]'`
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Other:
- `onnxruntime` version: 1.12.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run `RUN_SLOW=true pytest tests/onnx/test_onnx.py`
Get failures:
```
FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError: 'module' object is not callable
FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_tf - TypeError: 'module' object is not callable
```
### Expected behavior
The unit tests should pass.
I believe this failure is due to `onnxruntime.quantization.quantize`, which is now a module containing the functions `quantize_static` and `quantize_dynamic`; the API may have changed since the unit test was written. I'm not sure which of the two the unit tests should use. Even after fixing this, it is unclear how `transformers` should handle different versions of `onnxruntime`, or whether the required version should change in `setup.py`.
See https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/quantize.py
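For reference, a sketch of the replacement call in current `onnxruntime` (paths below are placeholders; whether the tests should use dynamic or static quantization is still the open question):
```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# quantize_dynamic is one of the two functions now exposed by the
# onnxruntime.quantization.quantize module; paths are placeholders.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model-quantized.onnx",
    weight_type=QuantType.QInt8,
)
```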
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18614/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18613
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18613/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18613/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18613/events
|
https://github.com/huggingface/transformers/pull/18613
| 1,337,966,932
|
PR_kwDOCUB6oc49IV7H
| 18,613
|
feed `input_embeds` into `FlaxT5ForConditionalGeneration`
|
{
"login": "BigRedT",
"id": 5041894,
"node_id": "MDQ6VXNlcjUwNDE4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5041894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigRedT",
"html_url": "https://github.com/BigRedT",
"followers_url": "https://api.github.com/users/BigRedT/followers",
"following_url": "https://api.github.com/users/BigRedT/following{/other_user}",
"gists_url": "https://api.github.com/users/BigRedT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigRedT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigRedT/subscriptions",
"organizations_url": "https://api.github.com/users/BigRedT/orgs",
"repos_url": "https://api.github.com/users/BigRedT/repos",
"events_url": "https://api.github.com/users/BigRedT/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigRedT/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18613). All of your documentation changes will be reflected on that endpoint.",
"Hey @BigRedT! Awesome that you've managed to get it working with so little code! And great to see that the model outputs are the same for `input_ids` and `input_embeds`. Before we can merge this we'll need to add some tests to make sure the functionality is as expected (which is should hopefully be given the toy example passes!). At the very least, we should add one test that mirrors the PyTorch test:\r\nhttps://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/tests/test_modeling_common.py#L2094\r\nAnd one that verifies that the output logits for `input_ids` and `input_embeds` match. Do you want to have a go at this?",
"If you have any questions/issues, feel free to reach out to @patrickvonplaten or @patil-suraj. They will be more than happy to lend you a hand and provide a review on this PR! Otherwise I can take a look in a little over a weeks time! Thanks @BigRedT!",
"@sanchit-gandhi thanks for helping with this! Will take a look in the coming week. ",
"Added `test_input_embeds()` to `FlaxT5ModelTest` in `test_modeling_flax_t5.py` as requested by @sanchit-gandhi \r\n\r\nThis test currently checks to see if the generated sequences from `input_ids` match those from `input_embeds` (obtained by feeding `input_ids` through the `shared` embedding layer in the model)\r\n\r\n@sanchit-gandhi wanted to see if the logits match too. @patrickvonplaten @patil-suraj what's the easiest way to compute logits from `generate()`. Also, could one of you review this PR. Thanks!\r\n\r\n",
"@sanchit-gandhi any updates?",
"Hey @BigRedT, let me know if you want any further clarification for the comments. Happy to answer any questions! This PR looks pretty close to completion :)",
"Hey @BigRedT - do you want to see this PR to completion? We're pretty close now! π€",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,668
| 1,668
|
NONE
| null |
This is a PR requested by @sanchit-gandhi in [https://github.com/huggingface/transformers/issues/18036#issuecomment-1214131955](https://github.com/huggingface/transformers/issues/18036#issuecomment-1214131955)
To summarize the issue - Flax encoder-decoder models are currently missing the `input_embeds` argument, unlike the non-Flax models. In this PR, I have added this argument to the `FlaxT5ForConditionalGeneration` model and shown how it may be used for feeding features from other modalities, such as vision, into a language model such as `T5`.
Please run [examples/flax/vision-language/t5_for_vl.py](examples/flax/vision-language/t5_for_vl.py) to test this feature.
Here's the output you should see:
```
--------------------------------------------------------------------------------
Model Input -> summarize: The US has "passed the peak" on new coronavirus cases, President Donald Trump said and predicted that some states would reopen this month. The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world. At the daily White House coronavirus briefing on Wednesday, Trump said new guidelines to reopen the country would be announced on Thursday after he speaks to governors.
--------------------------------------------------------------------------------
Summary from input_ids -> the country has over 637,000 confirmed cases and more than 30,826 deaths . the latest cases could be announced monday after speaking to governors .
--------------------------------------------------------------------------------
Summary from input_embeds -> the country has over 637,000 confirmed cases and more than 30,826 deaths . the latest cases could be announced monday after speaking to governors .
--------------------------------------------------------------------------------
Summary after concatenating random visual embeddings -> the country has over 637,000 confirmed cases and more than 30,826 deaths . the latest cases could be announced monday after a tuesday vote .
```
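For reference, a rough sketch of the intended usage under this PR (the `input_embeds` argument only exists with this change applied; the embedding lookup mirrors feeding `input_ids` through the model's `shared` embedding layer):
```python
import jax.numpy as jnp
from transformers import FlaxT5ForConditionalGeneration, T5Tokenizer

model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

input_ids = tokenizer("summarize: The US has passed the peak ...", return_tensors="np").input_ids

# Look up token embeddings in the shared embedding table, then pass them
# in place of the ids (argument name as proposed in this PR).
embedding_table = model.params["shared"]["embedding"]
input_embeds = jnp.take(embedding_table, input_ids, axis=0)

summary_ids = model.generate(input_embeds=input_embeds).sequences
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```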
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18613/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18613/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18613",
"html_url": "https://github.com/huggingface/transformers/pull/18613",
"diff_url": "https://github.com/huggingface/transformers/pull/18613.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18613.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18612
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18612/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18612/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18612/events
|
https://github.com/huggingface/transformers/issues/18612
| 1,337,928,459
|
I_kwDOCUB6oc5PvysL
| 18,612
|
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC
|
{
"login": "YingLi001",
"id": 75192317,
"node_id": "MDQ6VXNlcjc1MTkyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YingLi001",
"html_url": "https://github.com/YingLi001",
"followers_url": "https://api.github.com/users/YingLi001/followers",
"following_url": "https://api.github.com/users/YingLi001/following{/other_user}",
"gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions",
"organizations_url": "https://api.github.com/users/YingLi001/orgs",
"repos_url": "https://api.github.com/users/YingLi001/repos",
"events_url": "https://api.github.com/users/YingLi001/events{/privacy}",
"received_events_url": "https://api.github.com/users/YingLi001/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @YingLi001! Great question, and awesome that you found help using materials on the HuggingFace Hub and GitHub!\r\n\r\nI'll first provide some context regarding Wav2Vec2 models, their tokenisers and how they affects the model weights. This information should help in answering your questions!\r\n\r\nThe pre-trained Wav2Vec2 model maps a sequence of audio inputs to a sequence of hidden-state representations. In order to decode text form this, we need to map the hidden-state representations to a vector over our vocabulary. To do this, we add a linear layer on top of the pre-trained Wav2Vec2 model. This linear layer performs a linear transformation of our hidden-states. It maps them from a dimensionality of 1024 to a dimensionality equal to our vocabulary size. In the case of TIMIT, where we have a vocabulary size of 51, we map the hidden-state representations from 1024-d down to 51-d. To decode a single character from this, we'd simply take the argmax of the 51-d vector, and look-up the corresponding token from our tokenizer! So if we had a model that predicted the following 51-d vector:\r\n```\r\n[ 0.01 ]\r\n[ 0.02 ]\r\n[ 0.90 ]\r\n...\r\n[ 0.01 ]\r\n```\r\nThe argmax would be the third token. The character that we'd predict would be the token at position 3 in the tokenizer. If we want to decode a string of character, we have to do something a bit more fancy than just taking the argmax (i.e. connectionist temporal classification (CTC)), but the linear transformation remains the same!\r\n\r\nThe linear layer that I've eluded to is called the \"language model head\", or `lm_head` for short. What's special about the LM head is that it has a dimensionality specific to the vocabulary that we train the model on. If we have a vocabulary of 51 tokens, we'll have an LM-head weight matrix of size [51, 1024] (map the 1024-d hidden-states to the 51-d output vector). If we have 64 tokens, such as in the Torgo dataset, we'll have an LM-head weight matrix of size [64, 1024].\r\n\r\nWhenever you give a model a different vocabulary size, the LM-head is going to have to be reset to a new size. Because of this, the LM-head weights are going to be randomly initialised, and thus require fine-tuning if we want our model to generate sensible predictions. Since each dataset typically has a different vocabulary, we usually build a new tokeniser for each dataset, and fine-tune the Wav2Vec2 model accordingly.\r\n\r\n1. If you match the vocabularies sizes one-to-one, it is possible to load the LM-head weights. However, this does not guarantee that your model will predict characters accurately. Suppose in TIMIT you built the following tokeniser:\r\n\r\n```\r\n\"a\": 1\r\n\"b\": 2\r\n\"c\": 3\r\n...\r\n\"z\": 26\r\n```\r\n\r\nAnd for Torgo you built a tokenizer of the same dimensionality, but with a re-ordered vocabulary:\r\n\r\n```\r\n\"z\": 1\r\n\"y\": 2\r\n\"x\": 3\r\n...\r\n\"a\": 26\r\n```\r\n\r\nIf we now use our LM-head weights to make predictions, we might get the following vector:\r\n```\r\n[ 0.01 ]\r\n[ 0.02 ]\r\n[ 0.90 ]\r\n...\r\n[ 0.01 ]\r\n```\r\nTaking the argmax, we get the third token in our vocabulary. So for TIMIT, we'd output a \"c\". For Torgo, we'd output a \"x\". Very different! Because the vocabulary is shuffled, we've effectively re-initialised our LM head. If we want to load the LM-head and evaluate the model **without any further fine-tuning**, we would need to match the tokenisers **exactly** in vocabulary size and positions. 
This means that for the Torgo datasets, you would load the tokeniser that you built when you fine-tuned on TIMIT. However, if we permit fine-tuning on the Torgo dataset, we can do something a bit different. See point 3.\r\n\r\n2. As mentioned, you need to align both the vocabulary size and the tokeniser exactly.\r\n\r\n3. If the Torgo dataset is similar to TIMIT, it's more than valid to load the encoder in isolation from the TIMIT checkpoint and then train the model on Torgo. You could then build a tokenizer specifically for the Torgo dataset, but leverage the majority of the weights from the TIMIT checkpoint. You can do this with the `from_pretrained()` method specifying the checkpoint location, setting the `config.vocab_size` to the correct value, and `ignore_mismatched_sizes` to `True` (ignores the `RuntimeError` you got previously):\r\n```python\r\nconfig.vocab_size = VOCAB_SIZE\r\nmodel = Wav2Vec2ForCTC.from_pretrained(CKPT_LOCATION, config=config, ignore_mismatched_sizes=True)\r\n```\r\nThis will randomly initialise the LM-head weights. We can then go ahead and train the model on the Torgo dataset with our new purpose-built tokeniser to learn suitable weights.\r\n\r\nNote that in total, we have approximately $51*1024 + 51 \\approx 53 \\times 10^{3}$ weights for the linear layer. Overall, the model has nearly 400M params. That means that we're only randomly initialising the last 0.01% of the model weights! The remaining 99.99% are loaded from the pre-trained checkpoint. This means that we need relatively little data to fine-tune a model when we randomly initialise the LM-head but load the rest of the weights from pre-trained.\r\n\r\nWhat checkpoint you use to load your model before training on Torgo is at your discretion. If the datasets are similar, you could use the TIMIT checkpoint. Otherwise, you can fine-tune from scratch using the official pre-trained `facebook/wav2vec2-large-xlsr-53` checkpoint. In both cases, you'll likely have to build a new tokeniser to match the vocabulary of Torgo and randomly initialise the linear LM-head accordingly.\r\n\r\nHope that helps and best of luck with your task!",
"Hi @sanchit-gandhi \r\n\r\nThank you sooo much for your quickly reply and providing the detailed and constructive feedback!\r\n\r\nI followed your suggestions(point 3), and successfully load the fine-tuned checkpoint. However, I still encountered some problems. Could you provide some help? Thank you in advance. The code of load the fine-tuned checkpoint shows as follows:\r\n\r\n` config = Wav2Vec2Config.from_json_file(\r\n \"/content/drive/MyDrive/Thesispackage/wav2vec2-base-timit-demo-phones/checkpoint-11500/config.json\")\r\n\r\n config.vocab_size = VOCAB_SIZE\r\n print(\"vocab_size\", config.vocab_size)\r\n \r\n model = Wav2Vec2ForCTC.from_pretrained(\r\n \"/content/drive/MyDrive/Thesispackage/wav2vec2-base-timit-demo-phones/checkpoint-11500\", \r\n config=config, \r\n ignore_mismatched_sizes=True,\r\n # ctc_loss_reduction=\"mean\",\r\n # pad_token_id=processor.tokenizer.pad_token_id,\r\n # vocab_size=len(processor.tokenizer),\r\n )\r\n model.gradient_checkpointing_enable()\r\n model.pad_token_id=processor.tokenizer.pad_token_id\r\n model.ctc_loss_reduction=\"mean\"\r\n model.vocab_size=len(processor.tokenizer)`\r\n\r\n\r\n\r\nQ1: When I load the fine-tuned checkpoint from local path using the `Wav2Vec2ForCTC.from_pretrained()`, I cannot add the `ctc_loss_reduction`, `pad_token_id`, `vocab_size` attributes inside this method. It will give me an error **TypeError: __init__() got an unexpected keyword argument 'vocab_size' site**. Therefore, I add them after loading the model. Am I right? \r\n\r\nQ2: Because the `vocab_size` between Torgo and Timit is still different, **[** Actually, **Torgo** dataset has **51** phonemes. **Timit** dataset has **64** phonemes. The 51 phonemes of Torgo dataset are **included** in the 64 phonemes of Timit dataset. **]** when I fine-tuned the fine-tuned checkpoint of Timit dataset on Torgo dataset, I got the following error. \r\n\r\n`[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in ctc_loss(log_probs, targets, input_lengths, target_lengths, blank, reduction, zero_infinity)\r\n 2615 )\r\n 2616 return torch.ctc_loss(\r\n-> 2617 log_probs, targets, input_lengths, target_lengths, blank, _Reduction.get_enum(reduction), zero_infinity\r\n 2618 )\r\n 2619 \r\n\r\nRuntimeError: blank must be in label range`\r\n\r\nAfter looking for some materials on HuggingFace, I found this [link](https://discuss.huggingface.co/t/runtimeerror-blank-must-be-in-label-range/4976/5). But I did not find a corresponding answer. \r\n\r\nDo you have any suggestions for solving this problem? \r\n\r\nThanks again and looking forward to hearing from you.\r\n\r\n\r\n\r\n",
"Hey @YingLi001, sorry for the late reply!\r\n\r\nA1: The way you've set the vocab size with the config is entirely correct!\r\n\r\nA2: Interesting! I would then re-use the tokenizer for TIMIT dataset when fine-tuning on Torgo and load the Wav2Vec2 checkpoint in it's entirety. There seems to be no need in building a new tokenizer if the vocabularies overlap entirely. You'll then also retain the knowledge of the last linear layer (LM head) when you load the checkpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @YingLi001! I hope the above comments answered your questions. Feel free to reopen this issue if you're still encountering problems loading the `state_dict`, or a new issue if there's something else you're having issues with! More than happy to help π€"
] | 1,660
| 1,665
| 1,665
|
NONE
| null |
### System Info
Transformers version: 4.4.0
Platform: Google Colab
Python version: 3.7
### Who can help?
@patrickvonplaten, @anton-l, @sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Pre-trained model: "facebook/wav2vec2-large-xlsr-53"
I have fine-tuned the above pre-trained model on the Timit dataset.
When I loaded my own dataset (named Torgo) and tried to evaluate the fine-tuned model's performance on it, I got the following error:
`RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([64, 1024]) from checkpoint, the shape in current model is torch.Size([51, 1024]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([51]).`
### Expected behavior
My expected behavior is that I can directly evaluate the fine-tuned model on the Torgo dataset. In other words, how do I further train a fine-tuned model on a different dataset?
After reading some materials on HuggingFace and GitHub, I know this happens because the `config.vocab_size` of the Torgo dataset does not match that of the Timit dataset.
My questions are as follows:
1. When I align the vocab size of the Torgo dataset to the one of the fine-tuned model, do I need to guarantee that the vocab extracted from the Torgo dataset is the same as that extracted from the Timit dataset, or do I only need to care about the size?
2. If I cannot align the vocab size to the one of the fine-tuned model, is there any other method by which I can achieve the expected behavior?
3. Could I just load the encoder or decoder components from the fine-tuned model?
Because I am a beginner with the wav2vec2 model, feel free to correct me if something I mentioned above is wrong. Thank you in advance and looking forward to hearing from you.
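For reference, a minimal sketch of the approach suggested in the comments above: loading the fine-tuned checkpoint while re-initialising only the LM head for a new vocabulary (the checkpoint path and vocab size here are placeholders):
```python
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

config = Wav2Vec2Config.from_pretrained("path/to/timit-checkpoint")  # placeholder path
config.vocab_size = 51  # size of the new (Torgo) tokenizer's vocabulary

# ignore_mismatched_sizes skips the lm_head weights whose shapes differ and
# randomly initialises them; all other weights are loaded from the checkpoint.
model = Wav2Vec2ForCTC.from_pretrained(
    "path/to/timit-checkpoint", config=config, ignore_mismatched_sizes=True
)
```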
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18612/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18611
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18611/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18611/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18611/events
|
https://github.com/huggingface/transformers/issues/18611
| 1,337,865,063
|
I_kwDOCUB6oc5PvjNn
| 18,611
|
OSError in linux server
|
{
"login": "PersianSpock",
"id": 16386426,
"node_id": "MDQ6VXNlcjE2Mzg2NDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/16386426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PersianSpock",
"html_url": "https://github.com/PersianSpock",
"followers_url": "https://api.github.com/users/PersianSpock/followers",
"following_url": "https://api.github.com/users/PersianSpock/following{/other_user}",
"gists_url": "https://api.github.com/users/PersianSpock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PersianSpock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PersianSpock/subscriptions",
"organizations_url": "https://api.github.com/users/PersianSpock/orgs",
"repos_url": "https://api.github.com/users/PersianSpock/repos",
"events_url": "https://api.github.com/users/PersianSpock/events{/privacy}",
"received_events_url": "https://api.github.com/users/PersianSpock/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
### System Info
Python 3.8.10
version of transformers == 4.0.1
linux server
### Who can help?
@patil-suraj
Hi. I'm fine-tuning TrOCR in Farsi.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
tokenizer xlm-roberta-large
model:
`model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained("google/vit-base-patch16-224-in21k", 'facebook/mbart-large-50', from_tf=True)`
### Expected behavior
I expect it to run and train the model as it does in Colab, but it gives me this error:
> OSError: Unable to load weights from pytorch checkpoint file for '/root/.cache/huggingface/transformers/d01bfc4a52063e6f2cc1bc7063192e012043a7c6d8e75981bb6afbb9dc911001.e4710baf72bd00d091aab2ae692d487c057734cf044ba421696823447b95521e' at '/root/.cache/huggingface/transformers/d01bfc4a52063e6f2cc1bc7063192e012043a7c6d8e75981bb6afbb9dc911001.e4710baf72bd00d091aab2ae692d487c057734cf044ba421696823447b95521e'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18611/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18610
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18610/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18610/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18610/events
|
https://github.com/huggingface/transformers/pull/18610
| 1,337,717,607
|
PR_kwDOCUB6oc49HkNY
| 18,610
|
Added Docstrings for Deberta and DebertaV2 [PyTorch]
|
{
"login": "Tegzes",
"id": 48134725,
"node_id": "MDQ6VXNlcjQ4MTM0NzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/48134725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tegzes",
"html_url": "https://github.com/Tegzes",
"followers_url": "https://api.github.com/users/Tegzes/followers",
"following_url": "https://api.github.com/users/Tegzes/following{/other_user}",
"gists_url": "https://api.github.com/users/Tegzes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tegzes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tegzes/subscriptions",
"organizations_url": "https://api.github.com/users/Tegzes/orgs",
"repos_url": "https://api.github.com/users/Tegzes/repos",
"events_url": "https://api.github.com/users/Tegzes/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tegzes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh I made the changes you mentioned in this PR: https://github.com/huggingface/transformers/pull/17997\r\n",
"@patrickvonplaten would like to have your feedback on this",
"request @LysandreJik for a final review in order to merge",
"Let's merge as it is. The usage of tiny models for doc is not ideal, but we decided to use them so doctest could run. There are several (downstream) models have done this way. We/I can definitely find some time to train (at least a few) models.\r\n",
"Sounds good to me!"
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
Adds Doctest for DeBerta and DeBertaV2 [Pytorch version]
Issue: #16292
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ydshieh @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18610/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18610",
"html_url": "https://github.com/huggingface/transformers/pull/18610",
"diff_url": "https://github.com/huggingface/transformers/pull/18610.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18610.patch",
"merged_at": 1661863581000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18609
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18609/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18609/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18609/events
|
https://github.com/huggingface/transformers/issues/18609
| 1,337,653,217
|
I_kwDOCUB6oc5Puvfh
| 18,609
|
Optuna hyperparameter does not sync trial/hyperparameters when using torchrun single-node, multi-process
|
{
"login": "spigo900",
"id": 6877173,
"node_id": "MDQ6VXNlcjY4NzcxNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6877173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spigo900",
"html_url": "https://github.com/spigo900",
"followers_url": "https://api.github.com/users/spigo900/followers",
"following_url": "https://api.github.com/users/spigo900/following{/other_user}",
"gists_url": "https://api.github.com/users/spigo900/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spigo900/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spigo900/subscriptions",
"organizations_url": "https://api.github.com/users/spigo900/orgs",
"repos_url": "https://api.github.com/users/spigo900/repos",
"events_url": "https://api.github.com/users/spigo900/events{/privacy}",
"received_events_url": "https://api.github.com/users/spigo900/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Can confirm this still happens as of commit c126a239bcea9c68453cf86045a5177afbe2be6c.",
"Hi, I have enabled the HPO DDP for optuna, and it works for CPU, you could try it in the latest master.",
"Hi, @sywangyi. Are you referring to #19096 (merged as 6227078d0a95aed688578d37b319e969a1dcd30f)? It's not clear to me that this should fix the problem -- because each process calls optimize() on its own I'd expect each process to generate a different Optuna trial object and for that to cause problems when passing the trial object to `train()`. However, I did try rerunning the OP (reproduce) script on the latest commit (83dc6377d0107b462e5d804ffa72d069625bc36b). It crashed with `RuntimeError: DDP expects same model across all ranks, but Rank 0 has 199 params, while rank 1 has inconsistent 0 params.`. Unsure if that's related.",
"Hi @spigo900 ,I am referring to https://github.com/huggingface/transformers/pull/19002, only rank0 will generate the trial and pass the argument to other ranks",
"@sywangyi I see, that looks like it should solve the problem. I will try that today. ETA: Thank you.",
"@sywangyi Yes, #19168 solved the problem. Thanks again.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,666
| 1,666
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Note this requires running a script with `torchrun` on a single node but multiple processes. I ran this on a computer with 4 GPUs, so I used 1 node, 4 processes per node.
1. Install dependencies: `datasets`, `evaluate` for example script, `optuna` itself.
2. Unzip scripts: [scripts_to_reproduce.zip](https://github.com/huggingface/transformers/files/9329512/scripts_to_reproduce.zip).
3. Run `bug.sh`.
4. Observe that the output mentions 4 trials, rather than the 1 the arguments specify; note that each reported learning rate is different.
Example of relevant output:
```
[INFO|trainer.py:1612] 2022-08-12 15:12:03,173 >> ***** Running training *****
[INFO|trainer.py:1613] 2022-08-12 15:12:03,173 >> Num examples = 1024
[INFO|trainer.py:1614] 2022-08-12 15:12:03,173 >> Num Epochs = 3
[INFO|trainer.py:1615] 2022-08-12 15:12:03,173 >> Instantaneous batch size per device = 64
[INFO|trainer.py:1616] 2022-08-12 15:12:03,173 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1617] 2022-08-12 15:12:03,173 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1618] 2022-08-12 15:12:03,174 >> Total optimization steps = 12
0%| | 0/12 [00:00<?, ?it/s][W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[W reducer.cpp:1251] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
100%|ββββββββββ| 12/12 [00:07<00:00, 1.79it/s][INFO|trainer.py:1857] 2022-08-12 15:12:11,105 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 7.9321, 'train_samples_per_second': 387.289, 'train_steps_per_second': 1.513, 'train_loss': 4.8337141672770185, 'epoch': 3.0}
100%|ββββββββββ| 12/12 [00:07<00:00, 1.51it/s]
[INFO|trainer.py:729] 2022-08-12 15:12:11,143 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: offset_mapping, example_id. If offset_mapping, example_id are not expected by `BertForQuestionAnswering.forward`, you can safely ignore this message.
[INFO|trainer.py:2902] 2022-08-12 15:12:11,148 >> ***** Running Evaluation *****
[INFO|trainer.py:2904] 2022-08-12 15:12:11,148 >> Num examples = 1024
[INFO|trainer.py:2907] 2022-08-12 15:12:11,148 >> Batch size = 64
100%|ββββββββββ| 4/4 [00:00<00:00, 7.13it/s]08/12/2022 15:12:12 - INFO - utils_qa - Post-processing 1024 example predictions split into 1024 features.
100%|ββββββββββ| 1024/1024 [00:02<00:00, 392.92it/s]
08/12/2022 15:12:15 - INFO - utils_qa - Saving predictions to /tmp/debug_hpsearch_TrfOptunaBug_16693069.out/eval_predictions.json.
08/12/2022 15:12:15 - INFO - utils_qa - Saving nbest_preds to /tmp/debug_hpsearch_TrfOptunaBug_16693069.out/eval_nbest_predictions.json.
100%|ββββββββββ| 1024/1024 [00:02<00:00, 394.31it/s]
100%|ββββββββββ| 1024/1024 [00:02<00:00, 393.04it/s]
100%|ββββββββββ| 1024/1024 [00:03<00:00, 337.82it/s]
[I 2022-08-12 15:12:16,218] Trial 1 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 5.82055234642441e-06}. Best is trial 1 with value: 3.0207817191957127.
[I 2022-08-12 15:12:16,471] Trial 2 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 1.0083131394917086e-06}. Best is trial 1 with value: 3.0207817191957127.
100%|ββββββββββ| 4/4 [00:05<00:00, 1.30s/it]
08/12/2022 15:12:16 - INFO - __main__ - Rank 0: Metrics: {'eval_exact_match': 0.09765625, 'eval_f1': 3.0207817191957127, 'epoch': 3.0}
[I 2022-08-12 15:12:16,788] Trial 3 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 5.780902974449181e-06}. Best is trial 1 with value: 3.0207817191957127.
08/12/2022 15:12:16 - INFO - __main__ - Best HP search run results: {'id': '1', 'value': 3.0207817191957127, 'all_metrics': None, 'hyperparameters': {'learning_rate': 5.82055234642441e-06}, 'value_name': 'brier_score', 'train_samples': 1024}
[I 2022-08-12 15:12:17,173] Trial 0 finished with value: 3.0207817191957127 and parameters: {'learning_rate': 1.1949546634616653e-06}. Best is trial 1 with value: 3.0207817191957127.
[INFO|modelcard.py:443] 2022-08-12 15:12:17,212 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Question Answering', 'type': 'question-answering'}, 'dataset': {'name': 'squad', 'type': 'squad', 'config': 'plain_text', 'split': 'train', 'args': 'plain_text'}}
```
### Expected behavior
I expected this setup to produce and report 1 trial of results, with each GPU-process using the same hyperparameters, in this case the same learning rate. I expected `trainer.hyperparameter_search()` would be consistent in this way with how `trainer.train()` and `trainer.evaluate()` work. Instead the script reports 4 results and each GPU-process apparently uses a different learning rate.
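For context, a minimal sketch of the kind of search the attached script performs (`trainer` here stands in for the `Trainer` built with `model_init=...` in that script; the search space is illustrative):
```python
def hp_space(trial):
    # One tunable hyperparameter, mirroring the learning-rate search above.
    return {"learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-5, log=True)}

# `trainer` is a Trainer constructed with model_init=... (as in the attached script)
best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="optuna",
    hp_space=hp_space,
    n_trials=1,  # expected: a single trial shared across all torchrun processes
)
print(best_run.hyperparameters)
```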
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18609/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18608
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18608/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18608/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18608/events
|
https://github.com/huggingface/transformers/issues/18608
| 1,337,565,848
|
I_kwDOCUB6oc5PuaKY
| 18,608
|
IterableDatasets result in nan loss in eval with dataloader_num_workers>=1 and multi-gpu
|
{
"login": "dlwh",
"id": 9633,
"node_id": "MDQ6VXNlcjk2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlwh",
"html_url": "https://github.com/dlwh",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"repos_url": "https://api.github.com/users/dlwh/repos",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks for flagging. The PR above should fix the issue, could you give it a quick try?"
] | 1,660
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: YES
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run this modified/minimized [run_clm.py](https://gist.github.com/dlwh/074e2571fab15f94103603674dd184a3) under DeepSpeed (or presumably any other multiprocessing, but I didn't check)
The script works fine if you don't use multiprocessing, or if you change it to not use an IterableDataset, or if you set `dataloader_num_workers` to 0 (which is the default).
Relevant bit of logs:
```
Traceback (most recent call last):
File "run_clm.py", line 125, in <module>
main()
File "run_clm.py", line 116, in main
assert np.isfinite(metrics["eval_loss"])
AssertionError
```
### Expected behavior
The assertion shouldn't fail, or at least Trainer should require that `dataloader_num_workers` is 0 if using multi-GPU and an IterableDataset...
The underlying issue is that Trainer creates `IterableDatasetShard`s when using multi-GPU and an IterableDataset, and [evaluation_loop](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3024-L3027) looks at the `num_examples` property of the IterableDatasetShard, but this value isn't actually incremented in the main training process if you're using `dataloader_num_workers>0`, because it's set in the worker processes...
I will note that `evaluation_loop` goes to some trouble [to track the actual number of examples](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2935-L2944) so unless I'm missing something I think one could just always use that.
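As a standalone illustration of the underlying behaviour (a minimal sketch, not the Trainer code itself): state mutated on a dataset inside DataLoader worker processes never propagates back to the copy held by the parent process.
```python
import torch
from torch.utils.data import DataLoader, IterableDataset

class Counting(IterableDataset):
    def __init__(self):
        self.num_examples = 0

    def __iter__(self):
        for i in range(8):
            self.num_examples += 1  # increments the worker's pickled copy
            yield torch.tensor(i)

if __name__ == "__main__":
    ds = Counting()
    for _ in DataLoader(ds, num_workers=2):
        pass
    print(ds.num_examples)  # prints 0: the parent's counter was never touched
```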
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18608/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18607
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18607/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18607/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18607/events
|
https://github.com/huggingface/transformers/pull/18607
| 1,337,539,179
|
PR_kwDOCUB6oc49G-bB
| 18,607
|
Changed the class on which `register_for_auto_class` method is defined from `TFSequenceSummary` to `TFPreTrainedModel`
|
{
"login": "azonti",
"id": 28558912,
"node_id": "MDQ6VXNlcjI4NTU4OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/28558912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/azonti",
"html_url": "https://github.com/azonti",
"followers_url": "https://api.github.com/users/azonti/followers",
"following_url": "https://api.github.com/users/azonti/following{/other_user}",
"gists_url": "https://api.github.com/users/azonti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/azonti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/azonti/subscriptions",
"organizations_url": "https://api.github.com/users/azonti/orgs",
"repos_url": "https://api.github.com/users/azonti/repos",
"events_url": "https://api.github.com/users/azonti/events{/privacy}",
"received_events_url": "https://api.github.com/users/azonti/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @LysandreJik, could you review please? ",
"Let me ping @sgugger for review, he's more acquainted with this code and will be back from leave shortly :)"
] | 1,660
| 1,662
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
Changed the class on which `register_for_auto_class` method is defined from `TFSequenceSummary` to `TFPreTrainedModel`.
It does not make sense that `register_for_auto_class` is defined on `TFSequenceSummary`. I believe this is a bug in PR #15379.
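For context, a minimal sketch of how the method is meant to be used once it lives on `TFPreTrainedModel` (the model class here is just for illustration; in practice this matters for custom model classes shared on the Hub):
```python
from transformers import TFBertModel

# Inherited from TFPreTrainedModel after this fix, so it can be called on any
# TF model class to register it for loading via the TFAutoModel factory.
TFBertModel.register_for_auto_class("TFAutoModel")
```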
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18607/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18607",
"html_url": "https://github.com/huggingface/transformers/pull/18607",
"diff_url": "https://github.com/huggingface/transformers/pull/18607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18607.patch",
"merged_at": 1661956639000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18606
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18606/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18606/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18606/events
|
https://github.com/huggingface/transformers/pull/18606
| 1,337,456,999
|
PR_kwDOCUB6oc49Gr9C
| 18,606
|
Fix Yolos ONNX export test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
YOLOS has an issue with ONNX export on CUDA. Let's skip it, just like [this](https://github.com/ultralytics/yolov5/pull/8378).
**Question**: we could re-enable this if there were a way to run the export non-dynamically for this model.
Current job failure:
```
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_143_yolos_default
(line 318) AssertionError: yolos, default -> Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu! (when checking argument for argument index in method wrapper__index_select)
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_144_yolos_object_detection
(line 318) AssertionError: yolos, object-detection -> Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18606/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18606",
"html_url": "https://github.com/huggingface/transformers/pull/18606",
"diff_url": "https://github.com/huggingface/transformers/pull/18606.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18606.patch",
"merged_at": 1660723490000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18605
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18605/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18605/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18605/events
|
https://github.com/huggingface/transformers/pull/18605
| 1,337,451,472
|
PR_kwDOCUB6oc49Gqss
| 18,605
|
[WIP] Introduce NestLayer
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18605). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,667
| null |
COLLABORATOR
| null |
# What does this PR do?
Outline for a new layer class to replace `tf.keras.layers.Layer` in our models. It extends `tf.keras.layers.Layer` with the `get_layer` method and the `layers` property from `tf.keras.Model`.
## Motivation
All of our TF models' layers are subclasses of `tf.keras.layers.Layer`. Unfortunately, when there are nested layers, we are not able to access the layers below the first level using the typical keras `layers` API.
The main reason for introducing this is to be able to use our models as backbones. In DETR, we replace the ResNet backbone's [batchnorm layers with frozen batchnorm layers](https://github.com/huggingface/transformers/blob/2ab790e82d0759b667cd848a4d49e6ad65e15d59/src/transformers/models/detr/modeling_detr.py#L306). We need to be able to perform the same or similar operations on our TF models. This requires being able to access all of the layers, which is currently not possible.
For example - our `TFResNetModel` only shows `TFResNetMainLayer` when we call `model.summary(expand_nested=True)`, and `TFResNetMainLayer` has no `layers` property.
```
In [1]: from transformers import TFResNetModel
In [2]: model_checkpoint = "microsoft/resnet-50"
In [3]: model = TFResNetModel.from_pretrained(model_checkpoint)
In [4]: model.summary(expand_nested=True)
Model: "tf_res_net_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resnet (TFResNetMainLayer) multiple 23561152
=================================================================
Total params: 23,561,152
Trainable params: 23,508,032
Non-trainable params: 53,120
_________________________________________________________________
In [5]: model.layers
Out[5]: [<transformers.models.resnet.modeling_tf_resnet.TFResNetMainLayer at 0x17fb9daf0>]
In [6]: hasattr(model.layers[0], 'layers')
Out[6]: False
```
This is also necessary if we ever want to be able to access the intermediate activations of our TF models.
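A rough sketch of the idea (the name and body are illustrative, not the final implementation; it leans on `_flatten_layers`, the private Keras helper that backs `tf.keras.Model.layers`):
```python
import tensorflow as tf

class NestLayer(tf.keras.layers.Layer):
    """A Layer that exposes its tracked sub-layers the way tf.keras.Model does."""

    @property
    def layers(self):
        # Sub-layers that Keras already tracks on this layer, one level deep.
        return list(self._flatten_layers(include_self=False, recursive=False))

    def get_layer(self, name):
        for layer in self.layers:
            if layer.name == name:
                return layer
        raise ValueError(f"No such layer: {name}")
```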
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18605/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18605/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18605",
"html_url": "https://github.com/huggingface/transformers/pull/18605",
"diff_url": "https://github.com/huggingface/transformers/pull/18605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18605.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18604
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18604/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18604/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18604/events
|
https://github.com/huggingface/transformers/pull/18604
| 1,337,407,872
|
PR_kwDOCUB6oc49GhKK
| 18,604
|
[Donut] Fix URLs
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the URLs to my Donut notebooks.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18604/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18604",
"html_url": "https://github.com/huggingface/transformers/pull/18604",
"diff_url": "https://github.com/huggingface/transformers/pull/18604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18604.patch",
"merged_at": 1660323169000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18603
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18603/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18603/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18603/events
|
https://github.com/huggingface/transformers/issues/18603
| 1,337,232,651
|
I_kwDOCUB6oc5PtI0L
| 18,603
|
NAN values appears when including a new padding token in my tokenizer
|
{
"login": "tessanix",
"id": 51161698,
"node_id": "MDQ6VXNlcjUxMTYxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/51161698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tessanix",
"html_url": "https://github.com/tessanix",
"followers_url": "https://api.github.com/users/tessanix/followers",
"following_url": "https://api.github.com/users/tessanix/following{/other_user}",
"gists_url": "https://api.github.com/users/tessanix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tessanix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tessanix/subscriptions",
"organizations_url": "https://api.github.com/users/tessanix/orgs",
"repos_url": "https://api.github.com/users/tessanix/repos",
"events_url": "https://api.github.com/users/tessanix/events{/privacy}",
"received_events_url": "https://api.github.com/users/tessanix/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ydshieh, would you like to take a look at this issue?",
"Hi @tessanix, thank you for reporting. Could you provide a self-contained code snippet that could be run and reproduce the issue.\r\nSo far, `dataset` is not defined, neither `ds`. And `model` is used (`model.prepare_tf_dataset`) before it is created.\r\n\r\nIt would be really helpful to have a self-contained code snippet for debugging π . Thank you.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
I'm trying to fine-tune a DialoGPT model on a new dataset. I already processed my data, and adding a new padding token to the tokenizer didn't seem to cause any issues:
```python
#my dataset :
print(dataset)
print(dataset[0]['text'])
```
> ### output ###
>
> Dataset({
> features: ['text'],
> num_rows: 48423
> })
>
> [speaker 1]: Great that you wish to hear the voices of the guitarists. Here are your booking details of the tickets. You wish to purchase 4 tickets for the event The Original Wailers that is going to take place on March 8th in Berkeley, right?
> [speaker 2]: Yup, you're right. Please May I know where is the event conducted and I need the complete address?
> [speaker 1]: Please note down the complete address of the event happening. It's at Cornerstone Craft Beer & Live Music, 2367 Shattuck Avenue. Your reservation is successful and have a great time there!
> [speaker 2]: Thanks much for the information you've given. Please can you help me to find some intermediate priced restaurant that provides Ethiopian kind of food.
> [speaker 1]: Yup! There is an Ethiopian Restaurant named Addis Restaurant providing excellent and authentic traditional Ethiopian cuisine located in Berkeley. Do you wish to reserve a table here?
> [speaker 2]:
```python
# tokenizing and adding labels
tokenizer.add_special_tokens({'pad_token': '[PAD]'})

def tokenize_function(examples):
    return tokenizer(examples["text"], padding='max_length', add_special_tokens=True, max_length=246)  # truncation=True, max_length=13

tokenized_datasets = ds.map(
    tokenize_function, batched=True, num_proc=4, remove_columns=["text"]
)
tokenized_datasets = tokenized_datasets.add_column("labels", tokenized_datasets[:]['input_ids'])

train_set = model.prepare_tf_dataset(
    tokenized_datasets,
    shuffle=True,
    batch_size=1,
)

sample = train_set.as_numpy_iterator()
sample = sample.next()
print(tokenized_datasets)
print(train_set)
print(sample)
```
> ### output ###
>
> Dataset({
> features: ['input_ids', 'attention_mask', 'labels'],
> num_rows: 48423
> })
>
> <PrefetchDataset element_spec=({'input_ids': TensorSpec(shape=(1, 246), dtype=tf.int64, name=None), 'attention_mask': TensorSpec(shape=(1, 246), dtype=tf.int64, name=None)}, TensorSpec(shape=(1, 246), dtype=tf.int64, name=None))>
>
> ({'attention_mask': array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> 0, 0, 0, 0]]),
> 'input_ids': array([[ 58, 4125, 3110, 352, 5974, 314, 765, 284, 711,
> 440, 9190, 440, 14918, 440, 3825, 319, 616, 3359,
> 13, 198, 58, 4125, 3110, 362, 5974, 921, 765,
> 284, 3350, 262, 3496, 440, 9190, 440, 14918, 440,
> 3825, 4291, 262, 3195, 11, 826, 30, 198, 58,
> 4125, 3110, 352, 5974, 1320, 318, 826, 13, 1867,
> 2099, 286, 3496, 318, 340, 30, 198, 58, 4125,
> 3110, 362, 5974, 632, 318, 5610, 739, 262, 12136,
> 6536, 290, 534, 3496, 468, 2067, 13, 198, 58,
> 4125, 3110, 352, 5974, 20558, 617, 1637, 329, 502,
> 13, 198, 58, 4125, 3110, 362, 5974, 220, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257]])},
> array([[ 58, 4125, 3110, 352, 5974, 314, 765, 284, 711,
> 440, 9190, 440, 14918, 440, 3825, 319, 616, 3359,
> 13, 198, 58, 4125, 3110, 362, 5974, 921, 765,
> 284, 3350, 262, 3496, 440, 9190, 440, 14918, 440,
> 3825, 4291, 262, 3195, 11, 826, 30, 198, 58,
> 4125, 3110, 352, 5974, 1320, 318, 826, 13, 1867,
> 2099, 286, 3496, 318, 340, 30, 198, 58, 4125,
> 3110, 362, 5974, 632, 318, 5610, 739, 262, 12136,
> 6536, 290, 534, 3496, 468, 2067, 13, 198, 58,
> 4125, 3110, 352, 5974, 20558, 617, 1637, 329, 502,
> 13, 198, 58, 4125, 3110, 362, 5974, 220, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257, 50257,
> 50257, 50257, 50257]]))
The outputs so far look pretty clean to me. But when I try to make a prediction with my model or train it, I get NaN values as output:
```python
# Instantiation of model
from transformers import TFAutoModelForCausalLM, AdamWeightDecay

model = TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
optimizer = AdamWeightDecay(learning_rate=1e-9, weight_decay_rate=0.01)
model.compile(optimizer=optimizer, jit_compile=True)
```
```python
#model inference
loss = model(sample[0], labels=sample[1])
print(loss)
```
> ### output ###
>
> TFCausalLMOutputWithCrossAttentions([('loss',
> <tf.Tensor: shape=(1,), dtype=float32, numpy=array([nan], dtype=float32)>),
> ('logits',
> <tf.Tensor: shape=(1, 246, 50258), dtype=float32, numpy=
> array([[[nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> ...,
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan]]], dtype=float32)>),
> ('past_key_values',
> (<tf.Tensor: shape=(2, 1, 16, 246, 64), dtype=float32, numpy=
> array([[[[[nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> ...,
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan]],
>
> [[nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> ...,
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan],
> [nan, nan, nan, ..., nan, nan, nan]],
> .............
```python
#model training
model.fit(train_set, epochs=1)
```
> ### output ###
>
> 56/48423 [..............................] - ETA: 2:27:49 - loss: nan
These NaN values are most likely caused by the newly added '[PAD]' token, but I don't know how to deal with it.
Can someone help me, please?
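For reference, the usual pattern when adding a new special token is sketched below. This is an assumption, not a confirmed fix for this report: the new pad id (50257) falls outside GPT-2's original embedding matrix (valid ids 0-50256), so the embeddings must be resized after extending the tokenizer.
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})

model = TFAutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
# Grow the embedding matrix so the new token id 50257 becomes a valid index
model.resize_token_embeddings(len(tokenizer))
```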
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18603/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18602
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18602/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18602/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18602/events
|
https://github.com/huggingface/transformers/pull/18602
| 1,337,201,484
|
PR_kwDOCUB6oc49F2Wt
| 18,602
|
Remove pos arg from Perceiver's Pre/Postprocessors
|
{
"login": "aielawady",
"id": 46355173,
"node_id": "MDQ6VXNlcjQ2MzU1MTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/46355173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aielawady",
"html_url": "https://github.com/aielawady",
"followers_url": "https://api.github.com/users/aielawady/followers",
"following_url": "https://api.github.com/users/aielawady/following{/other_user}",
"gists_url": "https://api.github.com/users/aielawady/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aielawady/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aielawady/subscriptions",
"organizations_url": "https://api.github.com/users/aielawady/orgs",
"repos_url": "https://api.github.com/users/aielawady/repos",
"events_url": "https://api.github.com/users/aielawady/events{/privacy}",
"received_events_url": "https://api.github.com/users/aielawady/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
Fixes #15971
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18602/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18602",
"html_url": "https://github.com/huggingface/transformers/pull/18602",
"diff_url": "https://github.com/huggingface/transformers/pull/18602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18602.patch",
"merged_at": 1664196658000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18601
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18601/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18601/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18601/events
|
https://github.com/huggingface/transformers/pull/18601
| 1,337,143,499
|
PR_kwDOCUB6oc49Fpzs
| 18,601
|
Update run_mlm_no_trainer.py
|
{
"login": "vedant-z",
"id": 93431609,
"node_id": "U_kgDOBZGnOQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93431609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vedant-z",
"html_url": "https://github.com/vedant-z",
"followers_url": "https://api.github.com/users/vedant-z/followers",
"following_url": "https://api.github.com/users/vedant-z/following{/other_user}",
"gists_url": "https://api.github.com/users/vedant-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vedant-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vedant-z/subscriptions",
"organizations_url": "https://api.github.com/users/vedant-z/orgs",
"repos_url": "https://api.github.com/users/vedant-z/repos",
"events_url": "https://api.github.com/users/vedant-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/vedant-z/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@muellerzr Please let me know if I have to make some changes or have I done it correctly",
"_The documentation is not available anymore as the PR was closed or merged._",
"@vedant-z Why did you close the request yourself? Was there a mistake? "
] | 1,660
| 1,672
| 1,660
|
NONE
| null |
Fixes #18436
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18601/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18601",
"html_url": "https://github.com/huggingface/transformers/pull/18601",
"diff_url": "https://github.com/huggingface/transformers/pull/18601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18601.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18600
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18600/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18600/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18600/events
|
https://github.com/huggingface/transformers/pull/18600
| 1,337,104,951
|
PR_kwDOCUB6oc49FheC
| 18,600
|
Add `TFAutoModelForSemanticSegmentation` to the main `__init__.py`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for fixing!",
"test failure is irrelevant. Merge now."
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
Currently, `from transformers import TFAutoModelForSemanticSegmentation` fails. This PR adds the class to the main `__init__.py` so the import works.
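As a sanity check once this is merged, the import below should succeed (nothing else assumed):
```python
# Should no longer raise an ImportError after this PR
from transformers import TFAutoModelForSemanticSegmentation
```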
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18600/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18600",
"html_url": "https://github.com/huggingface/transformers/pull/18600",
"diff_url": "https://github.com/huggingface/transformers/pull/18600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18600.patch",
"merged_at": 1660309801000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18599
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18599/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18599/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18599/events
|
https://github.com/huggingface/transformers/issues/18599
| 1,337,090,060
|
I_kwDOCUB6oc5PsmAM
| 18,599
|
how to customize the encoder_output when using the generate function in BART?
|
{
"login": "xjw-star",
"id": 110547608,
"node_id": "U_kgDOBpbSmA",
"avatar_url": "https://avatars.githubusercontent.com/u/110547608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xjw-star",
"html_url": "https://github.com/xjw-star",
"followers_url": "https://api.github.com/users/xjw-star/followers",
"following_url": "https://api.github.com/users/xjw-star/following{/other_user}",
"gists_url": "https://api.github.com/users/xjw-star/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xjw-star/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xjw-star/subscriptions",
"organizations_url": "https://api.github.com/users/xjw-star/orgs",
"repos_url": "https://api.github.com/users/xjw-star/repos",
"events_url": "https://api.github.com/users/xjw-star/events{/privacy}",
"received_events_url": "https://api.github.com/users/xjw-star/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,660
| 1,663
| 1,663
|
NONE
| null |
For instance, I would like to concatenate the last encoder hidden states from two different texts. How can I achieve this using the existing `generate` function?
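Not an official recipe, but one possible sketch (untested): run the encoder yourself for each text, concatenate along the sequence axis, and pass the result to `generate` as `encoder_outputs` together with a matching `attention_mask`:
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

enc_a = tokenizer("first text", return_tensors="pt")
enc_b = tokenizer("second text", return_tensors="pt")

# Run the encoder once per text, then concatenate along the sequence axis
hidden_a = model.get_encoder()(**enc_a).last_hidden_state
hidden_b = model.get_encoder()(**enc_b).last_hidden_state
merged = torch.cat([hidden_a, hidden_b], dim=1)
mask = torch.cat([enc_a.attention_mask, enc_b.attention_mask], dim=1)

out = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=merged),
    attention_mask=mask,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```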
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18599/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18598
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18598/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18598/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18598/events
|
https://github.com/huggingface/transformers/pull/18598
| 1,337,018,326
|
PR_kwDOCUB6oc49FOmF
| 18,598
|
mac m1 `mps` integration
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
1. Enables users to leverage Apple M1 GPUs via the `mps` device type in PyTorch for faster training and inference than on CPU. Fixes #17971
2. Users just have to pass the `--use_mps_device` argument.
For example, you can run the official GLUE text classification task (from the root folder) on an Apple Silicon M1 GPU with the command below:
```bash
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--use_mps_device \
--overwrite_output_dir
```
Below are the output logs:
```bash
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--use_mps_device \
--overwrite_output_dir
NOTE: Redirects are currently not supported in Windows or MacOs.
08/12/2022 15:30:13 - WARNING - __main__ - Process rank: -1, device: mps, n_gpu: -1distributed training: False, 16-bits training: False
08/12/2022 15:30:13 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=-1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=/tmp/mrpc/runs/Aug12_15-30-12_Sourabs-MacBook-Pro.local,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_hf,
output_dir=/tmp/mrpc/,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=32,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=/tmp/mrpc/,
save_on_each_node=False,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=True,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
08/12/2022 15:30:14 - INFO - datasets.info - Loading Dataset Infos from
...
[INFO|configuration_utils.py:643] 2022-08-12 15:30:17,041 >> loading configuration file config.json from cache at /Users/sourabmangrulkar/.cache/huggingface/hub/models--bert-base-cased/snapshots/a8d257ba9925ef39f3036bfc338acf5283c512d9/config.json
[INFO|configuration_utils.py:695] 2022-08-12 15:30:17,042 >> Model config BertConfig {
"_name_or_path": "bert-base-cased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.22.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 28996
}
...
08/12/2022 15:30:19 - INFO - __main__ - Sample 2619 of the training set: {'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .', 'label': 1, 'idx': 2916, 'input_ids': [101, 1109, 10830, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 3081, 5097, 1104, 4961, 1149, 13260, 9966, 1222, 1140, 119, 102, 20661, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 170, 3081, 118, 3674, 21100, 2998, 1106, 1103, 2175, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}.
...
[INFO|trainer.py:1612] 2022-08-12 15:30:22,027 >> ***** Running training *****
[INFO|trainer.py:1613] 2022-08-12 15:30:22,027 >> Num examples = 3668
[INFO|trainer.py:1614] 2022-08-12 15:30:22,027 >> Num Epochs = 3
[INFO|trainer.py:1615] 2022-08-12 15:30:22,027 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1616] 2022-08-12 15:30:22,027 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1617] 2022-08-12 15:30:22,027 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1618] 2022-08-12 15:30:22,027 >> Total optimization steps = 345
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 345/345 [09:38<00:00, 2.04s/it][INFO|trainer.py:1857] 2022-08-12 15:40:00,410 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 578.4189, 'train_samples_per_second': 19.024, 'train_steps_per_second': 0.596, 'train_loss': 0.4251004426375679, 'epoch': 3.0}
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 345/345 [09:38<00:00, 1.68s/it]
[INFO|trainer.py:2647] 2022-08-12 15:40:00,481 >> Saving model checkpoint to /tmp/mrpc/
[INFO|configuration_utils.py:440] 2022-08-12 15:40:00,487 >> Configuration saved in /tmp/mrpc/config.json
[INFO|modeling_utils.py:1569] 2022-08-12 15:40:01,553 >> Model weights saved in /tmp/mrpc/pytorch_model.bin
[INFO|tokenization_utils_base.py:2114] 2022-08-12 15:40:01,561 >> tokenizer config file saved in /tmp/mrpc/tokenizer_config.json
[INFO|tokenization_utils_base.py:2121] 2022-08-12 15:40:01,561 >> Special tokens file saved in /tmp/mrpc/special_tokens_map.json
***** train metrics *****
epoch = 3.0
train_loss = 0.4251
train_runtime = 0:09:38.41
train_samples = 3668
train_samples_per_second = 19.024
train_steps_per_second = 0.596
08/12/2022 15:40:01 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:729] 2022-08-12 15:40:01,619 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1, idx. If sentence2, sentence1, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2898] 2022-08-12 15:40:01,637 >> ***** Running Evaluation *****
[INFO|trainer.py:2900] 2022-08-12 15:40:01,637 >> Num examples = 408
[INFO|trainer.py:2903] 2022-08-12 15:40:01,637 >> Batch size = 8
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 51/51 [00:04<00:00, 11.68it/s]
***** eval metrics *****
epoch = 3.0
eval_accuracy = 0.8407
eval_combined_score = 0.8644
eval_f1 = 0.8881
eval_loss = 0.3957
eval_runtime = 0:00:04.80
eval_samples = 408
eval_samples_per_second = 84.915
eval_steps_per_second = 10.614
```
Attaching plots showing GPU usage on M1 pro with 10 CPU and 14 GPU cores:
<img width="393" alt="Screenshot 2022-08-12 at 3 37 37 PM" src="https://user-images.githubusercontent.com/13534540/184333060-85df7fc3-28ab-4a7f-9a61-787df6c19c90.png">
Note: as a prerequisite, install PyTorch with `mps` support:
```python
# installing torch with m1 support on mac
# install python 3.10.5
# check the platform
import platform
platform.platform()
'macOS-12.5-arm64-arm-64bit'
# (This is compatible as the macOS version is above 12.3 and it is the ARM64 version)
# install torch 1.12 via the below command
# pip3 install torch torchvision torchaudio
# test the `mps` device support
>>> import torch
>>> torch.has_mps
True
>>> a = torch.Tensor([10,11])
>>> a.to("mps")
/Users/mac/ml/lib/python3.10/site-packages/torch/_tensor_str.py:103: UserWarning: The operator 'aten::bitwise_and.Tensor_out' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
tensor([10.0000, 11.0000], device='mps:0')
```
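For reference, a minimal sketch of the device selection that `--use_mps_device` enables (simplified; the actual integration lives in `TrainingArguments`/`Trainer`):
```python
import torch

# Fall back to CPU when the MPS backend is unavailable (requires torch >= 1.12)
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

x = torch.ones(2, 2, device=device)
print(x.device)  # prints "mps:0" on a supported Apple Silicon machine
```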
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18598/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18598/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18598",
"html_url": "https://github.com/huggingface/transformers/pull/18598",
"diff_url": "https://github.com/huggingface/transformers/pull/18598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18598.patch",
"merged_at": 1660647892000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18597
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18597/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18597/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18597/events
|
https://github.com/huggingface/transformers/pull/18597
| 1,336,998,671
|
PR_kwDOCUB6oc49FKWH
| 18,597
|
[CvT] Tensorflow implementation
|
{
"login": "mathieujouffroy",
"id": 45208116,
"node_id": "MDQ6VXNlcjQ1MjA4MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/45208116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathieujouffroy",
"html_url": "https://github.com/mathieujouffroy",
"followers_url": "https://api.github.com/users/mathieujouffroy/followers",
"following_url": "https://api.github.com/users/mathieujouffroy/following{/other_user}",
"gists_url": "https://api.github.com/users/mathieujouffroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathieujouffroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathieujouffroy/subscriptions",
"organizations_url": "https://api.github.com/users/mathieujouffroy/orgs",
"repos_url": "https://api.github.com/users/mathieujouffroy/repos",
"events_url": "https://api.github.com/users/mathieujouffroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathieujouffroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR @mathieujouffroy! Let me ping @amyeroberts for review :)",
"You're welcome. Cool thanks, should I create an Issue ? ",
"Thanks a lot for both of your reviews π ! \r\nI've corrected the issues :) \r\nAlthough, I kept using `shape_list` instead of `tf.shape` throughout the implementation of the model as `tf.shape` was breaking things while running the tests (see comment above).\r\nShould I follow the instructions in this [PR comment](https://github.com/huggingface/transformers/pull/18678#issuecomment-1222244001) to upload to weights ?",
"@mathieujouffroy awesome, seems like we are ready to move on to the next stage. I'm adding @sgugger as the last reviewer.\r\n\r\nMeanwhile, you can open the PR to the TF model weights on the hub as follows:\r\n1. Make sure you have the latest version of the hub installed (`pip install huggingface_hub -U`) and that you are logged in to HF with a write token (`huggingface-cli login`)\r\n2. Run `transformers-cli pt-to-tf --model-name foo/bar` from this branch :D\r\n3. In the Hub PR, tag `@joaogante, @nielsr, @sgugger`",
"> @mathieujouffroy awesome, seems like we are ready to move on to the next stage. I'm adding @sgugger as the last reviewer.\r\n> \r\n> Meanwhile, you can open the PR to the TF model weights on the hub as follows:\r\n> \r\n> 1. Make sure you have the latest version of the hub installed (`pip install huggingface_hub -U`) and that you are logged in to HF with a write token (`huggingface-cli login`)\r\n> 2. Run `transformers-cli pt-to-tf --model-name foo/bar` from this branch :D\r\n> 3. In the Hub PR, tag `@joaogante, @nielsr, @sgugger`\r\n\r\nI am getting an error when using `transformers-cli pt-to-tf --model-name microsoft/cvt-13` : \r\n```\r\nFile \"/Users/MathieuJouffroy/transformers/src/transformers/commands/pt_to_tf.py\", line 307, in run\r\n + \"\\n\".join([f\"{k}: {v:.3e}\" for k, v in hidden_differences.items() if v > self._max_error])\r\nValueError: The cross-loaded TensorFlow model has different outputs, something went wrong!\r\n\r\nList of maximum output differences above the threshold (5e-05):\r\nlogits: 1.190e-04\r\n\r\nList of maximum hidden layer differences above the threshold (5e-05):\r\nhidden_states[2]: 1.227e-02\r\n```\r\n\r\nIt seems that both the `max_crossload_output_diff `and the `max_crossload_hidden_diff` are bigger than the `self._max_error` **(5e-5)**. \r\nRespectively I have `max_crossload_output_diff` = 0.00011897087 **(1.190e-04)** and `max_crossload_hidden_diff` = 0.012268066 **(1.227e-02)**.\r\n\r\nI am trying to figure out how to correct this error (WIP).",
"@mathieujouffroy ~1e-2 is quite large -- does this happen exclusively on `microsoft/cvt-13`, or across all CvT models?",
"> @mathieujouffroy ~1e-2 is quite large -- does this happen exclusively on `microsoft/cvt-13`, or across all CvT models?\r\n\r\nYess, unfortunately it happens across all CvT models.\r\nWhen inspecting the difference between the hidden states of the pytorch model and the hidden states of the tensorflow model, I can see that the difference increases throughout the model (with the number of layers).\r\nThe CvT model is composed of 3 stages of encoder block, with their respective number of layers being 1, 2 and 10. In the last stage, the difference between the torch model's hidden state and the tensorflow model's hidden state increases from ~e-5 at layer[0] to ~1e-2 at layer[9].\r\nFor `microsoft/cvt-13` : \r\n```\r\nCvtLayer output vs TFCvtLayer output\r\n\r\ndiff pt-tf stage[0]/layer[0]: 1.77919864654541e-05\r\n\r\ndiff pt-tf stage[1]/layer[0]: 2.4199485778808594e-05\r\ndiff pt-tf stage[1]/layer[1]: 3.9249658584594727e-05\r\n\r\ndiff pt-tf stage[2]/layer[0]: 3.0934810638427734e-05\r\ndiff pt-tf stage[2]/layer[1]: 0.000102996826171875\r\ndiff pt-tf stage[2]/layer[2]: 0.0004825592041015625\r\ndiff pt-tf stage[2]/layer[3]: 0.0009307861328125\r\ndiff pt-tf stage[2]/layer[4]: 0.001621246337890625\r\ndiff pt-tf stage[2]/layer[5]: 0.0032196044921875\r\ndiff pt-tf stage[2]/layer[6]: 0.0064239501953125\r\ndiff pt-tf stage[2]/layer[7]: 0.0091705322265625\r\ndiff pt-tf stage[2]/layer[8]: 0.012481689453125\r\ndiff pt-tf stage[2]/layer[9]: 0.01226806640625\r\n\r\nHidden Differences:\r\nhidden_states[0]:1.77919864654541e-05\r\nhidden_states[1]:3.9249658584594727e-05\r\nhidden_states[2]:0.01226806640625\r\n\r\nOutput Differences:\r\nlogits:0.00011897087097167969\r\n```\r\nI can't seem to correct this issue. I was wondering if this was due to floating points operations.\r\nDo you have any advice ? π\r\n",
"@mattchurgin in these cases, a deep dive has to be done -- place a pair of `breakpoint()` in the layer where the problems start, one in each framework, and see which operation causes the divergence. Then, confirm that the TF operation/layer is parametrized correctly and, if it is, one has to dig even deeper :D ",
"Hello @gante, sorry for the late response.\r\nI've done a deep dive into both frameworks. It seems that the Batch Normalization is responsible for the divergence. The 2 residual connections further increase the divergence throughout the model. However, I have parameterized `tf.keras.layers.BatchNormalization` accordingly to the default parameters of pytorch (`epsilon=1e-5` and `momentum=0.1`). I have also set both models in inference mode when testing.\r\n\r\nIs this divergence due to the **momentum** definition of Batch Normalization being different in tensorflow than in pytorch ?\r\n\r\nWhen removing the Batch Normalization layers from both frameworks, the difference in the output tensors and the hidden states is greatly reduced. I get a `max_crossload_output_diff` of ~e-6 and a `max_crossload_hidden_diff` of ~e-4 for all Cvt models. However, the `max_crossload_hidden_diff` is still higher than 5e-5 (I have ~e-4). The 2 residual connections are responsible for this difference.\r\n\r\nI'm a bit confused. Therefore I've inspected the ViT model (`google/vit-base-patch16-224`) which also has 2 residual connections. There is also a divergence in the hidden states between the tensorflow implementation and the pytorch implemention. This difference also increases throughout the layers (with the residual connections), until it reaches a `max_crossload_hidden_diff` of ~2e-2 at layer 12.\r\n\r\nIs this behaviour normal/acceptable ?",
"@mathieujouffroy That's a great in-depth exploration! \r\n\r\nPreviously we didn't have these checks in place, so it is possible that issues like the one you're seeing slipped through the cracks. It's not positive at all to have such large mismatches (it implies that TF users will have a poorer experience). I've had in my plans to go back and double-check the most popular models with the recently introduced checks, and you've just raised the priority of the task with your message :)\r\n\r\nI think @amyeroberts has seen similar TF/PT mismatches due to the Batch Normalization layer. @amyeroberts do you mind pitching in?",
"@mathieujouffroy Thanks for all the work digging into this π΅οΈ \r\n\r\nAs `momentum` is set for both the pytorch and TF models, I believe their behaviour (outputs and moving stats updates) _should_ be the same during both inference and training, given the same weights and params. \r\n\r\n@gante @mathieujouffroy Yes, I had similar issues with the TF ResNet port ([a weights PR for reference](https://huggingface.co/microsoft/resnet-152/discussions/1)). Like this model, the batch norm layer introduced differences which then got amplified through the forward pass. @ydshieh did some excellent detective work, and found that matching all of the parameters and inputs to produce an equivalent TF and PyTorch layer would still produce outputs with a difference on the order of `1e-7` (enough to start causing problems π) \r\n\r\nUltimately, we decided to add the weights as the difference between the logits was small ~1e-5. I think the ~1e-4 absolute differences in this case are acceptable for adding the weights. @sgugger Is this OK? ",
"Yes, as long as it stays in the range of 1e-4, we can accept the difference between frameworks.",
"Thank you for pitching in @amyeroberts :D\r\n\r\n@mathieujouffroy feel free to use `--max-error 1e-4` (or slightly higher) in the `pt-to-tf` CLI to ignore those errors and push the weights!",
"@gante @amyeroberts you're welcome and thanks a lot for your feedbacks π !\r\n\r\n@amyeroberts It seems that for the batch normalization, the update rule for the running statistics is slightly different in Tensorflow compared to Pytorch:\r\nPT -> `running_mean = (1 - momentum) * running_mean + momentum * new_value`\r\nTF -> `running_mean = momentum * running_mean + (1 - momentum) * new_value`\r\nTherefore, I think I made a mistake in setting the momentum to 0.1 in TF Batch Norm. Considering the update rules, shouldn't the momentum be set to 0.9 (default) in TF when it is set to 0.1 (default) in PT ? \r\nHowever, even though I change the momentum, I still have the same difference in my outputs π.\r\n\r\n@gante Ok thanks. Although, just as a reminder, when I keep the Batch Normalization layers, I have a `max_crossload_output_diff` of ~1e4 and a `max_crossload_hidden_diff` of ~2e-2 for all CvT models except the `cvt-21-384-22k`. The `cvt-21-384-22k` has a `max_crossload_output_diff` of ~4e4 and a `max_crossload_hidden_diff` of ~1e-1.\r\nTherefore, should I use `--max-error 2e-2` for all CvT models and `--max-error 1e-1` for `cvt-21-384-22k` ? \r\n\r\nI'll also be more than happy to help if you need any assistance regarding the mismatches between PT and TF (I'm a bit intrigued) π",
"Hello @gante @amyeroberts, as pointed out in this [PR](https://github.com/huggingface/transformers/pull/19341), the dense layer weights for PT should be initialized with `nn.init.trunc_normal_` instead of `normal_` as in the original implementation of the Cvt model (which uses `trunc_normal_` from `timm` library). In TF `get_initializer` already returns `tf.keras.initializers.TruncatedNormal`.\r\nAlso, following the original implementation, in both frameworks the `cls_token` should be initialized with `trunc_normal` as with the `config.initializer_range` (here 0.02).\r\n\r\nShould I add the modifications regarding the `momentum` (setting it to 0.9 in TF) and the use of `trunc_normal` ? ",
"@mathieujouffroy regarding initialization yeah, update it if possible :) In any case, it has come to our attention that TF initialization is very wrong in some places, and we will be adding tests and updates in the coming weeks!\r\n\r\nRegarding momentum, I will defer the decision to @amyeroberts, who has been working more closely with that layer.",
"@mathieujouffroy Thanks for the update. \r\n\r\nRegarding the initialisation, could you update the PyTorch model in a separate PR and do similar thing as [suggested here](https://github.com/huggingface/transformers/pull/19341#pullrequestreview-1131874527) - naming the PR with a π¨π¨π¨ prefix so we can easily spot and flag in the release notes. \r\n\r\nFor momentum in the batch norm layers, yes please use the (1 - pytorch momentum) value :) ",
"@gante @amyeroberts Okay thanks, I will update the changes concerning the TF model (cls_token initialization & momentum) in this PR and will create a new PR for the Pytorch model π.\r\n\r\nWhat should I do regarding the use of `pt-to-tf` (and `--max-error`) to add the weights ?\r\n@gante should I wait for the future tests and updates you were mentioning regarding the TF initialization ?\r\nOr should I add the weights with the `--max-error` I mentioned in the comment above ? \r\nThanks !",
"@mathieujouffroy the initialization shouldn't change the mismatch between the two frameworks for pre-trained models -- I'd recommend going forward with `--max-error` πͺ ",
"Hi @amyeroberts, thanks for your mention !! I've added the [PR](https://github.com/huggingface/transformers/pull/19486) regarding the pytorch model. \r\n\r\n@gante following your recommendation I've added the weights on the hub π\r\nAs @amyeroberts had pointed out, I'll need to remove `from_pt` in the testing file once the weights are added. ",
"@mathieujouffroy weights merged π ",
"> @mathieujouffroy weights merged π\r\n\r\nCool thanks @gante π !\r\nI'll update the testing file & run the slow tests locally . ",
"@mathieujouffroy off-topic: are you working with `transformers` as part of Γcole 42? I've been at the school once (like 5+ years ago) and I had a friend who participated -- I really liked the concept!",
"> @mathieujouffroy off-topic: are you working with `transformers` as part of Γcole 42? I've been at the school once (like 5+ years ago) and I had a friend who participated -- I really liked the concept!\r\n\r\n@gante Yess I was working with `transformers` π€ on my last project (computer vision) at Γcole 42. The project was in partnership with Hectar, an agricultural campus. I was pretty excited to try out the vision transformers π.\r\nI've also used `transformers` at 42 for my NLP projects and in my internship (in NLP). \r\nI think 42 is a very good training (I've just finished) π : project-based & peer to peer pedagogy ! ",
"All seems ready, merging as soon as CI turns green.\r\n\r\n@mathieujouffroy on behalf of TF users, thank you for making the ecosystem richer π§‘ ",
"@gante @amyeroberts thanks a lot for your help and feedbacks !! π\r\nIt was really interesting and cool to do this PR (1st in an open source project) and to get it merge π"
] | 1,660
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a TensorFlow implementation of the CvT model.
This includes the base model and the model with an image classification head on top.
## TODO
- [x] Write the fundamental components (Convolutional Token Embeddings & Convolutional Transformer Block)
- [x] Write base model & image classification model
- [x] Modify related utilities
- [x] Write relevant tests (in test suite)
- [x] Preview Tensorflow documentation for Cvt
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
### Questions
- In the configuration file of the CVT model, ```layer_norm_eps``` is initialized to ```1e-12```.
However, it seems that in the original implementation the authors use ```epsilon=1e-5```.
Moreover, the PyTorch Cvt model (HuggingFace) does not seem to use the configured ```layer_norm_eps=1e-12``` for layer normalization throughout the model, instead using the default ```epsilon=1e-5```.
What is the use of ```layer_norm_eps``` in the configuration file (of the Cvt model)?
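For reference, a minimal sketch of the discrepancy described above (it only inspects the stock config; the comment restates the observation from this question, not verified behavior):

```python
from transformers import CvtConfig

config = CvtConfig()
# The config declares 1e-12, yet the modeling code appears to build
# nn.LayerNorm with its default eps=1e-5, leaving this value unused.
print(config.layer_norm_eps)  # 1e-12
```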
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18597/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18597",
"html_url": "https://github.com/huggingface/transformers/pull/18597",
"diff_url": "https://github.com/huggingface/transformers/pull/18597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18597.patch",
"merged_at": 1665508612000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18596
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18596/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18596/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18596/events
|
https://github.com/huggingface/transformers/pull/18596
| 1,336,772,475
|
PR_kwDOCUB6oc49EaHs
| 18,596
|
FSDP bug fix for `load_state_dict`
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
Workaround for https://github.com/pytorch/pytorch/issues/82963
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18596/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18596",
"html_url": "https://github.com/huggingface/transformers/pull/18596",
"diff_url": "https://github.com/huggingface/transformers/pull/18596.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18596.patch",
"merged_at": 1660308517000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18595
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18595/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18595/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18595/events
|
https://github.com/huggingface/transformers/pull/18595
| 1,336,715,652
|
PR_kwDOCUB6oc49EN7b
| 18,595
|
oob performance improvement for cpu DDP
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yao-matrix @sgugger @liangan1 please help review it",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @sywangyi, thanks for your PR! Sylvain is currently off for a few weeks, we'll merge this PR once he's back.\r\n\r\nThanks for your contribution!"
] | 1,660
| 1,666
| 1,661
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
This PR is an out-of-the-box performance improvement for CPU DDP: currently, if neither OMP_NUM_THREADS nor MKL_NUM_THREADS is set, num_cpu_threads_per_process defaults to 1, which gives very slow performance.
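As an illustration only (the process count below is a placeholder, not part of this PR's diff), a minimal sketch of setting the thread count by hand:

```python
import multiprocessing
import os

# Without a sensible default, each CPU DDP worker may run single-threaded.
# Splitting the available cores across the workers is the usual workaround.
num_processes = 2  # placeholder: number of DDP processes launched on this node
os.environ.setdefault("OMP_NUM_THREADS", str(max(1, multiprocessing.cpu_count() // num_processes)))
```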
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18595/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18595",
"html_url": "https://github.com/huggingface/transformers/pull/18595",
"diff_url": "https://github.com/huggingface/transformers/pull/18595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18595.patch",
"merged_at": 1661949311000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18594
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18594/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18594/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18594/events
|
https://github.com/huggingface/transformers/pull/18594
| 1,336,693,892
|
PR_kwDOCUB6oc49EJVx
| 18,594
|
typos
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
a few small typo fixes.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18594/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18594",
"html_url": "https://github.com/huggingface/transformers/pull/18594",
"diff_url": "https://github.com/huggingface/transformers/pull/18594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18594.patch",
"merged_at": 1660308053000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18593
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18593/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18593/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18593/events
|
https://github.com/huggingface/transformers/pull/18593
| 1,336,691,041
|
PR_kwDOCUB6oc49EIwc
| 18,593
|
[bloom] convert script tweaks
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"sorry, forgot to follow up here.\r\n\r\nIt was just a small model that I created on the fly.\r\n\r\nthat assert is not user-friendly as it just fails w/o telling which keys are unexpected. if it were to tell which keys are unexpected I would be able to answer your question",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@younesbelkada, this is still an issue - we are training a few variations of bloom for m4 https://github.com/huggingface/m4/blob/text_pretraining/experiments/pretraining/text_pretraining/narrow_gpt.slurm and the conversion fails in 2 asserts:\r\n\r\n```\r\n File \"src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py\", line 194, in convert_bloom_checkpoint_to_pytorch\r\n assert not other_keys.unexpected_keys\r\nAssertionError\r\n```\r\nand if the above removed, then next in:\r\n```\r\n File \"src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py\", line 200, in convert_bloom_checkpoint_to_pytorch\r\n assert not missing_keys\r\nAssertionError\r\n```\r\n\r\nit's fine then at converting. so besides my PR a 2nd assert is an issue as well.\r\n\r\nThank you!\r\n\r\nthe failing command line is:\r\n```\r\npython src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py \\\r\n--bloom_checkpoint_path $ajs_ALL_CCFRSCRATCH/m4_text_pretraining/narrow_gpt/checkpoints/main/global_step66000 \\\r\n--pytorch_dump_folder_path $ajs_ALL_CCFRSCRATCH/m4_text_pretraining/narrow_gpt/checkpoints/hf/narrow-gpt-66000 \\\r\n--pretraining_tp 1 \\\r\n--bloom_config_file /gpfsdswork/projects/rech/cnw/commun/experiments/stas/m4/experiments/pretraining/text_pretraining/narrow_config.json\r\n```\r\n\r\ncc for awareness: @TevenLeScao ",
"grr, my apologies, @younesbelkada - it proved to be a misinformation - the pretrained model was actually meg-ds gpt2 model and not bloom :( sorry about that. \r\n\r\nbut yes, your suggestion of printing out the unexpected keys rather than how it was before sounds great to me\r\n\r\nLet's do that for both asserts?\r\n\r\n",
"for posterity since we don't have an official script converting from Megatron-Deepspeed's gpt2 code I ended up using this [Megatron-Deepspeed conversion script](https://github.com/bigscience-workshop/bigscience/tree/aa872e754106f6678e8a9dac8c6962404ba39a6d/train/tr1-13B-base#checkpoint-conversion-and-upload) we wrote when developing [pre-bloom tr1-13b-en model](https://github.com/bigscience-workshop/bigscience/tree/aa872e754106f6678e8a9dac8c6962404ba39a6d/train/tr1-13B-base) and then using HF's GPT2 modeling code to generate with it. It's not perfect as GPT2 != gpt2 in Meg-DS - 3 differences are https://github.com/bigscience-workshop/Megatron-DeepSpeed/issues/138 but it more or less works.\r\n\r\nconversion:\r\n```\r\ncd Megatron-DeepSpeed\r\nPYTHONPATH=. $six_ALL_CCFRWORK/code/Megatron-DeepSpeed/tools/convert_checkpoint/deepspeed_to_transformers.py \\\r\n--input_folder checkpoints/main/global_step112000 \\\r\n--output_folder checkpoints/hf/shallow-gpt-112000\r\n```\r\nvalidate the conversion produced a usable model:\r\n```\r\npython -c '\\\r\nimport sys; \\\r\nmname = sys.argv[1]; \\\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM; \\\r\ntokenizer = AutoTokenizer.from_pretrained(mname); \\\r\ntokenizer.add_special_tokens({\"pad_token\": tokenizer.eos_token}); \\\r\nmodel = AutoModelForCausalLM.from_pretrained(mname); \\\r\ninputs = [\"Hello, my dog is cute\"]; \\\r\ninput_tokens = tokenizer.batch_encode_plus(inputs, return_tensors=\"pt\", padding=True)\r\noutputs = model.generate(**input_tokens, do_sample=False); \\\r\noutputs = tokenizer.batch_decode(outputs, skip_special_tokens=True); \\\r\nprint(outputs); \\\r\n' $ajs_ALL_CCFRSCRATCH/m4_text_pretraining/shallow_gpt/checkpoints/hf/shallow-gpt-112000\r\n\r\n['Hello, my dog is cute.\" \"I\\'m sorry, I\\'m not allowed to say that.\"']\r\n```\r\nso we know it works.",
"Ahh I see thanks a lot for the clarification! \r\nYes I think we should update the asserts ;) ! Also I think it might be useful to have a script to convert meg-ds gpt2 models to HF format, where do you think we should put this file? Or maybe just adding an arg `convert-gpt2` on the current file would work too, your call! ",
"I updated the PR to improve the 2nd assert, so I think it's good to merge now.\r\n\r\n> Also I think it might be useful to have a script to convert meg-ds gpt2 models to HF format, where do you think we should put this file?\r\n\r\nI'm not sure since it depends on `Megatron-Deepsped`'s internal files: https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/tools/convert_checkpoint and it's specific to this fork of Meg-DS (or rather nobody maintains the parent fork that is under MSFT)\r\n\r\nPerhaps we add CONVERSION.md under https://github.com/huggingface/transformers/tree/main/src/transformers/models/gpt2 and show how to convert meg-ds models?",
"There is also https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2 but that one can only handle Megatron-LM generated checkpoints (not Megatron-Deepspeed ones).\r\n",
"> There is also https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2 but that one can only handle Megatron-LM generated checkpoints (not Megatron-Deepspeed ones).\r\n\r\nThat one folder would fit best I think, happy to open a PR there, let me know! ",
"I pushed the new doc here. \r\n\r\nDo you think we want a separate PR for that? can move it there if preferable. in a way it summarizes all the discussions of this PR.\r\n\r\nwhatever works.",
"this is perfectly fine for me thanks a lot @stas00 !\nGently pinging @sgugger for a final review / approval \nThank you!"
] | 1,660
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
@younesbelkada, could you please have a look
1. When creating a small model for testing, the assert for unexpected keys breaks the conversion. I think we should either not assert or perhaps warn instead (see the sketch below)?
2. Also, don't try to set the dtype if it's `None`.
Actually, for the latter: if `torch_dtype` is not defined, shouldn't we try to derive the dtype from the Meg-DS checkpoint? Currently it'll just create fp32 weights, ignoring the actual dtype of the weights.
Thank you
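A minimal sketch of the friendlier check suggested in point 1 (the helper name is an assumption, not the script's actual code):

```python
import torch


def load_checked(model: torch.nn.Module, state_dict: dict) -> None:
    # load_state_dict(strict=False) reports both key sets, so the error can
    # name exactly what mismatched instead of a bare assert failing opaquely.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    if unexpected:
        raise ValueError(f"Unexpected keys in checkpoint: {unexpected}")
    if missing:
        raise ValueError(f"Keys missing from checkpoint: {missing}")
```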
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18593/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18593/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18593",
"html_url": "https://github.com/huggingface/transformers/pull/18593",
"diff_url": "https://github.com/huggingface/transformers/pull/18593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18593.patch",
"merged_at": 1669162184000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18592
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18592/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18592/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18592/events
|
https://github.com/huggingface/transformers/pull/18592
| 1,336,686,734
|
PR_kwDOCUB6oc49EH23
| 18,592
|
[fsmt] deal with -100 indices in decoder ids
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
Fixes: https://github.com/huggingface/transformers/issues/17945
Decoder input ids get the default ignore index -100, which breaks the model; like t5 and many other models, this PR adds a hardcoded fix to replace -100 with the correct pad index.
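For context, a minimal sketch of that replacement pattern (mirroring the t5-style fix, not the exact FSMT diff):

```python
import torch


def sanitize_decoder_input_ids(decoder_input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # -100 is the loss-ignore index and must never reach the embedding layer;
    # swap it for the pad token before the decoder consumes the ids.
    return decoder_input_ids.masked_fill(decoder_input_ids == -100, pad_token_id)
```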
For some reason this use case hadn't been exercised with this model until recently, so it seems the issue has been there since the beginning.
Any suggestions on how to add a simple test here? Or perhaps we have something similar already? The user's script is quite massive. I think it's the Trainer's collator that leads to that padding, since it uses -100.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18592/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18592",
"html_url": "https://github.com/huggingface/transformers/pull/18592",
"diff_url": "https://github.com/huggingface/transformers/pull/18592.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18592.patch",
"merged_at": 1660326652000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18591
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18591/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18591/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18591/events
|
https://github.com/huggingface/transformers/pull/18591
| 1,336,673,407
|
PR_kwDOCUB6oc49EFK0
| 18,591
|
[doc] fix anchors
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
The manual anchors end up being duplicated by the automatically added anchors and no longer work.
Examples:
https://huggingface.co/docs/transformers/v4.21.1/en/glossary#input-ids
https://huggingface.co/docs/transformers/v4.21.1/en/glossary#position-ids
I confirmed that this fix works via the auto-generated docs link.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18591/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18591",
"html_url": "https://github.com/huggingface/transformers/pull/18591",
"diff_url": "https://github.com/huggingface/transformers/pull/18591.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18591.patch",
"merged_at": 1660326599000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18590
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18590/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18590/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18590/events
|
https://github.com/huggingface/transformers/pull/18590
| 1,336,657,922
|
PR_kwDOCUB6oc49EB8C
| 18,590
|
Update docs landing page
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,662
| 1,662
|
MEMBER
| null |
This PR updates the docs landing page to better describe what `transformers` is, what it offers, and briefly introduce users to its design. I think this gives a clearer picture of `transformers` and is more impactful than listing all the different tasks supported. Let me know what you think!
There's also a minor issue with the image for custom support. Nils is no longer with us, so we may want to update this image with another member of the team. No big deal though :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18590/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18590",
"html_url": "https://github.com/huggingface/transformers/pull/18590",
"diff_url": "https://github.com/huggingface/transformers/pull/18590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18590.patch",
"merged_at": 1662146946000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18589
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18589/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18589/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18589/events
|
https://github.com/huggingface/transformers/issues/18589
| 1,336,600,880
|
I_kwDOCUB6oc5Pqukw
| 18,589
|
Cannot get WER during WavVec2 fine-tuning
|
{
"login": "changyeli",
"id": 9058204,
"node_id": "MDQ6VXNlcjkwNTgyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changyeli",
"html_url": "https://github.com/changyeli",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changyeli/subscriptions",
"organizations_url": "https://api.github.com/users/changyeli/orgs",
"repos_url": "https://api.github.com/users/changyeli/repos",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"received_events_url": "https://api.github.com/users/changyeli/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,660
| 1,660
| 1,660
|
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten, @anton-l
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I followed the [blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) and revisited one of my old projects, but I couldn't get the WER during fine-tuning. Surprisingly, it worked perfectly earlier this year. I did get the following evaluation metrics during the first round:
```
{'eval_loss': -0.26198065280914307, 'eval_runtime': 41.4222, 'eval_samples_per_second': 25.904, 'eval_steps_per_second': 6.494, 'epoch': 0.27}
```
As you can see, there was no `eval_wer` in this entry. I tried the following and am still not seeing `eval_wer`:
```python
import numpy as np
from datasets import load_metric

# note: `processor` and `trainer` below come from the blog's earlier setup


def compute_metrics(pred):
    """
    Batchify and compute the WER metric.

    :param pred: an EvalPrediction holding `predictions` and `label_ids`
    :return: a dict mapping "wer" to the computed score
    """
    wer_metric = load_metric("wer")
    pred_logits = pred.predictions  # changing to pred.logits did not help
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    # we do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}


for batch in trainer.get_eval_dataloader():
    print(batch.keys())
    # returns dict_keys(['input_values', 'labels'])
    batch = {k: v.to("cuda") for k, v in batch.items()}
    print(trainer.evaluate())
    # returns {'eval_loss': 0.29788103699684143, 'eval_runtime': 44.4312, 'eval_samples_per_second': 24.15, 'eval_steps_per_second': 6.054}
    break
```
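For reference, a minimal sketch of the wiring that surfaces `eval_wer` (names such as `model`, `training_args`, and the datasets follow the blog's setup and are assumptions here):

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,  # without this, only eval_loss/runtime show up
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
    tokenizer=processor.feature_extractor,
)
# Keys returned by compute_metrics are reported with an "eval_" prefix, e.g. eval_wer.
```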
Any suggestions? Thanks!
### Expected behavior
Should return `eval_wer` in the evaluation step.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18589/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18588
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18588/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18588/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18588/events
|
https://github.com/huggingface/transformers/pull/18588
| 1,336,369,385
|
PR_kwDOCUB6oc49DFKc
| 18,588
|
Adds OWLViT to models exportable with ONNX
|
{
"login": "unography",
"id": 5240449,
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unography",
"html_url": "https://github.com/unography",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"organizations_url": "https://api.github.com/users/unography/orgs",
"repos_url": "https://api.github.com/users/unography/repos",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"received_events_url": "https://api.github.com/users/unography/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@unography That's strange that it does not work for object detection. It should actually work, DETR and YOLOS are exportable to ONNX for instance (see [here](https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/onnx/features.py#L262)). What is the error you get when trying to export the model for object detection?",
"@regisss I think it just needs to be defined in the config for AutoModel, for Object detection [here](https://github.com/huggingface/transformers/blob/1ccd2515ed6d7da4ec46fe94aedbd8a86a2cde8e/src/transformers/models/auto/modeling_auto.py#L443)\r\n\r\n\r\nThis is the stacktrace - \r\n\r\n```bash\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ncls = <class 'transformers.models.auto.modeling_auto.AutoModelForObjectDetection'>\r\nconfig = OwlViTConfig {\r\n \"_commit_hash\": \"7cc55348dae46396474cd94bf00a542167a10f8d\",\r\n \"_name_or_path\": \"google/owlvit-base-pa...nsformers_version\": \"4.22.0.dev0\",\r\n \"typical_p\": 1.0,\r\n \"use_bfloat16\": false\r\n },\r\n \"vision_config_dict\": null\r\n}\r\n\r\nkwargs = {}, trust_remote_code = False\r\n\r\n @classmethod\r\n def from_config(cls, config, **kwargs):\r\n trust_remote_code = kwargs.pop(\"trust_remote_code\", False)\r\n if hasattr(config, \"auto_map\") and cls.__name__ in config.auto_map:\r\n if not trust_remote_code:\r\n raise ValueError(\r\n \"Loading this model requires you to execute the modeling file in that repo \"\r\n \"on your local machine. Make sure you have read the code there to avoid malicious use, then set \"\r\n \"the option `trust_remote_code=True` to remove this error.\"\r\n )\r\n if kwargs.get(\"revision\", None) is None:\r\n logger.warning(\r\n \"Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure \"\r\n \"no malicious code has been contributed in a newer revision.\"\r\n )\r\n class_ref = config.auto_map[cls.__name__]\r\n module_file, class_name = class_ref.split(\".\")\r\n model_class = get_class_from_dynamic_module(config.name_or_path, module_file + \".py\", class_name, **kwargs)\r\n return model_class._from_config(config, **kwargs)\r\n elif type(config) in cls._model_mapping.keys():\r\n model_class = _get_model_class(config, cls._model_mapping)\r\n return model_class._from_config(config, **kwargs)\r\n \r\n> raise ValueError(\r\n f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\\n\"\r\n f\"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}.\"\r\n )\r\nE ValueError: Unrecognized configuration class <class 'transformers.models.owlvit.configuration_owlvit.OwlViTConfig'> for this kind of AutoModel: AutoModelForObjectDetection.\r\nE Model type should be one of DetrConfig, YolosConfig.\r\n\r\nsrc/transformers/models/auto/auto_factory.py:412: ValueError\r\n```",
"Hi @unography and @regisss! OWL-ViT is not a part of the object detection pipeline because it requires both image and search queries as input. \r\n\r\nWe are planning to add a zero-shot-object-detection pipeline for OWL-ViT (see this [issue](https://github.com/huggingface/transformers/issues/18445)).\r\n\r\ncc @sgugger @NielsRogge ",
"Thanks for the information @alaradirik :)\r\n\r\n@unography Let's keep only the default pipeline as you did then. I had to change one `.T` for `.t()` in `modeling_owlvit.py` to make the test pass, as in the PR of CLIP :laughing: Could you please change this?",
"Pinging @sgugger for final approval",
"@regisss ya sorry i missed the `.T` issue, i was testing on the nightly pytorch. should be fixed now",
"Hey @lewtun, would you like to have a look at this and merge if it looks good to you?",
"@lewtun Can you take a quick look at this PR and merge it when you approve? :slightly_smiling_face: "
] | 1,660
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
Output for tests on my local machine:
```bash
(transformers) β transformers git:(owlvit_onnx) β RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -v -k "owlvit" --full-trace
================================================================== test session starts ===================================================================
platform darwin -- Python 3.8.12, pytest-7.1.2, pluggy-1.0.0 -- /Users/dhruv/Documents/code/transformers/.venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/dhruv/Documents/code/transformers, configfile: setup.cfg
collected 410 items / 408 deselected / 2 selected
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default PASSED [ 50%]
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default PASSED [100%]
==================================================================== warnings summary ====================================================================
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/image_utils.py:223: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None):
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/feature_extraction_owlvit.py:80: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
resample=Image.BICUBIC,
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:272: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:312: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:709: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
mask.fill_(torch.tensor(float("-inf")))
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:280: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default
/Users/dhruv/Documents/code/transformers/src/transformers/models/owlvit/modeling_owlvit.py:289: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_101_owlvit_default
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_101_owlvit_default
/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
==================================================== 2 passed, 408 deselected, 14 warnings in 44.45s =====================================================
```
Note: I haven't tested this on GPU yet; I don't have a GPU machine with me currently.
Also, this is for the `default` task of OWLViT. The `object-detection` task isn't supported by AutoModel yet, so it fails if I add that feature to the ONNX export. Should I make the change for AutoModel as well?
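A minimal repro of that limitation (hedged; it simply re-triggers the ValueError discussed in this thread):

```python
from transformers import AutoModelForObjectDetection, OwlViTConfig

# OwlViTConfig is not in the object-detection mapping, so this raises
# ValueError: Model type should be one of DetrConfig, YolosConfig.
model = AutoModelForObjectDetection.from_config(OwlViTConfig())
```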
cc: @ChainYo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18588/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18588/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18588",
"html_url": "https://github.com/huggingface/transformers/pull/18588",
"diff_url": "https://github.com/huggingface/transformers/pull/18588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18588.patch",
"merged_at": 1661862660000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18587
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18587/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18587/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18587/events
|
https://github.com/huggingface/transformers/pull/18587
| 1,336,174,452
|
PR_kwDOCUB6oc49Cc7n
| 18,587
|
Fix Data2VecVision ONNX test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"With `nn.AdaptiveAvgPool2d` with `output_size` > 1, we get error\r\n\r\n```bash\r\nCurrent thread 0x00007f5bb924e740 (most recent call first):\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_io/saferepr.py\", line 71 in repr_instance\r\n File \"/usr/lib/python3.9/reprlib.py\", line 62 in repr1\r\n File \"/usr/lib/python3.9/reprlib.py\", line 71 in <listcomp>\r\n File \"/usr/lib/python3.9/reprlib.py\", line 71 in _repr_iterable\r\n File \"/usr/lib/python3.9/reprlib.py\", line 78 in repr_tuple\r\n File \"/usr/lib/python3.9/reprlib.py\", line 60 in repr1\r\n File \"/usr/lib/python3.9/reprlib.py\", line 52 in repr\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_io/saferepr.py\", line 60 in repr\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_io/saferepr.py\", line 107 in saferepr\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py\", line 727 in repr_args\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py\", line 817 in repr_traceback_entry\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py\", line 867 in repr_traceback\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py\", line 926 in repr_excinfo\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/_code/code.py\", line 666 in getrepr\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/nodes.py\", line 475 in _repr_failure_py\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/python.py\", line 1795 in repr_failure\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/reports.py\", line 345 in from_item_and_call\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py\", line 365 in pytest_runtest_makereport\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py\", line 39 in _multicall\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py\", line 80 in _hookexec\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265 in __call__\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py\", line 221 in call_and_report\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py\", line 130 in runtestprotocol\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/runner.py\", line 111 in pytest_runtest_protocol\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py\", line 39 in _multicall\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py\", line 80 in _hookexec\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265 in __call__\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py\", line 347 in pytest_runtestloop\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py\", line 39 in _multicall\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py\", line 80 in _hookexec\r\n File 
\"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265 in __call__\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py\", line 322 in _main\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py\", line 268 in wrap_session\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/main.py\", line 315 in pytest_cmdline_main\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_callers.py\", line 39 in _multicall\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_manager.py\", line 80 in _hookexec\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265 in __call__\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 164 in main\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 187 in console_main\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/pytest/__main__.py\", line 5 in <module>\r\n File \"/usr/lib/python3.9/runpy.py\", line 87 in _run_code\r\n File \"/usr/lib/python3.9/runpy.py\", line 197 in _run_module_as_main\r\nSegmentation fault\r\n```",
"Thanks @lewtun for the review. Totally fine for me to remove `semantic-segmentation` as a supported feature. I will clean this PR a bit and pin you for final review then.",
"The failed tests are irrelevant. @lewtun it's ready for you to take a final look π !\r\n\r\n```bash\r\nFAILED tests/models/bigbird_pegasus/test_modeling_bigbird_pegasus.py::BigBirdPegasusStandaloneDecoderModelTest::test_sample_generate\r\nFAILED tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py::XLMRobertaXLModelTest::test_sample_generate_dict_output\r\n```",
"Now we have a different generation test failing:\r\n\r\n```\r\nFAILED tests/models/blenderbot/test_modeling_blenderbot.py::BlenderbotStandaloneDecoderModelTest::test_sample_generate\r\nFAILED tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py::XLMRobertaXLModelTest::test_sample_generate\r\n```\r\n\r\nSince this is unrelated to the current PR, is it OK to merge?",
"Hi @lewtun . Thank you for running. It is good to merge. But let me rebase and re-run it, as @gante fixed the issue in #18696 merged into `main`.\r\n\r\nI will take care of the merge when everything is fine. Thanks again for the review."
] | 1,660
| 1,661
| 1,661
|
COLLABORATOR
| null |
# What does this PR do?
Fixes an issue from #18427. In short, `Data2VecVision` is a model for semantic segmentation, so it cannot be loaded with `AutoModelForImageSegmentation`.
Current CI test failure:
```bash
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_048_data2vec_vision_image_segmentation
(line 412) ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation.
tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_048_data2vec_vision_image_segmentation
(line 412) ValueError: Unrecognized configuration class <class 'transformers.models.data2vec.configuration_data2vec_vision.Data2VecVisionConfig'> for this kind of AutoModel: AutoModelForImageSegmentation.
```
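For contrast, a minimal sketch of the auto class that does accept this config (the checkpoint id is an assumption):

```python
from transformers import AutoModelForSemanticSegmentation

# Data2VecVision maps to the semantic-segmentation auto class, which is why
# the ONNX feature is tested under that task instead.
model = AutoModelForSemanticSegmentation.from_pretrained("facebook/data2vec-vision-base")
```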
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18587/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18587",
"html_url": "https://github.com/huggingface/transformers/pull/18587",
"diff_url": "https://github.com/huggingface/transformers/pull/18587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18587.patch",
"merged_at": 1661160504000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18586
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18586/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18586/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18586/events
|
https://github.com/huggingface/transformers/pull/18586
| 1,336,169,114
|
PR_kwDOCUB6oc49CbyV
| 18,586
|
fix owlvit tests, update docstring examples
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,665
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
- Fixes the `OwlViTModelIntegrationTest` failures due to the recently merged [PR](https://github.com/huggingface/transformers/pull/18573) that fixed a resizing bug in `OwlViTFeatureExtractor`
- Updates the outputs shown in OwlViT docstring examples
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18586/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18586",
"html_url": "https://github.com/huggingface/transformers/pull/18586",
"diff_url": "https://github.com/huggingface/transformers/pull/18586.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18586.patch",
"merged_at": 1660234225000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18585
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18585/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18585/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18585/events
|
https://github.com/huggingface/transformers/pull/18585
| 1,336,139,005
|
PR_kwDOCUB6oc49CVQC
| 18,585
|
Fix failure on DeBERTa(base/v2/sew_d) fp16 training with ONNX Runtime
|
{
"login": "JingyaHuang",
"id": 44135271,
"node_id": "MDQ6VXNlcjQ0MTM1Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingyaHuang",
"html_url": "https://github.com/JingyaHuang",
"followers_url": "https://api.github.com/users/JingyaHuang/followers",
"following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}",
"gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions",
"organizations_url": "https://api.github.com/users/JingyaHuang/orgs",
"repos_url": "https://api.github.com/users/JingyaHuang/repos",
"events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingyaHuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,661
| 1,660
|
CONTRIBUTOR
| null |
## Context
It was reported in Optimum (https://github.com/huggingface/optimum/issues/305) that mixed-precision training on DeBERTa with `optimum.onnxruntime.ORTTrainer` is broken.
After investigation, the breakage comes from mismatched input dtypes for some MatMul nodes. In #18272, some sqrt results are cast to fp32, and they need to be cast back to fp16 before the MatMul ops; this PR corrects the dtype.
This PR also fixes the tracing of DeBERTa, which had not been fixed in #18272.
Fixes https://github.com/huggingface/optimum/issues/305
Fixes #18199
## Who can review?
@michaelbenayoun @LysandreJik @sgugger
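For illustration, a minimal sketch of the dtype fix described above; function and tensor names are hypothetical, not the actual DeBERTa code:
```python
import torch

# Illustrative sketch only: the scale is computed in fp32 for numerical
# stability, then cast back to the query's dtype (fp16 under mixed precision)
# so both MatMul operands share the same dtype under ONNX Runtime.
def scaled_attention_scores(query, key, scale_factor):
    scale = torch.sqrt(torch.tensor(scale_factor, dtype=torch.float32))
    # Re-cast before the matmul, otherwise an fp32 x fp16 MatMul node
    # fails during fp16 training.
    scale = scale.to(query.dtype)
    return torch.matmul(query, key.transpose(-1, -2)) / scale
```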
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18585/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18585",
"html_url": "https://github.com/huggingface/transformers/pull/18585",
"diff_url": "https://github.com/huggingface/transformers/pull/18585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18585.patch",
"merged_at": 1660744783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18584
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18584/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18584/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18584/events
|
https://github.com/huggingface/transformers/pull/18584
| 1,336,136,366
|
PR_kwDOCUB6oc49CUrQ
| 18,584
|
[bnb] Fix non passing trainer tests
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you all!"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
It fixes a small slow test that was not passing due to a very small typo made when designing the tests in https://github.com/huggingface/transformers/pull/15622
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18584/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18584/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18584",
"html_url": "https://github.com/huggingface/transformers/pull/18584",
"diff_url": "https://github.com/huggingface/transformers/pull/18584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18584.patch",
"merged_at": 1660327479000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18583
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18583/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18583/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18583/events
|
https://github.com/huggingface/transformers/pull/18583
| 1,336,103,291
|
PR_kwDOCUB6oc49CNer
| 18,583
|
Add checks for some workflow jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
We have encountered errors
```bash
stderr: nvidia-container-cli: initialization error: nvml error: driver/library version mismatch: unknown
```
(due to automatic driver updates) several times. In such cases, the reports failed to be sent to the Slack channels, and we were not aware of this issue on push CI for a few days.
This PR checks the setup job and also adds a check on the CI runners. If such jobs fail, it can still send a report containing some information.
**We should also disable the automatic updates (for some packages).**
**I will add the same check to the scheduled CI and past CI (if the changes are approved).**
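A hypothetical sketch of the fallback idea (the webhook helper and payload shape here are illustrative assumptions, not the actual workflow code):
```python
import json
import urllib.request

# If the setup job or a runner health check fails, still post a minimal
# Slack message instead of failing silently.
def send_fallback_report(webhook_url: str, failed_job: str) -> None:
    payload = {"text": f"CI setup failure: job '{failed_job}' failed before tests ran."}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```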
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18583/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18583",
"html_url": "https://github.com/huggingface/transformers/pull/18583",
"diff_url": "https://github.com/huggingface/transformers/pull/18583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18583.patch",
"merged_at": 1660650827000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18582
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18582/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18582/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18582/events
|
https://github.com/huggingface/transformers/issues/18582
| 1,336,056,449
|
I_kwDOCUB6oc5PopqB
| 18,582
|
Segformer, can't save checkpoint in saved_model format
|
{
"login": "joihn",
"id": 11663917,
"node_id": "MDQ6VXNlcjExNjYzOTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/11663917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joihn",
"html_url": "https://github.com/joihn",
"followers_url": "https://api.github.com/users/joihn/followers",
"following_url": "https://api.github.com/users/joihn/following{/other_user}",
"gists_url": "https://api.github.com/users/joihn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joihn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joihn/subscriptions",
"organizations_url": "https://api.github.com/users/joihn/orgs",
"repos_url": "https://api.github.com/users/joihn/repos",
"events_url": "https://api.github.com/users/joihn/events{/privacy}",
"received_events_url": "https://api.github.com/users/joihn/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Have you tried using `model.save_pretrained()`?\r\n\r\nCc: @amyeroberts ",
"`model.save_pretrained()` indeed works, thanks :)\r\n\r\nShall we close this issue, or is this model also supposed to be compatible with keras saving method? (their callback is widely used, https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint)",
"@joihn Glad to hear you were able to save with `save_pretrained` and thanks for responding so quickly @sayakpaul!\r\n\r\nI'll defer the issue to our TF gurus @Rocketknight1 @gante regarding compatibility with keras saving. ",
"I see two possible options for the time being:\r\n\r\n* Implement a custom callback to use `save_pretrained()`. Shouldn't differ too much from the `ModelCheckpoint` callback. \r\n* You can refer to [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb) that makes use of `PushToHubCallback()` and achieves a similar result as `ModelCheckpoint` barring some differences. ",
"I did some digging, this issue seems to appears when keras function [save_model] https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model) parmeters is `save_traces=True`.\r\n\r\nIt's worth noting that hugging face `saved_pretrained(filepath, saved_model=True)` also crash with the same error.\r\n\r\nalso related to https://github.com/huggingface/transformers/issues/13742\r\n",
"Hi @joihn - there are some general difficulties when saving Hugging Face models as SavedModel. This is a general issue with any model where the model and layers are implemented by subclassing in Keras - SavedModel doesn't really have a good way to completely save and load those models (although you can save one or more model traces through SavedModel, this isn't usually what people want unless they're trying to export to TFLite or something!)\r\n\r\nInstead, we recommend that users save weights only, and if they want to save the entire model, to use the `save_pretrained` method, which will save the weights along with a config that will make it loadable with the `load_pretrained` method.\r\n\r\nConcretely, this means doing the following things:\r\n\r\n1) When using `ModelCheckpoint`, set `save_weights_only` to `True`.\r\n2) Replace `model.save` with either `model.save_weights` or `model.save_pretrained`",
"Perfect, thanks for the info :) "
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
Thanks for this repo!
### Issue description:
1) Initialise your Segformer model
2) Try to save it.
### Error:
```
File "/home/maxime/anaconda3/envs/tf/lib/python3.9/site-packages/transformers/models/segformer/modeling_tf_segformer.py", line 547, in serving *
output = self.call(inputs)
File "/home/maxime/anaconda3/envs/tf/lib/python3.9/site-packages/transformers/models/segformer/modeling_tf_segformer.py", line 753, in call *
batch_size = shape_list(encoder_hidden_states[-1])[0]
KeyError: -1
```
### Minimal reproducing code:
```python
import os
import tensorflow as tf
from transformers import TFSegformerForSemanticSegmentation
model = TFSegformerForSemanticSegmentation.from_pretrained(
"nvidia/mit-b0",
num_labels=2,
id2label={1:"1", 2:"2"},
ignore_mismatched_sizes=True, # Will ensure the segmentation specific components are reinitialized.
)
model.summary(line_length=250)
tf.keras.models.save_model(
model, os.path.join("/tmp", model.name), include_optimizer=False
)
```
### Expected behavior
The checkpoint should save correctly
### System Info
Ubuntu 20.04, Python 3.9, TF 2.9.1, Nvidia Titan
### Who can help?
@sayakpaul @NielsRogge
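For reference, a minimal sketch of the workaround discussed in the comments above (weights-only Keras checkpoints, or the Hugging Face `save_pretrained` path); the file paths are illustrative:
```python
import tensorflow as tf
from transformers import TFSegformerForSemanticSegmentation

model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0")

# Option 1: during training, checkpoint only the weights (works with Keras).
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="/tmp/segformer_ckpt", save_weights_only=True
)

# Option 2: save the full model the Hugging Face way and reload it later.
model.save_pretrained("/tmp/segformer_saved")
reloaded = TFSegformerForSemanticSegmentation.from_pretrained("/tmp/segformer_saved")
```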
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18582/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18581
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18581/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18581/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18581/events
|
https://github.com/huggingface/transformers/pull/18581
| 1,336,045,659
|
PR_kwDOCUB6oc49CA7T
| 18,581
|
Fix docstrings with last version of hf-doc-builder styler
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
Everything is said in the title :-) Merging so everyone can safely use the new version of `hf-doc-builder`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18581/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18581",
"html_url": "https://github.com/huggingface/transformers/pull/18581",
"diff_url": "https://github.com/huggingface/transformers/pull/18581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18581.patch",
"merged_at": 1660228548000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18580
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18580/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18580/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18580/events
|
https://github.com/huggingface/transformers/pull/18580
| 1,335,968,975
|
PR_kwDOCUB6oc49BwXq
| 18,580
|
[FX] _generate_dummy_input supports audio-classification models for labels
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
MEMBER
| null |
For FX:
- Adds support for audio-classification models for label generation in `_generate_dummy_input`
- Adds a flag, `FX_DEBUG_MODE`, to control what is printed during tracing. This saves the user from seeing a lot of benign warnings, while still making them available during development.
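
A sketch of the general gating pattern only; whether `FX_DEBUG_MODE` is actually read from the environment (versus a module constant) is an assumption here, not the PR's implementation:
```python
import os
import warnings

# Assumption: the flag is exposed as an environment variable.
FX_DEBUG_MODE = os.environ.get("FX_DEBUG_MODE", "0") == "1"

def warn_if_debug(message: str) -> None:
    # Benign tracing warnings are only surfaced when debugging.
    if FX_DEBUG_MODE:
        warnings.warn(message)
```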
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18580/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18580",
"html_url": "https://github.com/huggingface/transformers/pull/18580",
"diff_url": "https://github.com/huggingface/transformers/pull/18580.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18580.patch",
"merged_at": 1660228485000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18579
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18579/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18579/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18579/events
|
https://github.com/huggingface/transformers/pull/18579
| 1,335,924,615
|
PR_kwDOCUB6oc49Bm0J
| 18,579
|
Supporting seq2seq models for `bitsandbytes` integration
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
The previous logic for determining which keys not to convert to int8 was not sufficient: T5 models could not be converted correctly. This PR addresses the issue by adding an extra check inside the `get_key_to_not_convert` function for whether the model has tied weights.
cc @philschmid @sgugger
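An illustrative sketch of the tied-weights check; the real `get_key_to_not_convert` lives in transformers and may differ in its details:
```python
def get_key_to_not_convert(model):
    # If input and output embeddings are tied (common in seq2seq models
    # like T5), keep the tied module in full precision rather than int8.
    if getattr(model.config, "tie_word_embeddings", False):
        output_emb = model.get_output_embeddings()
        if output_emb is not None:
            for name, module in model.named_modules():
                if module is output_emb:
                    return name
    # Fall back to the last module name otherwise.
    return list(model.named_modules())[-1][0]
```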
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18579/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18579/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18579",
"html_url": "https://github.com/huggingface/transformers/pull/18579",
"diff_url": "https://github.com/huggingface/transformers/pull/18579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18579.patch",
"merged_at": 1660313709000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18578
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18578/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18578/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18578/events
|
https://github.com/huggingface/transformers/pull/18578
| 1,335,921,910
|
PR_kwDOCUB6oc49BmPC
| 18,578
|
Return the permuted hidden states if return_dict=True
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
Fixes an issue where the shape of the returned hidden states differs depending on whether `return_dict` is True or False for ConvNext.
The outputs of ConvNext are permuted in the final layer `TFConvNextMainLayer` to put them in `(batch_size, num_channels, height, width)` format, matching the PyTorch model. However, if `return_dict=True`, the non-permuted hidden states were returned.
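A minimal sketch of the permutation in question (variable names are illustrative, not the actual layer code):
```python
import tensorflow as tf

def permute_hidden_states(hidden_states):
    # (batch, height, width, channels) -> (batch, channels, height, width),
    # applied to every hidden state so both return paths match PyTorch.
    return tuple(tf.transpose(h, perm=(0, 3, 1, 2)) for h in hidden_states)
```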
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18578/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18578",
"html_url": "https://github.com/huggingface/transformers/pull/18578",
"diff_url": "https://github.com/huggingface/transformers/pull/18578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18578.patch",
"merged_at": 1660235531000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18577
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18577/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18577/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18577/events
|
https://github.com/huggingface/transformers/pull/18577
| 1,335,794,348
|
PR_kwDOCUB6oc49BK_X
| 18,577
|
Add type hints for ViLT models
|
{
"login": "donelianc",
"id": 7807897,
"node_id": "MDQ6VXNlcjc4MDc4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donelianc",
"html_url": "https://github.com/donelianc",
"followers_url": "https://api.github.com/users/donelianc/followers",
"following_url": "https://api.github.com/users/donelianc/following{/other_user}",
"gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donelianc/subscriptions",
"organizations_url": "https://api.github.com/users/donelianc/orgs",
"repos_url": "https://api.github.com/users/donelianc/repos",
"events_url": "https://api.github.com/users/donelianc/events{/privacy}",
"received_events_url": "https://api.github.com/users/donelianc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @Rocketknight1, this PR is ready to review. Can you help me here, please?"
] | 1,660
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adding type hints for the `ViLT` model (PyTorch). Issue #16059.
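A hedged sketch of the kind of typed signature this adds; the argument list is abbreviated and not the actual ViLT signature:
```python
from typing import Optional

import torch

# Illustrative only: forward arguments gain Optional tensor annotations.
def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    pixel_values: Optional[torch.FloatTensor] = None,
    return_dict: Optional[bool] = None,
):
    ...
```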
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task requested [here](https://github.com/huggingface/transformers/issues/16059#issuecomment-1210179575)._
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18577/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18577",
"html_url": "https://github.com/huggingface/transformers/pull/18577",
"diff_url": "https://github.com/huggingface/transformers/pull/18577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18577.patch",
"merged_at": 1660302688000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18576
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18576/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18576/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18576/events
|
https://github.com/huggingface/transformers/pull/18576
| 1,335,729,177
|
PR_kwDOCUB6oc49A9Kw
| 18,576
|
update doc for perf_train_cpu_many, add intel mpi introduction
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger please help review, thanks very much",
"_The documentation is not available anymore as the PR was closed or merged._",
"Failure is unrelated to this PR (would disappear with a rebase) so merging. Thanks again!"
] | 1,660
| 1,666
| 1,660
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
Update the doc for `perf_train_cpu_many` and add an Intel MPI introduction.
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18576/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18576",
"html_url": "https://github.com/huggingface/transformers/pull/18576",
"diff_url": "https://github.com/huggingface/transformers/pull/18576.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18576.patch",
"merged_at": 1660307788000
}
|