| Column | Dtype | Values |
| --- | --- | --- |
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
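Each row below is a GitHub issue or pull request record following this schema, with nested `user`, `labels`, and `reactions` objects. A minimal sketch of how such a record might be inspected (field values taken from the first row below; the helper name `is_pull_request` is ours, not part of any library — in the GitHub issues API, PRs are issues that carry a non-null `pull_request` object):

```python
import json

# One record, abbreviated to a few schema fields (values from the first row below).
record_json = """
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/17371",
  "number": 17371,
  "title": "tokenizer object incorrectly modified in PreTrainedTokenizerFast.train_new_from_iterator()",
  "state": "closed",
  "locked": false,
  "author_association": "CONTRIBUTOR",
  "pull_request": null
}
"""

def is_pull_request(record: dict) -> bool:
    # Plain issues have a null "pull_request" field; PRs carry an object
    # with html_url / diff_url / patch_url, as in the rows further down.
    return record.get("pull_request") is not None

record = json.loads(record_json)
print(record["number"], record["state"], is_pull_request(record))  # 17371 closed False
```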
https://api.github.com/repos/huggingface/transformers/issues/17371
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17371/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17371/comments
https://api.github.com/repos/huggingface/transformers/issues/17371/events
https://github.com/huggingface/transformers/issues/17371
1,243,448,735
I_kwDOCUB6oc5KHYWf
17,371
tokenizer object incorrectly modified in PreTrainedTokenizerFast.train_new_from_iterator()
{ "login": "dctelus", "id": 93535080, "node_id": "U_kgDOBZM7aA", "avatar_url": "https://avatars.githubusercontent.com/u/93535080?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dctelus", "html_url": "https://github.com/dctelus", "followers_url": "https://api.github.com/users/dctelus/followers", "following_url": "https://api.github.com/users/dctelus/following{/other_user}", "gists_url": "https://api.github.com/users/dctelus/gists{/gist_id}", "starred_url": "https://api.github.com/users/dctelus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dctelus/subscriptions", "organizations_url": "https://api.github.com/users/dctelus/orgs", "repos_url": "https://api.github.com/users/dctelus/repos", "events_url": "https://api.github.com/users/dctelus/events{/privacy}", "received_events_url": "https://api.github.com/users/dctelus/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe also cc @Narsil " ]
1,653
1,654
1,654
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.19.2 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce this behavior: 1) Create a tokenizer object from `tokenizers.implementations.ByteLevelBPETokenizer()` 2) Instantiate a `PreTrainedTokenizerFast` with it (`PreTrainedTokenizerFast.__init__(tokenizer_object=tokenizer)`) 3) Train the model using `PreTrainedTokenizerFast.train_new_from_iterator()` 4) Encode a token not found in the training set 5) Notice that the tokenized string has an unk token (or nothing) instead of the token See this example: ``` import tokenizers import transformers tokenizer = tokenizers.implementations.ByteLevelBPETokenizer() tokenizer_fast = transformers.PreTrainedTokenizerFast(tokenizer_object=tokenizer).train_new_from_iterator(text_iterator=["a" for _ in range(1000)], length=1000, vocab_size=5000) ## {'input_ids': [0], 'token_type_ids': [0], 'attention_mask': [1]} print(tokenizer_fast("ab")) ``` This is because in `tokenization_utils_fast.py`, in `PreTrainedTokenizerFast.train_new_from_iterator`, the following code snippet ignores the class of the tokenizer_object that was passed inside the `__init__`: ``` print(type(self._tokenizer)) # <class 'tokenizers.implementations.byte_level_bpe.ByteLevelBPETokenizer'> tokenizer = TokenizerFast.from_str(json.dumps(tokenizer_json)) print(type(tokenizer)) # <class 'tokenizers.Tokenizer'> ``` And the `ByteLevelBPETokenizer` has a custom `train_from_iterator` which provides an initial_alphabet. This issue does not arise when using only the `tokenizers` library for training: ``` import tokenizers import transformers tokenizer = tokenizers.implementations.ByteLevelBPETokenizer() tokenizer.train_from_iterator(iterator=["a" for _ in range(1000)], length=1000, vocab_size=5000) tokenizer_fast = transformers.PreTrainedTokenizerFast(tokenizer_object=tokenizer) ## {'input_ids': [64, 65], 'token_type_ids': [0, 0], 'attention_mask': [1, 1]} print(tokenizer_fast("ab")) ``` ### Expected behavior I would expect the `transformers` library to use the tokenizer_object's train_from_iterator, even if that object is from a specific implementation. This is currently fixable on my side by providing **kwargs to train_new_from_iterator to emulate what the `ByteLevelBPETokenizer` is doing, but it is something I expect the library to handle by itself.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17371/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17370/comments
https://api.github.com/repos/huggingface/transformers/issues/17370/events
https://github.com/huggingface/transformers/issues/17370
1,243,301,760
I_kwDOCUB6oc5KG0eA
17,370
`_fast_init` overwrites weights passed to custom model
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1834081910, "node_id": "MDU6TGFiZWwxODM0MDgxOTEw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Usage", "name": "Usage", "color": "e28436", "default": false, "description": "General questions about the library" } ]
closed
false
null
[]
[ "To describe what happens here:\r\n\r\nIf `_fast_init=True` (which is set by default) the following happens. All weight names of the `state_dict` loaded from `bert-base-uncased` are compared with all weight names of a random `state_dict` that is generated when calling `MyCustomModel()`. \r\nInside `from_pretrained(...)` at this point: https://github.com/huggingface/transformers/blob/7213a40bd914776a6dcebcc96353c4cf8c8c6668/src/transformers/modeling_utils.py#L2284 `custom_layer` is considered a missing layer because it cannot be found in the `state_dict` loaded from `bert-base-uncased` and thus will be randomly initialized, which happens **after** `self.custom_layer` has been set to the passed `layer` weight meaning `self.custom_layer` will be overridden.\r\n\r\nThis seems unexpected from the outside, but after having looked a bit into the internals of `from_pretrained(...)`, we sadly cannot really change this behavior and also don't consider it a bug.\r\n\r\nIn our opinion the problems rather lies in the following:\r\n\r\n- 1. We **never** abstract `nn.Module` model classes that have an `__init__(...)` method except for the most basic `PretrainedModel` class which has an absolute minimal `__init__(...)` method (see [here](https://github.com/huggingface/transformers/blob/7213a40bd914776a6dcebcc96353c4cf8c8c6668/src/transformers/modeling_utils.py#L980) that just sets the config. \r\nIn the whole code base of `transformers`, models, such as `BertModel`, only abstract from their respective `...PretrainedModel` class, *e.g.* `BertModel` abstracts from `BertPretrainedModel`, but those classes don't have an `__init__` method, see [here]( https://github.com/huggingface/transformers/blob/3fd7de49f4d53832621ef67cfd1825b000323250/src/transformers/models/bert/modeling_bert.py#L733). \r\n\r\nThis way we can be sure that the only `__init__(...)` method that matters is the one of `BertModel`. Now this is broken here. 
`MyCustomModel` abstracts away `BertModel` which is exactly not what we want. \r\nIn short, we **always** favor **modularization over abstraction**. In our opinion, it is less error-prone and easier to understand. \r\n\r\n- 2. We never have want to allow passing layers, such as `nn.Linear` into the `__init__` for the outer-most `nn.Module` classes, not even conditionally. The reason is that, if we allow / recommend this design, it would also mean that one could / should pass a trainable layer through `from_pretrained(...)`. We definitely don't want this as it breaks the assumption that a model is **always** self-contained a single checkpoint, *e.g.* `bert-base-cased` and would therefore make `from_pretrained(...)` very complex. So we never want users to pass trainable layers through `from_pretrained(...)`.\r\n\r\n", "To solve the problem above, we recommend to instead of using abstraction, to just use modularization:\r\n\r\n\r\n```python\r\nfrom torch.nn import Linear\r\nfrom transformers import BertPreTrainedModel, BertModel\r\n\r\n\r\nclass MyCustomModel(BertPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.bert = BertModel(config)\r\n self.custom_layer = Linear(1024, 1024)\r\n\r\n def forward(self, ...):\r\n self.bert(....)\r\n\r\n def set_custom_layer(self, linear_embed):\r\n self.custom_layer = linear_embed\r\n```\r\n\r\nNote that this is also exactly how we coded up `BertForMaskedLM`: https://github.com/huggingface/transformers/blob/3fd7de49f4d53832621ef67cfd1825b000323250/src/transformers/models/bert/modeling_bert.py#L1292", "Thanks @patrickvonplaten - btw you tagged \"josh-heyer\" rather than \"john\"* haha. I'll take a deeper look and see how we can change our model initialization.", "Ups sorry :sweat_smile: ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,653
1,656
1,656
MEMBER
null
### System Info ```shell Transformers > 4.6.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Copy-pasting an issue from https://github.com/huggingface/transformers/pull/11471#issuecomment-1132324119 into a new issue to make it more visible, as multiple people could have this problem. When abstracting a `transformers` model and passing custom layers to the init method, the initialization can show strange behavior. Consider the following case (taken from @josh-heyer ): ```python from torch.nn import Linear from transformers import BertModel class MyCustomModel(BertModel): def __init__(self, config, custom_layer=None): super().__init__(config) if custom_layer is not None: self.custom_layer = custom_layer else: self.custom_layer = Linear(1024, 1024) if __name__ == "__main__": import transformers print(transformers.__version__) layer = Linear(1024, 1024) print(layer.weight.sum()) custom_model = MyCustomModel.from_pretrained('bert-base-uncased', custom_layer=layer) # used to be the same as the layer above, but it is "re-initialized" in the from_pretrained method print(custom_model.custom_layer.weight.sum()) ``` Since `_fast_init` was introduced in 4.6.0 (https://github.com/huggingface/transformers/pull/11471), what happens here is that the weights of the custom layer are overridden. ### Expected behavior It might be reasonable to state that the custom layer should **not** be overridden.
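The overwrite described in this issue can be mimicked without `transformers` at all. The following is a toy sketch of the mechanism, not the library's actual code: a `from_pretrained`-style loader first runs `__init__` (attaching the passed layer), then copies in checkpoint weights, then re-initializes every parameter missing from the checkpoint, which clobbers the layer passed in step one. All names here (`ToyModel`, the parameter keys) are illustrative.

```python
import random

class ToyModel:
    """Stand-in for MyCustomModel: one checkpoint-backed weight plus a custom layer."""
    def __init__(self, custom_layer=None):
        self.params = {
            "bert.weight": random.random(),
            "custom_layer.weight": custom_layer if custom_layer is not None else random.random(),
        }

def from_pretrained(checkpoint: dict, custom_layer=None) -> ToyModel:
    # 1) __init__ runs first, so the passed layer is attached here...
    model = ToyModel(custom_layer=custom_layer)
    # 2) ...then checkpoint weights are copied in,
    for name, value in checkpoint.items():
        model.params[name] = value
    # 3) ...and every key *missing* from the checkpoint is re-initialized
    #    (the "_fast_init"-like step), overwriting the layer from step 1.
    for name in set(model.params) - set(checkpoint):
        model.params[name] = random.random()  # random.random() is in [0, 1)
    return model

checkpoint = {"bert.weight": 0.5}  # the checkpoint has no "custom_layer.weight" key
model = from_pretrained(checkpoint, custom_layer=123.0)
print(model.params["custom_layer.weight"] == 123.0)  # False: it was re-initialized
```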
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17370/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17369
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17369/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17369/comments
https://api.github.com/repos/huggingface/transformers/issues/17369/events
https://github.com/huggingface/transformers/pull/17369
1,243,204,031
PR_kwDOCUB6oc44MZRb
17,369
Try to make push CI less noisy on commit pages.
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik This is still in WIP (just need to add back the failure report in the caller), but I think you can review it already :).\r\n\r\nI follow [this guide](https://medium.com/prompt/trigger-another-github-workflow-without-using-a-personal-access-token-f594c21373ef) to add 2 keys in our `Settings`.", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Is it necessary to show the push CI (non Circle CI ones) results (simplified version) on the commit history page?\r\n\r\nThe current approach needs to find a way to wait (on a GitHub hosted CPU machine) the `actual push CI` run finish and get back the results. This is not obvious, as the 2 workflow are somehow `independent`. I will try in this week if this is necessary.", "Since those are reported on slack, I don't think so. Just having the circle CI results (since those are not on slack) is good IMO.", "Hi @sgugger , @LysandreJik \r\n\r\nSorry, I should have checked the results after this PR being merged.\r\n\r\nIt turns out that **the whole push CI jobs are still shown** in the dropdown menu when we click the green/red check/cross icons, see\r\nhttps://github.com/huggingface/transformers/commit/39e146146b5545c89d3bc3cd5a0befd491757473\r\n\r\n- It seems to me this check status **relies on the commit SHA**, rather than the branches where that workflow is triggered.\r\n- We can use **on: workflow_run**, but we will lose important information in the workflow run page, see https://github.com/huggingface/accelerate/actions/workflows/on-merge.yml\r\n- Changing the workflow/job names won't alter the order in the check status list\r\n\r\nI will come back to this after the nightly PyTorch CI and past CI tasks, if it is OK for @sgugger .\r\n\r\n", "I think it's more important to have something less noisy to debug when a break happened, personally.", "It's indeed important to make debugging easier. 
But could you let me know which following works for you 🙏 \r\n\r\n- changing the (push CI) trigger event to `on: workflow_run`: so we can see clearly on commit history page (for CircleCI tests) what go wrong, and we don't really care the push CI workflow run pages (less informative) - we rely on Slack push CI report\r\n - this could be done quickly (if everything is working fine)\r\n- We should keep `on: push`, but try to run the whole tests as a single job\r\n - this will take more time - as we also like to keep the Slack report as the current format\r\n - so the question becomes if I should work on the past CI first, which is already delayed for a few month now.\r\n\r\nAnd one remark: it seems to me that the `ci/circleci` checks are always at the end - after all Github Actions check status", "The first option is what I asked for at the beginning :-). It's impossible to even see the full title of the GitHub Action failing jobs, so seeing their failures in the commit is completely useless IMO. The slack CI reports are great and more than enough for those failures.\r\n\r\n> And one remark: it seems to me that the ci/circleci checks are always at the end - after all Github Actions check status\r\n\r\nYes, you still have to scroll through 200 checks on several commits when trying to debug where the break happened, so leaving as is is not a viable solution." ]
1,653
1,655
1,654
COLLABORATOR
null
# What does this PR do? Try to make push CI less noisy on commit pages. (The commit page currently shows the status of all push CI jobs (more than 256 now), which makes it hard to see which tests failed.) ### Idea 1. A push to `main` triggers a workflow that pushes to another branch, `push-ci` 2. A push to `push-ci` triggers the actual push CI ~~- **TODO**: try to get failures in `2`, and add them to `1`. Fail the job if there is any failure.~~ Example runs: [caller workflow run] https://github.com/huggingface/transformers/actions/runs/2358695597 [actual push CI run] https://github.com/huggingface/transformers/actions/runs/2358698408
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17369/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17369", "html_url": "https://github.com/huggingface/transformers/pull/17369", "diff_url": "https://github.com/huggingface/transformers/pull/17369.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17369.patch", "merged_at": 1654157967000 }
https://api.github.com/repos/huggingface/transformers/issues/17368
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17368/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17368/comments
https://api.github.com/repos/huggingface/transformers/issues/17368/events
https://github.com/huggingface/transformers/pull/17368
1,243,171,681
PR_kwDOCUB6oc44MSXF
17,368
Pin dill to fix examples
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merging to fix the example test failures on main" ]
1,653
1,653
1,653
COLLABORATOR
null
# What does this PR do? This PR addresses the recent failures in the examples by pinning dill to exclude the latest version.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17368/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17368", "html_url": "https://github.com/huggingface/transformers/pull/17368", "diff_url": "https://github.com/huggingface/transformers/pull/17368.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17368.patch", "merged_at": 1653058858000 }
https://api.github.com/repos/huggingface/transformers/issues/17367
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17367/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17367/comments
https://api.github.com/repos/huggingface/transformers/issues/17367/events
https://github.com/huggingface/transformers/pull/17367
1,242,987,880
PR_kwDOCUB6oc44LrSK
17,367
Fix cvt docstrings
{ "login": "AnugunjNaman", "id": 42839570, "node_id": "MDQ6VXNlcjQyODM5NTcw", "avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnugunjNaman", "html_url": "https://github.com/AnugunjNaman", "followers_url": "https://api.github.com/users/AnugunjNaman/followers", "following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}", "gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions", "organizations_url": "https://api.github.com/users/AnugunjNaman/orgs", "repos_url": "https://api.github.com/users/AnugunjNaman/repos", "events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}", "received_events_url": "https://api.github.com/users/AnugunjNaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Will merge as soon as it's green!", "@LysandreJik @NielsRogge You can merge it now. Thanks for review! 😊" ]
1,653
1,653
1,653
CONTRIBUTOR
null
# What does this PR do? This PR does the following: 1. Removes the error in `README.md` where the `CvT` description was a copy of `CTRL`'s. 2. Fixes the image `size` for the `feature extractor`, which was set to `224`. 3. Fixes the input docstrings of the forward methods of `CvtModel` and `CvtForImageClassification` (head mask etc. not needed). @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17367/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17367", "html_url": "https://github.com/huggingface/transformers/pull/17367", "diff_url": "https://github.com/huggingface/transformers/pull/17367.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17367.patch", "merged_at": 1653315069000 }
https://api.github.com/repos/huggingface/transformers/issues/17366
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17366/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17366/comments
https://api.github.com/repos/huggingface/transformers/issues/17366/events
https://github.com/huggingface/transformers/pull/17366
1,242,964,463
PR_kwDOCUB6oc44LmLT
17,366
Fix a typo `relative_postion_if_large` -> `relative_position_if_large`
{ "login": "stancld", "id": 46073029, "node_id": "MDQ6VXNlcjQ2MDczMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stancld", "html_url": "https://github.com/stancld", "followers_url": "https://api.github.com/users/stancld/followers", "following_url": "https://api.github.com/users/stancld/following{/other_user}", "gists_url": "https://api.github.com/users/stancld/gists{/gist_id}", "starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stancld/subscriptions", "organizations_url": "https://api.github.com/users/stancld/orgs", "repos_url": "https://api.github.com/users/stancld/repos", "events_url": "https://api.github.com/users/stancld/events{/privacy}", "received_events_url": "https://api.github.com/users/stancld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,653
1,653
1,653
CONTRIBUTOR
null
# What does this PR do? This PR fixes a minor typo in the `T5` and `WavLM` model code. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17366/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17366", "html_url": "https://github.com/huggingface/transformers/pull/17366", "diff_url": "https://github.com/huggingface/transformers/pull/17366.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17366.patch", "merged_at": 1653064872000 }
https://api.github.com/repos/huggingface/transformers/issues/17365
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17365/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17365/comments
https://api.github.com/repos/huggingface/transformers/issues/17365/events
https://github.com/huggingface/transformers/issues/17365
1,242,918,977
I_kwDOCUB6oc5KFXBB
17,365
Export Generated Text 1 Token at a Time
{ "login": "anujnayyar1", "id": 29458156, "node_id": "MDQ6VXNlcjI5NDU4MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/29458156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anujnayyar1", "html_url": "https://github.com/anujnayyar1", "followers_url": "https://api.github.com/users/anujnayyar1/followers", "following_url": "https://api.github.com/users/anujnayyar1/following{/other_user}", "gists_url": "https://api.github.com/users/anujnayyar1/gists{/gist_id}", "starred_url": "https://api.github.com/users/anujnayyar1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anujnayyar1/subscriptions", "organizations_url": "https://api.github.com/users/anujnayyar1/orgs", "repos_url": "https://api.github.com/users/anujnayyar1/repos", "events_url": "https://api.github.com/users/anujnayyar1/events{/privacy}", "received_events_url": "https://api.github.com/users/anujnayyar1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "WDYT of such a feature @Narsil?", "I like the idea.\r\n\r\nHow would that look like though code wise ?\r\n\r\n```python\r\n\r\npipe = pipeline('text-generation\")\r\n# Regular usage\r\n\r\ngenerated = pipe(\"This is my prompt\")\r\n\r\nfor out in pipe(\"This is my prompt\", continuous=True):\r\n # out = [{\"generated_text\": \" and\"}]\r\n #\r\n```\r\nThe biggest caveat with this idea is this parameters will probably be hard to cumulate with things like `batch_size` and `num_beams`. We can disable some options if some combinations of arguments are provided, but in general I prefer when all combinations of parameters are available.\r\n\r\nOther idea would be to somehow add a callback within `generate` to receive the ids also as they come in.\r\nWhat I don't like, is callback (not easy to work with and debug), but it could be much easier to implement, since we're just injecting something within `generate`.\r\n\r\n```python\r\ndef intermediate_results(out):\r\n print(out)\r\n \r\npipe = pipeline(\"text-generation\")\r\nout = pipe(\"This is my prompt\", continous_fn=print_intermediate_results))\r\n```\r\n\r\nPinging @patrickvonplaten to see if you have ideas to get continuous tokens within `generate`.", "@gante @patil-suraj could you take a look here?", "As @Narsil said, in greedy search/sample generation, we can loop over and call generation with one new token at a time. The performance penalty is not that big, a bit over 2x ([on colab](https://colab.research.google.com/drive/1BQgO3HBRs7sYpKCGFs4LpXvmZFJ0QWEp?usp=sharing), the penalty probably grows with sequence length), and is trivial to implement.\r\n\r\nFor beam search generation, either there is some channel to push sequences as they are generated, or the whole generation logic is somehow exposed to correctly keep track of running sequences/scores. The latter seems unfeasible, the former could be done e.g. 
with some form of asynchronous queue (one thread runs generate and pushes to the queue, another reads from the queue). \r\n\r\nI'm not experienced in these matters, but... the cost/benefit ratio doesn't seem good (for beam search) 😅 ", "I like the idea, but I think it won't be trivial to implement given the current complexity of `generate`. Even for greedy search/sampling, simply calling `generate` for one token at a time will be very slow, as it won't be able to take advantage of caching.\r\n\r\nAdding callback seems a good idea IMO as it won't clutter `generate` a lot. wdyt @patrickvonplaten @gante ", "Both can leverage the current `generate` and do NOT call `generate` 1 step at a time in my mind.\r\nBoth would use a callback within `generate` but the idea is to understand how a user would use those results.\r\n\r\nI was merely asking how it should look live from a pipeline user perspective.", "As a user OpenAI deal with this quite well. \r\n\r\nThey use server sent events to send over partial completions - aka the JavaScript EventSource library\r\n\r\nSee “stream”\r\nhttps://beta.openai.com/docs/api-reference/completions/create", "To be honest, I'm not in favor of adding this to `generate` - it's too much of a nice-to-have feature and would unnecessarily increase maintenance and make `generate` much harder to understand than it already is", "If it's possible to make it easy and clean with a general `callbacks: Optional[GenerationCallback] = None` function arg I think I'd be fine with it though, but would need to see a PR for it", "Then inside `generate()` ideally we only have one `if callbacks is not None: then call all callbacks` code", "```python\r\nfrom transformers import pipeline\r\nimport torch\r\nimport threading\r\nfrom transformers.generation_stopping_criteria import StoppingCriteria, StoppingCriteriaList\r\nfrom queue import Queue\r\n\r\n\r\npipe = pipeline(model=\"hf-internal-testing/tiny-random-gpt2\", task=\"text-generation\", 
device=0)\r\n\r\n\r\nclass Stream(StoppingCriteria):\r\n def __init__(self, q):\r\n self.q = q\r\n\r\n def __call__(self, input_ids, scores) -> bool:\r\n self.q.put(input_ids)\r\n return False\r\n\r\n\r\nqueue = Queue()\r\n\r\n\r\ndef gen():\r\n pipe.model.generate(\r\n torch.LongTensor([[0, 1, 2]]).cuda(),\r\n stopping_criteria=StoppingCriteriaList([Stream(queue)]),\r\n max_new_tokens=10,\r\n )\r\n print(\"Finished generation\")\r\n queue.put(False)\r\n\r\n\r\nthreading.Thread(target=gen).start()\r\n\r\nwhile True:\r\n i = queue.get()\r\n if i is False:\r\n break\r\n else:\r\n print(\"Got i\", pipe.tokenizer.decode(i[0]))\r\n```\r\n\r\nWhat do you think about this ?\r\n\r\nI thought this would be an elegant solution to the problem.\r\nBasically send generate to another thread and wait for results as they are coming.\r\n\r\nThe main drawback for pipelines as I said, is the other parameters combinations + backward compatibility support. (+ Threads are a nightmare and if users are already using pipelines within thread/async/multiprocessing bad things might happen)", "I'd be fine with this design - think it's nice! Think we should maybe put it under a new class though, called `Callback` instead of `StoppingCriteria` ?", "> Think we should maybe put it under a new class though, called Callback instead of StoppingCriteria ?\r\n\r\nYes for sure, this was the minimal code, definitely not fit for merge.\r\nAgain, lots of caveats too with this approach, but at least it could be implemented relatively fast.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Has there been any progress on this since last year?\r\n\r\nI am interested in generating one token at a time for an interactive text generation web UI. But simply calling `model.generate` with `max_new_tokens=1` multiple times is a lot slower (about 2x) than generating all the tokens at once.", "@oobabooga no progress, but I have it in my backlog for exploration. Very unlikely that it will see the light of day in the next ~6 months, though :)", "FYI, I made a streaming generation service for Hugging Face [transformers](https://github.com/huggingface/transformers) that is fully compatible with the OpenAI API: https://github.com/hyperonym/basaran" ]
1,653
1,678
1,659
NONE
null
### Feature request When using the text-generation pipeline. We would like to be able export each token as it is generated. Currently we have to wait for the generation to be completed to view the results. ### Motivation Using text-generation in a production environment, this would greatly improve the user experience. Users currently have to wait for text to be generated. If we are able to implement this they could read text as it is generated by the models. ### Your contribution I would be able to bug check this feature if it was added!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17365/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/17365/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17364
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17364/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17364/comments
https://api.github.com/repos/huggingface/transformers/issues/17364/events
https://github.com/huggingface/transformers/issues/17364
1,242,855,276
I_kwDOCUB6oc5KFHds
17,364
Nana123
{ "login": "Nana12345678910", "id": 95056882, "node_id": "U_kgDOBapz8g", "avatar_url": "https://avatars.githubusercontent.com/u/95056882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nana12345678910", "html_url": "https://github.com/Nana12345678910", "followers_url": "https://api.github.com/users/Nana12345678910/followers", "following_url": "https://api.github.com/users/Nana12345678910/following{/other_user}", "gists_url": "https://api.github.com/users/Nana12345678910/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nana12345678910/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nana12345678910/subscriptions", "organizations_url": "https://api.github.com/users/Nana12345678910/orgs", "repos_url": "https://api.github.com/users/Nana12345678910/repos", "events_url": "https://api.github.com/users/Nana12345678910/events{/privacy}", "received_events_url": "https://api.github.com/users/Nana12345678910/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Savings 💰 can be a real issue, but it does not seem to be a `transformers` issue :)" ]
1,653
1,653
1,653
NONE
null
Savings
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17364/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17363
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17363/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17363/comments
https://api.github.com/repos/huggingface/transformers/issues/17363/events
https://github.com/huggingface/transformers/issues/17363
1,242,638,261
I_kwDOCUB6oc5KESe1
17,363
TFGenerationMixin.generate should support a parameter such as logit_mask
{ "login": "TheHonestBob", "id": 58240629, "node_id": "MDQ6VXNlcjU4MjQwNjI5", "avatar_url": "https://avatars.githubusercontent.com/u/58240629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheHonestBob", "html_url": "https://github.com/TheHonestBob", "followers_url": "https://api.github.com/users/TheHonestBob/followers", "following_url": "https://api.github.com/users/TheHonestBob/following{/other_user}", "gists_url": "https://api.github.com/users/TheHonestBob/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheHonestBob/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheHonestBob/subscriptions", "organizations_url": "https://api.github.com/users/TheHonestBob/orgs", "repos_url": "https://api.github.com/users/TheHonestBob/repos", "events_url": "https://api.github.com/users/TheHonestBob/events{/privacy}", "received_events_url": "https://api.github.com/users/TheHonestBob/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @TheHonestBob 👋 If your task is not TensorFlow-dependent, would the constraints-related arguments in the more complete pytorch version help? ([docs here](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin))", "> Hey @TheHonestBob 👋 If your task is not TensorFlow-dependent, would the constraints-related arguments in the more complete pytorch version help? ([docs here](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin))\r\n\r\nthanks for your reply, I read this docs,in pytorch generation_utils.py logits_processor parameter maybe I want,but not in tensorflow-dependent,I always use tf to code. on the other hand,I found that in the TFPreTrainedModel's subclass call func always return TFSeq2SeqLMOutput class, it's very python nice, but TFPreTrainedModel don't Implement fit() func, there will be two problem in my opinion,1. if I inherit TFPreTrainedModel to Implement my model ,I can't use fit() func, because fit func require call() func return fixed format, 2. if I use tf.keras.model.Model to Implement my model, I can't use generate func, as well as inherit TFGenerationMixin also can't solve it.", "`TFPreTrainedModel` is not meant to be a stand-alone model, but something your model inherits from :) See [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1260) for an example. When built this way, `fit()` will work as usual with Keras, and the model will have `generate()` support. However, it is quite complex to build, as you might notice -- I'd recommend starting from an existing model.\r\n\r\nAs for the original logit masking feature, I'm going to tag @patil-suraj and @patrickvonplaten -- do we have some functionality related to this feature request? 
(see issue at the top)", "We have `bad_token_word_ids` which should do exactly this:\r\nhttps://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.bad_words_ids(List[List[int]],\r\n\r\nCould you try this @TheHonestBob ?", "> `TFPreTrainedModel` is not meant to be a stand-alone model, but something your model inherits from :) See [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1260) for an example. When built this way, `fit()` will work as usual with Keras, and the model will have `generate()` support. However, it is quite complex to build, as you might notice -- I'd recommend starting from an existing model.\r\n> \r\n> As for the original logit masking feature, I'm going to tag @patil-suraj and @patrickvonplaten -- do we have some functionality related to this feature request? (see issue at the top)\r\nthanks for your reply,I view lastest Bart source code, in my opinion,transformers overwrite so many tensorflow func, such as train_step compile, I haven't done more experiments to verify whether the latest code is more compatible with tensorflow, I feel that the transformers library is too integrated, and seems to lack the flexibility of tensorflow. If I need more flexible requirements, I may have to write more code. 
The transformers library looks more and more like an AI framework based on tensorflow, rather than an easy-to-use pre training model library, because the transformers library has its own entire training and prediction logic, Maybe the above is just that I don't know much about the transformers library.\r\n", "> We have `bad_token_word_ids` which should do exactly this: [https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.bad_words_ids(List[List[int]]](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.bad_words_ids(List%5BList%5Bint%5D%5D),\r\n> \r\n> Could you try this @TheHonestBob ?\r\n\r\nthanks for your reply, I try this, in my opinion,this parameter not python nice,if bad word more than 20000,or each batch data have different bad word,I think this parameter will work not well.", "> `TFPreTrainedModel` is not meant to be a stand-alone model, but something your model inherits from :) See [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1260) for an example. When built this way, `fit()` will work as usual with Keras, and the model will have `generate()` support. However, it is quite complex to build, as you might notice -- I'd recommend starting from an existing model.\r\n> \r\n> As for the original logit masking feature, I'm going to tag @patil-suraj and @patrickvonplaten -- do we have some functionality related to this feature request? (see issue at the top)\r\n\r\nthanks for your reply,I view lastest Bart source code, in my opinion,transformers overwrite so many tensorflow func, such as train_step compile, I haven't done more experiments to verify whether the latest code is more compatible with tensorflow, I feel that the transformers library is too integrated, and seems to lack the flexibility of tensorflow. 
If I need more flexible requirements, I may have to write more code. The transformers library looks more and more like an AI framework based on tensorflow, rather than an easy-to-use pre training model library, because the transformers library has its own entire training and prediction logic, Maybe the above is just that I don't know much about the transformers library.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,653
1,656
1,656
NONE
null
### Feature request when i use encoder-decoder model for relation extract task, I make sure the decoder output must in encoder input text. so a parameter such logit_mask should be necessary. ### Motivation when I use prompt finetuning a encoder-decoder to extract spo, I make sure the decoder output must in encoder input text. I try so many ways to Implement model constraint generation,but I filed,bad_world_ids parameter is not enough. ### Your contribution I think TFGenerationMixin.generate should support a parameter such as logit_mask to mask next tokens score.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17363/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17362
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17362/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17362/comments
https://api.github.com/repos/huggingface/transformers/issues/17362/events
https://github.com/huggingface/transformers/issues/17362
1,242,525,670
I_kwDOCUB6oc5KD2_m
17,362
About model loading without parameter
{ "login": "LiuJinzhe-Keepgoing", "id": 77256390, "node_id": "MDQ6VXNlcjc3MjU2Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/77256390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LiuJinzhe-Keepgoing", "html_url": "https://github.com/LiuJinzhe-Keepgoing", "followers_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/followers", "following_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/following{/other_user}", "gists_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/gists{/gist_id}", "starred_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/subscriptions", "organizations_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/orgs", "repos_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/repos", "events_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/events{/privacy}", "received_events_url": "https://api.github.com/users/LiuJinzhe-Keepgoing/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "To initialize a model without any pretrained weights, you can just load it with a configuration:\r\n\r\n```\r\nfrom transformers import ViltConfig, ViltForImageAndTextRetrieval\r\n\r\nconfig = ViltConfig()\r\nmodel = ViltForImageAndTextRetrieval(config)\r\n```\r\nAll weights will be randomly initialized.", "Closing this issue, as I believe I've answered it. Feel free to re-open." ]
1,653
1,654
1,654
NONE
null
https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/vilt/modeling_vilt.py#L1175 Hello, I want to use vilt in other fields through from_ Can Pretrain only load models without loading model parameters. Because the coco dataset is very different from my domain dataset. I don't know if there is any way.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17362/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17361
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17361/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17361/comments
https://api.github.com/repos/huggingface/transformers/issues/17361/events
https://github.com/huggingface/transformers/issues/17361
1,242,412,805
I_kwDOCUB6oc5KDbcF
17,361
Remove/ablating particular head in a Transformer model
{ "login": "Jirigesi", "id": 50706568, "node_id": "MDQ6VXNlcjUwNzA2NTY4", "avatar_url": "https://avatars.githubusercontent.com/u/50706568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jirigesi", "html_url": "https://github.com/Jirigesi", "followers_url": "https://api.github.com/users/Jirigesi/followers", "following_url": "https://api.github.com/users/Jirigesi/following{/other_user}", "gists_url": "https://api.github.com/users/Jirigesi/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jirigesi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jirigesi/subscriptions", "organizations_url": "https://api.github.com/users/Jirigesi/orgs", "repos_url": "https://api.github.com/users/Jirigesi/repos", "events_url": "https://api.github.com/users/Jirigesi/events{/privacy}", "received_events_url": "https://api.github.com/users/Jirigesi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I resolved this problem. Refer to Issue 850. https://github.com/huggingface/transformers/issues/850\r\n\r\nThanks " ]
1,653
1,653
1,653
NONE
null
### System Info ```shell I am currently working on some research in which I am to analyze the important heads, which is similar to the work done in the paper "Are Sixteen Heads Really Better than One?". Basically, what they did is to iteratively ablate each head of each layer one by one, and then observe the size of the reduction in the final model prediction performance. If the prediction metric value obtained after ablating a head is much lower than the original model in that all heads are used, it means that the ablated head is important. The approach they used is to time 0 to the output of the ablated head and time 1 to the rest of the considered heads. I am wondering if the Transformer models in huggingface could do something similar to this to evaluate the importance of each head? (Sorry to raise this as a bug. If I should not raise this question as bug, please advise and I will convert this from Bug to others.) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction No reproduction for this, maybe link to the paper's repo: https://github.com/pmichel31415/are-16-heads-really-better-than-1 ### Expected behavior ```shell Hope someone can give me indication how to evaluate which head is relatively more important in a Transformer model. Thanks ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17361/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17360
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17360/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17360/comments
https://api.github.com/repos/huggingface/transformers/issues/17360/events
https://github.com/huggingface/transformers/issues/17360
1,242,367,564
I_kwDOCUB6oc5KDQZM
17,360
VisualBERT: Low accuracy on VQA v2
{ "login": "nikich28", "id": 57706963, "node_id": "MDQ6VXNlcjU3NzA2OTYz", "avatar_url": "https://avatars.githubusercontent.com/u/57706963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikich28", "html_url": "https://github.com/nikich28", "followers_url": "https://api.github.com/users/nikich28/followers", "following_url": "https://api.github.com/users/nikich28/following{/other_user}", "gists_url": "https://api.github.com/users/nikich28/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikich28/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikich28/subscriptions", "organizations_url": "https://api.github.com/users/nikich28/orgs", "repos_url": "https://api.github.com/users/nikich28/repos", "events_url": "https://api.github.com/users/nikich28/events{/privacy}", "received_events_url": "https://api.github.com/users/nikich28/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Has anyone found a problem solution?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi I have the same issues? One possible reason is the id2label and label2id in config.json. The other possbile reason is the frcnn image features are not exact. Can anyone solve this issue? Thank you very much.", "@nikich28 Hi nick, have you solve this issue? many thanks" ]
1,652
1,699
1,657
NONE
null
I use the exact code from visual bert demo, but only got about 46% accuracy on VQA v2 validation data. Has anyone had the same issue?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17360/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17359
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17359/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17359/comments
https://api.github.com/repos/huggingface/transformers/issues/17359/events
https://github.com/huggingface/transformers/pull/17359
1,242,340,481
PR_kwDOCUB6oc44Jj76
17,359
[Test OPT] Add batch generation test opt
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Just adds a test to make sure that generation in batches works correctly. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17359/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17359", "html_url": "https://github.com/huggingface/transformers/pull/17359", "diff_url": "https://github.com/huggingface/transformers/pull/17359.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17359.patch", "merged_at": 1652996786000 }
https://api.github.com/repos/huggingface/transformers/issues/17358
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17358/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17358/comments
https://api.github.com/repos/huggingface/transformers/issues/17358/events
https://github.com/huggingface/transformers/pull/17358
1,242,337,486
PR_kwDOCUB6oc44JjQF
17,358
wip: testing https://github.com/huggingface/doc-builder/pull/214
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,653
1,653
CONTRIBUTOR
null
wip: testing https://github.com/huggingface/doc-builder/pull/214
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17358/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17358", "html_url": "https://github.com/huggingface/transformers/pull/17358", "diff_url": "https://github.com/huggingface/transformers/pull/17358.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17358.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17357
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17357/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17357/comments
https://api.github.com/repos/huggingface/transformers/issues/17357/events
https://github.com/huggingface/transformers/issues/17357
1,242,308,993
I_kwDOCUB6oc5KDCGB
17,357
Example of TFMarianMTModel is not working very well
{ "login": "Zhenzi-Weng", "id": 93537294, "node_id": "U_kgDOBZNEDg", "avatar_url": "https://avatars.githubusercontent.com/u/93537294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zhenzi-Weng", "html_url": "https://github.com/Zhenzi-Weng", "followers_url": "https://api.github.com/users/Zhenzi-Weng/followers", "following_url": "https://api.github.com/users/Zhenzi-Weng/following{/other_user}", "gists_url": "https://api.github.com/users/Zhenzi-Weng/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zhenzi-Weng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zhenzi-Weng/subscriptions", "organizations_url": "https://api.github.com/users/Zhenzi-Weng/orgs", "repos_url": "https://api.github.com/users/Zhenzi-Weng/repos", "events_url": "https://api.github.com/users/Zhenzi-Weng/events{/privacy}", "received_events_url": "https://api.github.com/users/Zhenzi-Weng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @Zhenzi-Weng, \r\n\r\nWhen I execute the code above, I'm getting an error. I'm assuming you want to execute the following:\r\n\r\n```python\r\nfrom transformers import MarianTokenizer, TFMarianMTModel\r\nfrom typing import List\r\n\r\nsrc = \"fr\" # source language\r\ntrg = \"en\" # target language\r\nsample_text = \"où est l'arrêt de bus ?\"\r\nmodel_name = f\"Helsinki-NLP/opus-mt-{src}-{trg}\"\r\n\r\nmodel = TFMarianMTModel.from_pretrained(model_name, from_pt=True)\r\ntokenizer = MarianTokenizer.from_pretrained(model_name)\r\nbatch = tokenizer([sample_text], return_tensors=\"tf\")\r\ngen = model.generate(**batch)\r\ntokenizer.batch_decode(gen, skip_special_tokens=True)\r\n```\r\n", "When executing the above code, I'm getting a pretty reasonable answer: \r\n\r\n```\r\n[\"Where's the bus stop?\"] \r\n```\r\n\r\n=> What is the problem here exactly?\r\n", "> When executing the above code, I'm getting a pretty reasonable answer:\r\n> \r\n> ```\r\n> [\"Where's the bus stop?\"] \r\n> ```\r\n> \r\n> => What is the problem here exactly?\r\n\r\nHello @patrickvonplaten , thanks for your reply. I tried your code and it did output a reasonable translation, but I believe your code is for Pytorch. I‘m using Tensorflow and the problem is the output will contain many invalid characters. Would you mind to try the following code again? Many thanks. My Tensorflow version is 2.4.1 and I installed transformers v4.19.0.\r\n`from transformers import MarianTokenizer, TFMarianMTModel\r\n\r\nsrc = \"en\" # source language\r\ntrg = \"zh\" # target language\r\n\r\nmodel_name = f\"Helsinki-NLP/opus-mt-{src}-{trg}\"\r\nmodel = TFMarianMTModel.from_pretrained(model_name)\r\ntokenizer = MarianTokenizer.from_pretrained(model_name)\r\n\r\nsample_text = \"My name is Wolfgang and I live in Berlin\"\r\nbatch = tokenizer([sample_text], return_tensors=\"tf\")\r\n\r\ngenerated_ids = model.generate(**batch)\r\ntokenizer.batch_decode(generated_ids, skip_special_tokens=True)`", "Hey @Zhenzi-Weng,\r\n\r\nNote that my codesnippet above is for Tensorflow, the model is loaded into a `TFMarianMTModel` class. We just need to have PyTorch installed to convert the PT weights to TF before ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @patrickvonplaten, related to this question, why is it that the TF version of MarianMTModel much slower than the PT version? Maybe it has something to do with the length of generating output?", "TF is generally much slower in eager mode than PyTorch - you could try to compile the generate method in Tensorflow which should speed things up :-) \r\n\r\ncc @gante ", "@jamie0725 correct, TF (eager execution) is much slower than PyTorch. However, with XLA, it should be much faster. Have a look at [this blog post](https://huggingface.co/blog/tf-xla-generate)!" ]
1,652
1,667
1,656
NONE
null
Hello, I copied the TFMarianMTModel example code from [https://huggingface.co/docs/transformers/v4.19.2/en/model_doc/marian#transformers.TFMarianMTModel](url) and compiled in Tensorflow 2.4.1. But it turned out the output length of "model.generate(**batch)" is always 512. ```python from transformers import MarianTokenizer, TFMarianMTModel from typing import List src = "fr" # source language trg = "en" # target language sample_text = "où est l'arrêt de bus ?" model_name = f"Helsinki-NLP/opus-mt-{src}-{trg}" model = TFMarianMTModel.from_pretrained(model_name) tokenizer = MarianTokenizer.from_pretrained(model_name) batch = tokenizer([sample_text], return_tensors="tf") gen = model.generate(**batch) tokenizer.batch_decode(gen, skip_special_tokens=True) ``` Could you help with it? @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17357/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17356/comments
https://api.github.com/repos/huggingface/transformers/issues/17356/events
https://github.com/huggingface/transformers/pull/17356
1,242,242,333
PR_kwDOCUB6oc44JPfD
17,356
Fixes #17128 .
{ "login": "mygithubid1", "id": 19863166, "node_id": "MDQ6VXNlcjE5ODYzMTY2", "avatar_url": "https://avatars.githubusercontent.com/u/19863166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mygithubid1", "html_url": "https://github.com/mygithubid1", "followers_url": "https://api.github.com/users/mygithubid1/followers", "following_url": "https://api.github.com/users/mygithubid1/following{/other_user}", "gists_url": "https://api.github.com/users/mygithubid1/gists{/gist_id}", "starred_url": "https://api.github.com/users/mygithubid1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mygithubid1/subscriptions", "organizations_url": "https://api.github.com/users/mygithubid1/orgs", "repos_url": "https://api.github.com/users/mygithubid1/repos", "events_url": "https://api.github.com/users/mygithubid1/events{/privacy}", "received_events_url": "https://api.github.com/users/mygithubid1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for your contribution, @mygithubid1!" ]
1,652
1,654
1,654
CONTRIBUTOR
null
# What does this PR do? Finalize changes as per this [PR](https://github.com/huggingface/transformers/pull/17277). Sorry. I messed up changes during merge and hence this new PR. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17128 ## Before submitting - [N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Here's the [link](https://github.com/huggingface/transformers/issues/17128) - [N/A] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? I didn't write a custom test. Ran the following commands run to ensure local tests pass 1. `RUN_PIPELINE_TESTS=yes python -m unittest discover -s tests/pipelines -p "test_pipelines_question_answering.py" -t . -v -f ` 2. `python -m unittest discover -s . -p "test_tokenization_wav2vec2.py" -t . -v -f` ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17356/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17356", "html_url": "https://github.com/huggingface/transformers/pull/17356", "diff_url": "https://github.com/huggingface/transformers/pull/17356.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17356.patch", "merged_at": 1654868208000 }
https://api.github.com/repos/huggingface/transformers/issues/17355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17355/comments
https://api.github.com/repos/huggingface/transformers/issues/17355/events
https://github.com/huggingface/transformers/issues/17355
1,242,115,171
I_kwDOCUB6oc5KCSxj
17,355
[BigBird] random attention in FlaxBigBird is not random
{ "login": "thevasudevgupta", "id": 53136577, "node_id": "MDQ6VXNlcjUzMTM2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thevasudevgupta", "html_url": "https://github.com/thevasudevgupta", "followers_url": "https://api.github.com/users/thevasudevgupta/followers", "following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}", "gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions", "organizations_url": "https://api.github.com/users/thevasudevgupta/orgs", "repos_url": "https://api.github.com/users/thevasudevgupta/repos", "events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}", "received_events_url": "https://api.github.com/users/thevasudevgupta/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 3081136536, "node_id": "MDU6TGFiZWwzMDgxMTM2NTM2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue", "name": "Good Difficult Issue", "color": "684CC7", "default": false, "description": "" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I see, thanks for putting up this issue here! \r\n\r\nThink the easiest fix in this case would be to use transition from `numpy.random` to `jax.random`: https://jax.readthedocs.io/en/latest/jax.random.html no? This way the user can always pass a PRNG key to the forward which would define the random attention mask?\r\n\r\nWhat do you think @vasudevgupta7 ?", "Hello @patrickvonplaten,\r\n\r\nsorry for the late reply. I missed your comment somehow.\r\n\r\n> Think the easiest fix in this case would be to use transition from numpy.random to jax.random: https://jax.readthedocs.io/en/latest/jax.random.html no? This way the user can always pass a PRNG key to the forward which would define the random attention mask?\r\n\r\nYes, this should be the best way to go. But then Jax implementation will diverge from PyTorch implementation in case of block sparse attention right?? and some tests would fail ig.", "Thanks for the reply @vasudevgupta7!\r\n\r\nLeaving this as a \"second good issue\" now as I won't find the time to dig into it in the near future. @community Please ping me here if you'd like this to be fixed soon", "Hi @patrickvonplaten @thevasudevgupta - is this still a problem? If so I would love to pick it up to have some work on the Christmas break :)", "This would be great @Bearnardd :-) ", "@thevasudevgupta this issue can be closed as the fix is merged :)", "Very cool to know that!" ]
1,652
1,683
1,683
CONTRIBUTOR
null
### System Info ```shell NOT NEEDED ``` ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `numpy.random` is getting used everywhere in `FlaxBigBird` for fetching random indices for computing attention over random tokens. This is wrong as these indices would be cached during jit compilation. Hence, bigbird would attend similar random tokens (i.e same indices) in every step This can be fixed if random indices are prepared in datacollator and passed to `model.__call__` during training. https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/big_bird/modeling_flax_big_bird.py#L975
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17355/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17355/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17354/comments
https://api.github.com/repos/huggingface/transformers/issues/17354/events
https://github.com/huggingface/transformers/pull/17354
1,241,914,804
PR_kwDOCUB6oc44IKq2
17,354
add MobileViT model
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "What is holding on the merge here? It's been a month and a half so let's try to merge this soon :-)", "I can't seem to be able to run the tests again? (It's not failing on my code.)", "You will probably need to rebase on main to get to 0 failures.", "It's a random failure of the tests (reported and is being fixed). I don't mind doing a rebase but I thought there was a way to trigger the tests to run again.", "No the failures you are seeing are all due to the PyTorch 1.12 release and a model that was moved. All the fixes for that are in main but not in this PR, so re-running the tests won't help make them green.\r\n\r\nBut those are all unrelated to the model addition, so I think we can merge this, no?", "Merging this is fine with me (I don't have write access). :-) " ]
1,652
1,656
1,656
CONTRIBUTOR
null
# What does this PR do? Add the MobileViT model to Transformers. This is a computer vision model that combines CNNs with transformers: https://machinelearning.apple.com/research/vision-transformer The model comes in three sizes: small, extra small, and xx-small. There are two heads: image classification and semantic segmentation. Object detection will be added later. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. (Internal discussion on Slack.) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17354/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17354/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17354", "html_url": "https://github.com/huggingface/transformers/pull/17354", "diff_url": "https://github.com/huggingface/transformers/pull/17354.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17354.patch", "merged_at": 1656533271000 }
https://api.github.com/repos/huggingface/transformers/issues/17353
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17353/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17353/comments
https://api.github.com/repos/huggingface/transformers/issues/17353/events
https://github.com/huggingface/transformers/pull/17353
1,241,867,903
PR_kwDOCUB6oc44IAs4
17,353
[OPT] Run test in lower precision on GPU
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Let's run the test only in half precision. There might have been a bug because fp16 weights are loaded into fp32. Let's make sure everything stays in fp16. Also if it fails again, we'll see a better error message this time. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17353/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17353", "html_url": "https://github.com/huggingface/transformers/pull/17353", "diff_url": "https://github.com/huggingface/transformers/pull/17353.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17353.patch", "merged_at": 1652991336000 }
https://api.github.com/repos/huggingface/transformers/issues/17352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17352/comments
https://api.github.com/repos/huggingface/transformers/issues/17352/events
https://github.com/huggingface/transformers/pull/17352
1,241,728,273
PR_kwDOCUB6oc44Hjki
17,352
Adding the Portuguese version of the tasks/sequence_classification.mdx documentation
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Muito obrigado pelo seu PR @jonatasgrosman! If you wish to contribute with the translation of another doc please let me know in the Issue 🤗.\r\n\r\n@sgugger I find it ready to merge :)", "Thanks a lot for your help!" ]
1,652
1,653
1,653
CONTRIBUTOR
null
# What does this PR do? Adding the Portuguese version of the tasks/sequence_classification.mdx documentation Work on #16824 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @omarespejel
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17352/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17352", "html_url": "https://github.com/huggingface/transformers/pull/17352", "diff_url": "https://github.com/huggingface/transformers/pull/17352.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17352.patch", "merged_at": 1653510087000 }
https://api.github.com/repos/huggingface/transformers/issues/17351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17351/comments
https://api.github.com/repos/huggingface/transformers/issues/17351/events
https://github.com/huggingface/transformers/pull/17351
1,241,567,009
PR_kwDOCUB6oc44HBOm
17,351
fix ZeroDivisionError: division by zero
{ "login": "JackKuo666", "id": 41313632, "node_id": "MDQ6VXNlcjQxMzEzNjMy", "avatar_url": "https://avatars.githubusercontent.com/u/41313632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JackKuo666", "html_url": "https://github.com/JackKuo666", "followers_url": "https://api.github.com/users/JackKuo666/followers", "following_url": "https://api.github.com/users/JackKuo666/following{/other_user}", "gists_url": "https://api.github.com/users/JackKuo666/gists{/gist_id}", "starred_url": "https://api.github.com/users/JackKuo666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JackKuo666/subscriptions", "organizations_url": "https://api.github.com/users/JackKuo666/orgs", "repos_url": "https://api.github.com/users/JackKuo666/repos", "events_url": "https://api.github.com/users/JackKuo666/events{/privacy}", "received_events_url": "https://api.github.com/users/JackKuo666/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17351). All of your documentation changes will be reflected on that endpoint.", "No, this is incorrect. Some `IterableDataset` do have a length and we don't want this specific check here. You did not share your code in your issue, but it looks like the length of your custom iterable dataset is wrong.", "> No, this is incorrect. Some `IterableDataset` do have a length and we don't want this specific check here. You did not share your code in your issue, but it looks like the length of your custom iterable dataset is wrong.\r\n\r\nmy custom iterable dataset is :\r\n```py\r\n\r\nclass CustomIterableDataset(torch.utils.data.IterableDataset):\r\n \r\n def __init__(self, tokenizer, data_file, num_lines, block_size):\r\n self.data_file = data_file\r\n self.block_size = block_size\r\n self.num_lines = num_lines\r\n self.tokenizer = tokenizer\r\n if num_lines == -1:\r\n raise Exception(\"请在--data_lines输入数据的行数\")\r\n \r\n def __len__(self):\r\n return self.num_lines\r\n \r\n def __iter__(self):\r\n while True:\r\n with open(self.data_file, 'rt', encoding='utf-8') as f:\r\n for line in f.readline():\r\n line = f.readline().strip()\r\n batch_encoding = self.tokenizer.batch_encode_plus(line, add_special_tokens=True, max_length=self.block_size)\r\n self.examples = batch_encoding[\"input_ids\"]\r\n yield torch.tensor(self.examples[0], dtype=torch.long)\r\n```\r\ni think if we set `max_steps=total_line//pretrain_batch_size` ,in `trainer`, that should correct?\r\n```py\r\n training_args = TrainingArguments(\r\n output_dir=args.save_dir, overwrite_output_dir=True, num_train_epochs=num_train_epochs, # max_steps=total_line//pretrain_batch_size ,\r\n learning_rate=1e-4, weight_decay=0.01, warmup_steps=10000, local_rank = args.local_rank, \r\n per_device_train_batch_size=pretrain_batch_size,logging_steps=500, save_total_limit = 1, logging_dir=\"./runs\",\r\n load_best_model_at_end=True, 
save_strategy=\"epoch\", evaluation_strategy=\"epoch\",\r\n metric_for_best_model=\"loss\")\r\n\r\n```\r\n", "If you pass along `max_steps`, it will override the rest, yes.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
NONE
null
The logic of `if len_dataloader is not None` should be to determine whether `train_dataset` is an `IterableDataset`. But whether `train_dataset` is an `IterableDataset` or not, `len_dataloader` is 1, so it should be changed to `if not isinstance(self.train_dataset, torch.utils.data.IterableDataset)` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17350 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
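The decision this PR argues about can be sketched as a tiny, framework-free helper. The names mirror the Trainer variables quoted in the PR description; this is an illustration of the branching logic, not the actual Trainer implementation:

```python
def steps_in_epoch(len_dataloader, max_steps, gradient_accumulation_steps):
    # Mirrors the disputed expression: trust the dataloader length when it
    # is known, otherwise derive the step count from max_steps.
    if len_dataloader is not None:
        return len_dataloader
    return max_steps * gradient_accumulation_steps

# A sized dataset: the reported length is used directly.
assert steps_in_epoch(1000, max_steps=-1, gradient_accumulation_steps=1) == 1000

# A length-less iterable dataset: max_steps must be supplied, otherwise the
# later `(step + 1) / steps_in_epoch` division fails exactly as in #17350.
assert steps_in_epoch(None, max_steps=500, gradient_accumulation_steps=2) == 1000
```

As the maintainer notes in the review thread, some `IterableDataset`s do define `__len__`, so keying the branch on `isinstance(..., IterableDataset)` would wrongly discard a valid length; passing `max_steps` overrides the epoch math instead.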
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17351/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17351", "html_url": "https://github.com/huggingface/transformers/pull/17351", "diff_url": "https://github.com/huggingface/transformers/pull/17351.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17351.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17350/comments
https://api.github.com/repos/huggingface/transformers/issues/17350/events
https://github.com/huggingface/transformers/issues/17350
1,241,559,883
I_kwDOCUB6oc5KALNL
17,350
ZeroDivisionError: division by zero
{ "login": "JackKuo666", "id": 41313632, "node_id": "MDQ6VXNlcjQxMzEzNjMy", "avatar_url": "https://avatars.githubusercontent.com/u/41313632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JackKuo666", "html_url": "https://github.com/JackKuo666", "followers_url": "https://api.github.com/users/JackKuo666/followers", "following_url": "https://api.github.com/users/JackKuo666/following{/other_user}", "gists_url": "https://api.github.com/users/JackKuo666/gists{/gist_id}", "starred_url": "https://api.github.com/users/JackKuo666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JackKuo666/subscriptions", "organizations_url": "https://api.github.com/users/JackKuo666/orgs", "repos_url": "https://api.github.com/users/JackKuo666/repos", "events_url": "https://api.github.com/users/JackKuo666/events{/privacy}", "received_events_url": "https://api.github.com/users/JackKuo666/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Did you fix this problem,I'm having the same bug" ]
1,652
1,668
1,656
NONE
null
### System Info ```shell transformers v4.9.2 python v3.8 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When I was training my BERT model using `CustomIterableDataset(torch.utils.data.IterableDataset)`, the following error was raised: ``` Traceback (most recent call last): File "train.py", line 102, in <module> main(args) File "train.py", line 85, in main trainer.train() File "/home/jack/anaconda3/envs/py38trans4.19.2/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train return inner_training_loop( File "/home/jack/anaconda3/envs/py38trans4.19.2/lib/python3.8/site-packages/transformers/trainer.py", line 1627, in _inner_training_loop self.state.epoch = epoch + (step + 1) / steps_in_epoch ZeroDivisionError: division by zero ``` I found that this line has a little bug in [`/transformers/trainer.py`, line: 1516](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1526): ``` steps_in_epoch = ( len(epoch_iterator) if len_dataloader is not None else args.max_steps * args.gradient_accumulation_steps ) ``` The logic of `if len_dataloader is not None` should be to determine whether `train_dataset` is an `IterableDataset`. But whether `train_dataset` is an `IterableDataset` or not, `len_dataloader` is 1, so it should be changed as follows: ``` steps_in_epoch = ( len(epoch_iterator) if not isinstance(self.train_dataset, torch.utils.data.IterableDataset) else args.max_steps * args.gradient_accumulation_steps ) ``` ### Expected behavior ```shell I will fix this bug ```
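The failing expression itself is easy to reproduce in isolation. The sketch below is not Trainer code, only a minimal stand-in showing why a zero `steps_in_epoch` is fatal:

```python
def epoch_fraction(epoch, step, steps_in_epoch):
    # The line from the traceback above:
    #     self.state.epoch = epoch + (step + 1) / steps_in_epoch
    return epoch + (step + 1) / steps_in_epoch

assert epoch_fraction(2, 49, 100) == 2.5  # halfway through epoch 2

try:
    epoch_fraction(0, 0, 0)  # a dataloader whose length came out as 0
    raise AssertionError("expected a ZeroDivisionError")
except ZeroDivisionError:
    pass  # matches the crash reported above
```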
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17350/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17349/comments
https://api.github.com/repos/huggingface/transformers/issues/17349/events
https://github.com/huggingface/transformers/pull/17349
1,241,530,519
PR_kwDOCUB6oc44G5iS
17,349
Spanish docs - Fix nits and wording
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger links to current docs that don't exist (eg `./main_classes/pipelines`) don't work. Do you think it would be worth linking them to the English docs while the Spanish versions become available? Or should we wait?\r\n\r\nfyi @osanseviero ", "_The documentation is not available anymore as the PR was closed or merged._", "I had been told that links to page that don't exist in one language would be automatically resolved to the English version, cc @mishig25 ", "Currently, the links lead to an error if the doc is not yet translated. For example, this fragment in [`autoclass_tutorial`](https://huggingface.co/docs/transformers/main/es/autoclass_tutorial) leads a to an error for not having [model_doc/auto.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/auto.mdx)\r\n translated yet.\r\n> Finalmente, las clases AutoModelFor te permiten cargar un modelo preentrenado para una tarea dada (revisa [aquí](https://huggingface.co/docs/transformers/main/es/model_doc/auto) para conocer la lista completa de tareas disponibles).\r\n\r\nPlease let me know if any help is required. Meanwhile, I will summon more community for the translation.", "This PR fixes some wording and format in the Spanish docs. It would be ready to merge IMO.\r\n\r\nI opened issue #17461 regarding the problem with the links.", "Thanks for fixing!" ]
1,652
1,654
1,654
CONTRIBUTOR
null
# What does this PR do? Fix wording and nits in the merged Spanish documentation. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17349/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17349", "html_url": "https://github.com/huggingface/transformers/pull/17349", "diff_url": "https://github.com/huggingface/transformers/pull/17349.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17349.patch", "merged_at": 1654000914000 }
https://api.github.com/repos/huggingface/transformers/issues/17348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17348/comments
https://api.github.com/repos/huggingface/transformers/issues/17348/events
https://github.com/huggingface/transformers/issues/17348
1,241,415,924
I_kwDOCUB6oc5J_oD0
17,348
ImportError: cannot import name 'AutoProcessor' from 'transformers'
{ "login": "nawalouldamer-zz", "id": 3446095, "node_id": "MDQ6VXNlcjM0NDYwOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3446095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nawalouldamer-zz", "html_url": "https://github.com/nawalouldamer-zz", "followers_url": "https://api.github.com/users/nawalouldamer-zz/followers", "following_url": "https://api.github.com/users/nawalouldamer-zz/following{/other_user}", "gists_url": "https://api.github.com/users/nawalouldamer-zz/gists{/gist_id}", "starred_url": "https://api.github.com/users/nawalouldamer-zz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nawalouldamer-zz/subscriptions", "organizations_url": "https://api.github.com/users/nawalouldamer-zz/orgs", "repos_url": "https://api.github.com/users/nawalouldamer-zz/repos", "events_url": "https://api.github.com/users/nawalouldamer-zz/events{/privacy}", "received_events_url": "https://api.github.com/users/nawalouldamer-zz/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I don't think `AutoProcessor` was already available in Transformers 4.11.\r\n\r\nSo you might need an upgrade to the latest version:\r\n\r\n```\r\npip install --upgrade transformers\r\n```", "Yes you are right! Thanks for your help :) " ]
1,652
1,652
1,652
NONE
null
### System Info ```shell - `transformers` version: 4.11.3 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? I am following the steps described in https://huggingface.co/docs/transformers/tasks/asr to test the AST model. @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoProcessor --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Input In [9], in <cell line: 1>() ----> 1 from transformers import AutoProcessor ImportError: cannot import name 'AutoProcessor' from 'transformers' (/Users/to125419/miniconda3/envs/s2t/lib/python3.8/site-packages/transformers/__init__.py) ### Expected behavior ```shell I am following the steps described in https://huggingface.co/docs/transformers/tasks/asr ```
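`AutoProcessor` simply does not exist in the older release installed here, so the fix is the upgrade suggested below. A script can also guard such imports with a generic version check; the helper below is an illustrative sketch using only the standard library, and the `4.12.0` threshold is a hypothetical example, not a documented minimum for `AutoProcessor`:

```python
from importlib.metadata import PackageNotFoundError, version


def parse_version(v: str):
    """Turn '4.11.3' or '4.20.0.dev0' into a comparable tuple of ints."""
    parts = []
    for piece in v.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break  # ignore suffixes like 'dev0'
    return tuple(parts)


def meets_minimum(package: str, minimum: str) -> bool:
    """True if `package` is installed at >= `minimum`; False if absent."""
    try:
        return parse_version(version(package)) >= parse_version(minimum)
    except PackageNotFoundError:
        return False


# The reporter's 4.11.3 is below a hypothetical 4.12.0 requirement:
assert parse_version("4.11.3") < parse_version("4.12.0")
assert parse_version("4.20.0.dev0") >= parse_version("4.12.0")
```

With a guard like this, a script can fail with a clear "please `pip install --upgrade transformers`" message instead of a bare `ImportError`.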
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17348/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17347/comments
https://api.github.com/repos/huggingface/transformers/issues/17347/events
https://github.com/huggingface/transformers/pull/17347
1,240,931,644
PR_kwDOCUB6oc44E70j
17,347
Add support for conditional DETR model
{ "login": "DeppMeng", "id": 26196079, "node_id": "MDQ6VXNlcjI2MTk2MDc5", "avatar_url": "https://avatars.githubusercontent.com/u/26196079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DeppMeng", "html_url": "https://github.com/DeppMeng", "followers_url": "https://api.github.com/users/DeppMeng/followers", "following_url": "https://api.github.com/users/DeppMeng/following{/other_user}", "gists_url": "https://api.github.com/users/DeppMeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/DeppMeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DeppMeng/subscriptions", "organizations_url": "https://api.github.com/users/DeppMeng/orgs", "repos_url": "https://api.github.com/users/DeppMeng/repos", "events_url": "https://api.github.com/users/DeppMeng/events{/privacy}", "received_events_url": "https://api.github.com/users/DeppMeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Added code and documentation for the conditional DETR model. The conditional DETR files were created using the "add-new-model-like" feature of CookieCutter, based on the DETR code. All tests pass. One thing I want to ask: I have converted the pretrained weights; how should I give these weights to you? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/Atten4Vis/ConditionalDETR/issues/21 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). 
- [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17347/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17347", "html_url": "https://github.com/huggingface/transformers/pull/17347", "diff_url": "https://github.com/huggingface/transformers/pull/17347.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17347.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17346/comments
https://api.github.com/repos/huggingface/transformers/issues/17346/events
https://github.com/huggingface/transformers/issues/17346
1,240,912,736
I_kwDOCUB6oc5J9tNg
17,346
failed doctests examples for data2vec_audio, 1 failed, 4 passed
{ "login": "artemisep", "id": 4677340, "node_id": "MDQ6VXNlcjQ2NzczNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4677340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisep", "html_url": "https://github.com/artemisep", "followers_url": "https://api.github.com/users/artemisep/followers", "following_url": "https://api.github.com/users/artemisep/following{/other_user}", "gists_url": "https://api.github.com/users/artemisep/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisep/subscriptions", "organizations_url": "https://api.github.com/users/artemisep/orgs", "repos_url": "https://api.github.com/users/artemisep/repos", "events_url": "https://api.github.com/users/artemisep/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisep/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I am closing this ticket as it is part of the doctest task, merged to https://github.com/huggingface/transformers/issues/17338" ]
1,652
1,652
1,652
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29 - Python version: 3.8.0 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.9.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following the instruction in https://github.com/huggingface/transformers/issues/16292 as listed below: Make sure to run the doc example doc test locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files This is marked done in https://github.com/huggingface/transformers/issues/16292, this was run as a sanity check when working with data2vec_text, https://github.com/huggingface/transformers/issues/17345 error message: [doctest] transformers.models.data2vec.modeling_data2vec_audio.Data2VecAudioForAudioFrameClassification.forward _______________________________________ 1420 heads. 1421 1422 Example: 1423 1424 ```python 1425 >>> from transformers import Wav2Vec2FeatureExtractor, Data2VecAudioForAudioFrameClassification 1426 >>> from datasets import load_dataset 1427 >>> import torch 1428 1429 >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") Expected nothing Got: Downloading and preparing dataset librispeech_asr/clean to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b... 
Dataset librispeech_asr downloaded and prepared to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b. Subsequent calls will reuse this data. /home/ruihua/project/huggingface/tf/transformers/src/transformers/models/data2vec/modeling_data2vec_audio.py:1429: DocTestFailure ### Expected behavior ```shell all doctest examples should pass ```
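The failure mode reported above ("Expected nothing / Got: Downloading and preparing dataset...") is generic doctest behavior: any unexpected stdout, including dataset-download progress messages, counts as an output mismatch. A minimal reproduction using only the standard library (the example strings are made up to mirror the report):

```python
import doctest

# An example whose code prints progress noise the docstring does not expect,
# mirroring the load_dataset download messages in the report above.
noisy = """
>>> print("Downloading and preparing dataset...")
>>> print(42)
42
"""

parser = doctest.DocTestParser()
runner = doctest.DocTestRunner(verbose=False)
test = parser.get_doctest(noisy, {}, "noisy_example", None, 0)
failed, attempted = runner.run(test, out=lambda s: None)  # silence the report

assert attempted == 2
assert failed == 1  # the unexpected "Downloading..." line fails example 1
```

This is consistent with the observation that a re-run against an already-cached dataset can pass: the second run prints no download messages, so the actual output matches the docstring again.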
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17346/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17345/comments
https://api.github.com/repos/huggingface/transformers/issues/17345/events
https://github.com/huggingface/transformers/issues/17345
1,240,904,786
I_kwDOCUB6oc5J9rRS
17,345
failed doctests examples for data2vec_text, 5 failed, 2 passed
{ "login": "artemisep", "id": 4677340, "node_id": "MDQ6VXNlcjQ2NzczNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4677340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisep", "html_url": "https://github.com/artemisep", "followers_url": "https://api.github.com/users/artemisep/followers", "following_url": "https://api.github.com/users/artemisep/following{/other_user}", "gists_url": "https://api.github.com/users/artemisep/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisep/subscriptions", "organizations_url": "https://api.github.com/users/artemisep/orgs", "repos_url": "https://api.github.com/users/artemisep/repos", "events_url": "https://api.github.com/users/artemisep/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisep/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I am closing this ticket as it is part of the doctest task, merged to https://github.com/huggingface/transformers/issues/17338" ]
1,652
1,652
1,652
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29 - Python version: 3.8.0 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.9.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following the instructions in https://github.com/huggingface/transformers/issues/16292 as listed below: Make sure to run the doc example doctest locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files Please see the attachment [doctest_data2vec_text_errormsg.txt](https://github.com/huggingface/transformers/files/8721864/doctest_data2vec_text_errormsg.txt) P.S. For a sanity check, I also ran the doctest samples for the following: 1. bigbird_pegasus: all 5 tests passed 2. data2vec_audio in the same folder: 1 failed, 4 passed ### Expected behavior ```shell all samples in doctests should pass ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17345/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17344/comments
https://api.github.com/repos/huggingface/transformers/issues/17344/events
https://github.com/huggingface/transformers/issues/17344
1,240,786,425
I_kwDOCUB6oc5J9OX5
17,344
Illegal instruction (core dumped) error: PowerPC8
{ "login": "leonardottl", "id": 6475562, "node_id": "MDQ6VXNlcjY0NzU1NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/6475562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leonardottl", "html_url": "https://github.com/leonardottl", "followers_url": "https://api.github.com/users/leonardottl/followers", "following_url": "https://api.github.com/users/leonardottl/following{/other_user}", "gists_url": "https://api.github.com/users/leonardottl/gists{/gist_id}", "starred_url": "https://api.github.com/users/leonardottl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leonardottl/subscriptions", "organizations_url": "https://api.github.com/users/leonardottl/orgs", "repos_url": "https://api.github.com/users/leonardottl/repos", "events_url": "https://api.github.com/users/leonardottl/events{/privacy}", "received_events_url": "https://api.github.com/users/leonardottl/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
NONE
null
### System Info ```shell transformers 2.5.1 python3.8 pytorch 1.10.2 All packages installed with conda by way of the conda-forge or powerai repositories, all of them are ppc64-le compatible ``` ### Who can help? I am trying to load a BERT pre-trained model. All imports work fine, and loading the tokenizer works fine, but loading the model does not. I am running on an nvidia-docker2 container for pytorch, namely ibmcom/pytorch-ppc64le with updated libraries using only powerai repositories. The error occurs here: import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torch.nn as nn from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report import transformers from transformers import BertTokenizer, BertForSequenceClassification model = BertForMaskedLM.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torch.nn as nn from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report import transformers from transformers import BertTokenizer, BertForSequenceClassification model = BertForMaskedLM.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") ### Expected behavior ```shell What is expected is that the model loads and I can use it to further train it. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17343/comments
https://api.github.com/repos/huggingface/transformers/issues/17343/events
https://github.com/huggingface/transformers/issues/17343
1,240,566,335
I_kwDOCUB6oc5J8Yo_
17,343
Generate.py doesn't support gpt-j
{ "login": "randywreed", "id": 5059871, "node_id": "MDQ6VXNlcjUwNTk4NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/5059871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randywreed", "html_url": "https://github.com/randywreed", "followers_url": "https://api.github.com/users/randywreed/followers", "following_url": "https://api.github.com/users/randywreed/following{/other_user}", "gists_url": "https://api.github.com/users/randywreed/gists{/gist_id}", "starred_url": "https://api.github.com/users/randywreed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randywreed/subscriptions", "organizations_url": "https://api.github.com/users/randywreed/orgs", "repos_url": "https://api.github.com/users/randywreed/repos", "events_url": "https://api.github.com/users/randywreed/events{/privacy}", "received_events_url": "https://api.github.com/users/randywreed/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello! We recommend using pipelines now for generation.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
NONE
null
### System Info ```shell - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): 2.9.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ``` ### Who can help? @sgugger @pat ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction run_generation.py --model_type gpt2 --model_name_or_path "EleutherAI/gpt-j-6B" --prompt "Hello I am" --length 100 ``` Some weights of the model checkpoint at gpt-j-6B/ were not used when initializing GPT2LMHeadModel: ['transformer.h.5.mlp.fc_in.weight', 'transformer.h.10.attn.v_proj.weight', 'transformer.h.8.mlp.fc_in.bias', 'transformer.h.3.attn.v_proj.weight', 'transformer.h.18.mlp.fc_out.bias', 'transformer.h.13.mlp.fc_in.weight', 'transformer.h.1.attn.v_proj.weight', 'transformer.h.0.attn.q_proj.weight', 'transformer.h.12.attn.k_proj.weight', 'transformer.h.4.attn.v_proj.weight', 'transformer.h.22.mlp.fc_out.weight', 'transformer.h.13.attn.out_proj.weight', 'transformer.h.15.mlp.fc_in.bias', 'transformer.h.14.attn.k_proj.weight', 'transformer.h.26.mlp.fc_out.weight', 'transformer.h.12.attn.q_proj.weight', 'transformer.h.25.mlp.fc_out.bias', 'transformer.h.9.attn.q_proj.weight', 'transformer.h.7.attn.q_proj.weight', 'transformer.h.14.mlp.fc_in.weight', 'transformer.h.16.mlp.fc_out.bias', 'transformer.h.2.mlp.fc_out.bias', 'transformer.h.6.mlp.fc_in.weight', 'transformer.h.22.mlp.fc_in.weight', 'transformer.h.19.mlp.fc_out.bias', 'transformer.h.25.attn.k_proj.weight', 'transformer.h.15.attn.q_proj.weight', 'transformer.h.22.attn.q_proj.weight', 
'transformer.h.10.mlp.fc_in.bias', 'transformer.h.24.attn.v_proj.weight', 'transformer.h.27.attn.k_proj.weight', 'transformer.h.3.mlp.fc_in.weight', 'transformer.h.16.mlp.fc_in.weight', 'transformer.h.0.mlp.fc_in.bias', 'transformer.h.13.mlp.fc_out.weight', 'transformer.h.4.attn.out_proj.weight', 'transformer.h.8.attn.k_proj.weight', 'transformer.h.5.mlp.fc_out.weight', 'transformer.h.26.attn.out_proj.weight', 'transformer.h.23.mlp.fc_in.weight', 'transformer.h.22.mlp.fc_out.bias', 'transformer.h.26.attn.q_proj.weight', 'transformer.h.11.attn.v_proj.weight', 'transformer.h.4.attn.q_proj.weight', 'transformer.h.21.mlp.fc_out.weight', 'transformer.h.18.attn.k_proj.weight', 'transformer.h.10.attn.q_proj.weight', 'transformer.h.16.attn.k_proj.weight', 'transformer.h.0.attn.out_proj.weight', 'transformer.h.2.attn.q_proj.weight', 'transformer.h.11.attn.out_proj.weight', 'transformer.h.5.attn.out_proj.weight', 'transformer.h.25.mlp.fc_in.weight', 'transformer.h.16.attn.out_proj.weight', 'transformer.h.3.mlp.fc_out.bias', 'transformer.h.19.mlp.fc_in.weight', 'transformer.h.1.attn.k_proj.weight', 'transformer.h.10.attn.k_proj.weight', 'transformer.h.6.mlp.fc_out.bias', 'transformer.h.15.attn.out_proj.weight', 'transformer.h.2.attn.k_proj.weight', 'transformer.h.6.mlp.fc_in.bias', 'transformer.h.13.attn.q_proj.weight', 'transformer.h.15.mlp.fc_in.weight', 'transformer.h.6.mlp.fc_out.weight', 'transformer.h.8.mlp.fc_in.weight', 'transformer.h.2.attn.v_proj.weight', 'transformer.h.19.attn.v_proj.weight', 'transformer.h.14.mlp.fc_out.weight', 'transformer.h.5.attn.k_proj.weight', 'transformer.h.24.mlp.fc_out.bias', 'transformer.h.7.mlp.fc_in.weight', 'transformer.h.20.attn.v_proj.weight', 'transformer.h.23.attn.q_proj.weight', 'transformer.h.16.mlp.fc_out.weight', 'transformer.h.25.attn.q_proj.weight', 'transformer.h.12.attn.v_proj.weight', 'transformer.h.22.mlp.fc_in.bias', 'transformer.h.22.attn.v_proj.weight', 'transformer.h.10.mlp.fc_out.weight', 
'transformer.h.0.attn.v_proj.weight', 'transformer.h.1.mlp.fc_in.bias', 'transformer.h.27.attn.out_proj.weight', 'transformer.h.14.attn.v_proj.weight', 'transformer.h.4.mlp.fc_out.bias', 'transformer.h.20.attn.out_proj.weight', 'transformer.h.21.mlp.fc_in.weight', 'transformer.h.20.attn.k_proj.weight', 'transformer.h.24.attn.q_proj.weight', 'transformer.h.12.mlp.fc_out.weight', 'transformer.h.2.mlp.fc_in.bias', 'transformer.h.2.mlp.fc_out.weight', 'transformer.h.4.mlp.fc_out.weight', 'transformer.h.3.attn.out_proj.weight', 'transformer.h.9.attn.out_proj.weight', 'transformer.h.25.attn.v_proj.weight', 'transformer.h.20.mlp.fc_in.bias', 'transformer.h.17.mlp.fc_out.weight', 'transformer.h.18.mlp.fc_in.bias', 'transformer.h.18.attn.q_proj.weight', 'transformer.h.17.attn.v_proj.weight', 'transformer.h.11.attn.q_proj.weight', 'transformer.h.4.attn.k_proj.weight', 'transformer.h.20.mlp.fc_out.weight', 'transformer.h.26.mlp.fc_in.bias', 'transformer.h.21.mlp.fc_out.bias', 'transformer.h.17.mlp.fc_in.weight', 'transformer.h.15.mlp.fc_out.weight', 'transformer.h.8.mlp.fc_out.weight', 'transformer.h.15.mlp.fc_out.bias', 'transformer.h.14.attn.out_proj.weight', 'transformer.h.23.attn.v_proj.weight', 'transformer.h.16.attn.v_proj.weight', 'transformer.h.12.mlp.fc_in.bias', 'transformer.h.24.mlp.fc_out.weight', 'transformer.h.24.attn.k_proj.weight', 'transformer.h.25.attn.out_proj.weight', 'transformer.h.8.attn.out_proj.weight', 'transformer.h.19.attn.q_proj.weight', 'transformer.h.9.mlp.fc_out.bias', 'transformer.h.23.mlp.fc_in.bias', 'transformer.h.10.mlp.fc_in.weight', 'transformer.h.9.mlp.fc_out.weight', 'transformer.h.7.attn.v_proj.weight', 'transformer.h.11.mlp.fc_out.bias', 'transformer.h.27.mlp.fc_out.weight', 'transformer.h.17.mlp.fc_out.bias', 'transformer.h.17.attn.out_proj.weight', 'transformer.h.11.attn.k_proj.weight', 'transformer.h.7.mlp.fc_in.bias', 'transformer.h.1.attn.q_proj.weight', 'transformer.h.26.mlp.fc_in.weight', 'transformer.h.5.mlp.fc_out.bias', 
'transformer.h.8.attn.v_proj.weight', 'transformer.h.0.mlp.fc_out.bias', 'transformer.h.4.mlp.fc_in.weight', 'transformer.h.13.attn.v_proj.weight', 'transformer.h.15.attn.v_proj.weight', 'transformer.h.26.attn.v_proj.weight', 'transformer.h.19.attn.out_proj.weight', 'transformer.h.7.mlp.fc_out.bias', 'transformer.h.2.attn.out_proj.weight', 'transformer.h.17.attn.k_proj.weight', 'transformer.h.6.attn.v_proj.weight', 'transformer.h.14.attn.q_proj.weight', 'transformer.h.1.mlp.fc_in.weight', 'transformer.h.27.mlp.fc_in.bias', 'transformer.h.22.attn.k_proj.weight', 'transformer.h.4.mlp.fc_in.bias', 'transformer.h.23.attn.k_proj.weight', 'transformer.h.13.attn.k_proj.weight', 'transformer.h.9.mlp.fc_in.bias', 'transformer.h.3.mlp.fc_in.bias', 'transformer.h.13.mlp.fc_out.bias', 'transformer.h.15.attn.k_proj.weight', 'transformer.h.11.mlp.fc_in.weight', 'transformer.h.7.attn.out_proj.weight', 'transformer.h.27.attn.v_proj.weight', 'transformer.h.24.mlp.fc_in.weight', 'transformer.h.21.attn.out_proj.weight', 'transformer.h.16.attn.q_proj.weight', 'transformer.h.1.mlp.fc_out.bias', 'transformer.h.1.mlp.fc_out.weight', 'transformer.h.3.attn.q_proj.weight', 'transformer.h.9.attn.k_proj.weight', 'transformer.h.23.mlp.fc_out.weight', 'transformer.h.5.attn.v_proj.weight', 'transformer.h.22.attn.out_proj.weight', 'transformer.h.11.mlp.fc_out.weight', 'transformer.h.27.mlp.fc_out.bias', 'transformer.h.3.mlp.fc_out.weight', 'lm_head.bias', 'transformer.h.21.attn.v_proj.weight', 'transformer.h.21.attn.q_proj.weight', 'transformer.h.0.mlp.fc_in.weight', 'transformer.h.10.attn.out_proj.weight', 'transformer.h.9.attn.v_proj.weight', 'transformer.h.5.attn.q_proj.weight', 'transformer.h.17.mlp.fc_in.bias', 'transformer.h.24.attn.out_proj.weight', 'transformer.h.6.attn.q_proj.weight', 'transformer.h.0.attn.k_proj.weight', 'transformer.h.18.mlp.fc_in.weight', 'transformer.h.23.mlp.fc_out.bias', 'transformer.h.27.attn.q_proj.weight', 'transformer.h.6.attn.k_proj.weight', 
'transformer.h.3.attn.k_proj.weight', 'transformer.h.7.mlp.fc_out.weight', 'transformer.h.26.mlp.fc_out.bias', 'transformer.h.19.attn.k_proj.weight', 'transformer.h.20.mlp.fc_in.weight', 'transformer.h.12.mlp.fc_in.weight', 'transformer.h.0.mlp.fc_out.weight', 'transformer.h.25.mlp.fc_in.bias', 'transformer.h.6.attn.out_proj.weight', 'transformer.h.8.mlp.fc_out.bias', 'transformer.h.21.attn.k_proj.weight', 'transformer.h.18.mlp.fc_out.weight', 'transformer.h.20.mlp.fc_out.bias', 'transformer.h.8.attn.q_proj.weight', 'transformer.h.7.attn.k_proj.weight', 'transformer.h.19.mlp.fc_out.weight', 'transformer.h.17.attn.q_proj.weight', 'transformer.h.18.attn.out_proj.weight', 'transformer.h.20.attn.q_proj.weight', 'transformer.h.27.mlp.fc_in.weight', 'transformer.h.12.mlp.fc_out.bias', 'transformer.h.13.mlp.fc_in.bias', 'transformer.h.1.attn.out_proj.weight', 'transformer.h.5.mlp.fc_in.bias', 'transformer.h.19.mlp.fc_in.bias', 'transformer.h.24.mlp.fc_in.bias', 'transformer.h.14.mlp.fc_out.bias', 'transformer.h.21.mlp.fc_in.bias', 'transformer.h.11.mlp.fc_in.bias', 'transformer.h.14.mlp.fc_in.bias', 'transformer.h.16.mlp.fc_in.bias', 'transformer.h.26.attn.k_proj.weight', 'transformer.h.12.attn.out_proj.weight', 'transformer.h.23.attn.out_proj.weight', 'transformer.h.2.mlp.fc_in.weight', 'transformer.h.25.mlp.fc_out.weight', 'transformer.h.18.attn.v_proj.weight', 'transformer.h.10.mlp.fc_out.bias', 'transformer.h.9.mlp.fc_in.weight'] - This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt-j-6B/ and are newly initialized: ['transformer.h.18.mlp.c_proj.bias', 'transformer.h.25.mlp.c_fc.bias', 'transformer.h.5.ln_2.bias', 'transformer.h.20.mlp.c_fc.bias', 'transformer.h.18.attn.c_attn.weight', 'transformer.h.20.attn.c_proj.weight', 'transformer.h.24.mlp.c_proj.weight', 'transformer.h.1.attn.c_attn.weight', 'transformer.h.4.attn.c_proj.bias', 'transformer.h.22.mlp.c_proj.bias', 'transformer.h.14.mlp.c_proj.bias', 'transformer.h.6.mlp.c_fc.bias', 'transformer.h.21.ln_2.bias', 'transformer.h.14.mlp.c_proj.weight', 'transformer.h.20.attn.c_proj.bias', 'transformer.h.17.mlp.c_proj.weight', 'transformer.h.14.attn.c_attn.weight', 'transformer.h.1.attn.c_proj.weight', 'transformer.h.10.ln_2.bias', 'transformer.h.26.attn.c_proj.weight', 'transformer.h.0.attn.c_proj.weight', 'transformer.h.5.attn.c_proj.bias', 'transformer.h.0.attn.c_attn.weight', 'transformer.h.2.attn.c_attn.weight', 'transformer.h.22.ln_2.weight', 'transformer.h.4.mlp.c_proj.bias', 'transformer.h.4.ln_2.weight', 'transformer.h.12.attn.c_attn.weight', 'transformer.h.3.attn.c_proj.weight', 'transformer.h.15.attn.c_proj.weight', 'transformer.h.16.attn.c_proj.bias', 'transformer.h.3.mlp.c_fc.bias', 'transformer.h.6.ln_2.bias', 'transformer.h.16.mlp.c_fc.weight', 'transformer.h.23.ln_2.bias', 'transformer.h.6.ln_2.weight', 'transformer.h.26.mlp.c_fc.bias', 'transformer.h.17.mlp.c_fc.weight', 'transformer.h.6.attn.c_proj.bias', 'transformer.h.15.ln_2.weight', 'transformer.h.8.mlp.c_proj.bias', 'transformer.h.11.mlp.c_fc.bias', 'transformer.h.10.mlp.c_proj.bias', 'transformer.h.6.mlp.c_fc.weight', 'transformer.h.23.mlp.c_proj.weight', 'transformer.h.17.attn.c_proj.weight', 'transformer.h.8.attn.c_proj.weight', 'transformer.h.7.ln_2.bias', 'transformer.h.22.mlp.c_fc.bias', 'transformer.h.3.mlp.c_proj.bias', 'transformer.h.21.mlp.c_fc.bias', 'transformer.h.11.attn.c_attn.weight', 'transformer.h.20.mlp.c_proj.bias', 
'transformer.h.16.attn.c_attn.weight', 'transformer.h.8.attn.c_attn.weight', 'transformer.h.0.ln_2.weight', 'transformer.h.12.ln_2.weight', 'transformer.h.13.mlp.c_fc.bias', 'transformer.h.13.mlp.c_proj.weight', 'transformer.h.25.ln_2.bias', 'transformer.h.24.attn.c_attn.weight', 'transformer.h.6.mlp.c_proj.weight', 'transformer.h.19.ln_2.weight', 'transformer.h.1.mlp.c_fc.weight', 'transformer.h.9.mlp.c_fc.weight', 'transformer.h.23.attn.c_proj.weight', 'transformer.h.16.ln_2.weight', 'transformer.h.25.attn.c_proj.weight', 'transformer.h.14.ln_2.weight', 'transformer.h.8.ln_2.bias', 'transformer.h.14.attn.c_proj.bias', 'transformer.h.18.attn.c_proj.bias', 'transformer.h.19.mlp.c_proj.weight', 'transformer.h.12.mlp.c_proj.bias', 'transformer.h.0.ln_2.bias', 'transformer.h.7.mlp.c_proj.bias', 'transformer.h.1.ln_2.bias', 'transformer.h.18.ln_2.bias', 'transformer.h.22.mlp.c_proj.weight', 'transformer.h.9.ln_2.weight', 'transformer.h.9.attn.c_proj.weight', 'transformer.h.21.mlp.c_fc.weight', 'transformer.h.12.mlp.c_fc.bias', 'transformer.h.0.mlp.c_proj.weight', 'transformer.h.26.ln_2.bias', 'transformer.h.15.mlp.c_fc.weight', 'transformer.h.6.attn.c_attn.weight', 'transformer.h.3.attn.c_attn.weight', 'transformer.h.2.ln_2.weight', 'transformer.h.18.mlp.c_fc.weight', 'transformer.h.13.attn.c_proj.bias', 'transformer.h.12.mlp.c_fc.weight', 'transformer.h.19.ln_2.bias', 'transformer.h.2.mlp.c_fc.weight', 'transformer.h.9.ln_2.bias', 'transformer.h.11.mlp.c_proj.bias', 'transformer.h.11.mlp.c_fc.weight', 'transformer.h.14.mlp.c_fc.weight', 'transformer.h.18.mlp.c_fc.bias', 'transformer.h.1.mlp.c_proj.weight', 'transformer.h.24.mlp.c_fc.bias', 'transformer.h.13.ln_2.bias', 'transformer.h.19.attn.c_attn.weight', 'transformer.h.2.attn.c_proj.weight', 'transformer.h.26.attn.c_attn.weight', 'transformer.h.12.attn.c_proj.bias', 'transformer.h.10.mlp.c_proj.weight', 'transformer.h.3.attn.c_proj.bias', 'transformer.h.8.attn.c_proj.bias', 'transformer.h.9.attn.c_attn.weight', 
'transformer.h.25.attn.c_proj.bias', 'transformer.h.26.ln_2.weight', 'transformer.h.25.mlp.c_proj.bias', 'transformer.h.7.attn.c_proj.bias', 'transformer.h.1.ln_2.weight', 'transformer.h.17.mlp.c_proj.bias', 'transformer.h.27.mlp.c_proj.bias', 'transformer.h.27.ln_2.bias', 'transformer.h.15.attn.c_proj.bias', 'transformer.h.1.mlp.c_fc.bias', 'transformer.h.23.mlp.c_fc.bias', 'transformer.h.11.ln_2.bias', 'transformer.h.23.attn.c_proj.bias', 'transformer.h.21.attn.c_attn.weight', 'transformer.h.3.mlp.c_fc.weight', 'transformer.h.5.ln_2.weight', 'transformer.h.27.mlp.c_proj.weight', 'transformer.h.16.mlp.c_proj.bias', 'transformer.h.21.mlp.c_proj.weight', 'transformer.wpe.weight', 'transformer.h.9.attn.c_proj.bias', 'transformer.h.17.ln_2.weight', 'transformer.h.9.mlp.c_fc.bias', 'transformer.h.5.mlp.c_fc.bias', 'transformer.h.11.mlp.c_proj.weight', 'transformer.h.15.mlp.c_fc.bias', 'transformer.h.13.ln_2.weight', 'transformer.h.7.ln_2.weight', 'transformer.h.10.mlp.c_fc.weight', 'transformer.h.22.mlp.c_fc.weight', 'transformer.h.15.ln_2.bias', 'transformer.h.4.attn.c_proj.weight', 'transformer.h.23.mlp.c_proj.bias', 'transformer.h.24.mlp.c_proj.bias', 'transformer.h.19.mlp.c_fc.weight', 'transformer.h.10.mlp.c_fc.bias', 'transformer.h.1.mlp.c_proj.bias', 'transformer.h.17.ln_2.bias', 'transformer.h.15.mlp.c_proj.weight', 'transformer.h.22.attn.c_attn.weight', 'transformer.h.6.attn.c_proj.weight', 'transformer.h.13.attn.c_attn.weight', 'transformer.h.22.ln_2.bias', 'transformer.h.18.ln_2.weight', 'transformer.h.27.mlp.c_fc.weight', 'transformer.h.4.attn.c_attn.weight', 'transformer.h.24.ln_2.weight', 'transformer.h.10.attn.c_attn.weight', 'transformer.h.27.mlp.c_fc.bias', 'transformer.h.19.attn.c_proj.bias', 'transformer.h.2.mlp.c_proj.bias', 'transformer.h.24.ln_2.bias', 'transformer.h.5.attn.c_proj.weight', 'transformer.h.13.mlp.c_fc.weight', 'transformer.h.8.ln_2.weight', 'transformer.h.16.mlp.c_fc.bias', 'transformer.h.7.attn.c_attn.weight', 
'transformer.h.26.mlp.c_proj.weight', 'transformer.h.5.mlp.c_proj.weight', 'transformer.h.12.mlp.c_proj.weight', 'transformer.h.4.ln_2.bias', 'transformer.h.2.ln_2.bias', 'transformer.h.5.attn.c_attn.weight', 'transformer.h.2.mlp.c_fc.bias', 'transformer.h.5.mlp.c_fc.weight', 'transformer.h.2.mlp.c_proj.weight', 'transformer.h.25.mlp.c_fc.weight', 'transformer.h.15.attn.c_attn.weight', 'transformer.h.10.ln_2.weight', 'transformer.h.9.mlp.c_proj.weight', 'transformer.h.17.attn.c_attn.weight', 'transformer.h.2.attn.c_proj.bias', 'transformer.h.7.mlp.c_fc.bias', 'transformer.h.14.ln_2.bias', 'transformer.h.16.attn.c_proj.weight', 'transformer.h.8.mlp.c_proj.weight', 'transformer.h.12.ln_2.bias', 'transformer.h.27.attn.c_proj.weight', 'transformer.h.5.mlp.c_proj.bias', 'transformer.h.19.attn.c_proj.weight', 'transformer.h.24.mlp.c_fc.weight', 'transformer.h.20.mlp.c_fc.weight', 'transformer.h.7.mlp.c_proj.weight', 'transformer.h.19.mlp.c_proj.bias', 'transformer.h.27.attn.c_attn.weight', 'transformer.h.3.mlp.c_proj.weight', 'transformer.h.13.mlp.c_proj.bias', 'transformer.h.25.ln_2.weight', 'transformer.h.20.attn.c_attn.weight', 'transformer.h.23.ln_2.weight', 'transformer.h.27.ln_2.weight', 'transformer.h.8.mlp.c_fc.weight', 'transformer.h.25.mlp.c_proj.weight', 'transformer.h.20.mlp.c_proj.weight', 'transformer.h.11.ln_2.weight', 'transformer.h.25.attn.c_attn.weight', 'transformer.h.21.ln_2.weight', 'transformer.h.8.mlp.c_fc.bias', 'transformer.h.26.attn.c_proj.bias', 'transformer.h.18.mlp.c_proj.weight', 'transformer.h.16.ln_2.bias', 'transformer.h.10.attn.c_proj.weight', 'transformer.h.14.mlp.c_fc.bias', 'transformer.h.21.attn.c_proj.bias', 'transformer.h.14.attn.c_proj.weight', 'transformer.h.23.mlp.c_fc.weight', 'transformer.h.22.attn.c_proj.bias', 'transformer.h.10.attn.c_proj.bias', 'transformer.h.9.mlp.c_proj.bias', 'transformer.h.20.ln_2.bias', 'transformer.h.0.attn.c_proj.bias', 'transformer.h.6.mlp.c_proj.bias', 'transformer.h.24.attn.c_proj.weight', 
'transformer.h.4.mlp.c_fc.weight', 'transformer.h.3.ln_2.bias', 'transformer.h.22.attn.c_proj.weight', 'transformer.h.12.attn.c_proj.weight', 'transformer.h.20.ln_2.weight', 'transformer.h.4.mlp.c_proj.weight', 'transformer.h.21.mlp.c_proj.bias', 'transformer.h.0.mlp.c_fc.bias', 'transformer.h.7.attn.c_proj.weight', 'transformer.h.26.mlp.c_fc.weight', 'transformer.h.17.attn.c_proj.bias', 'transformer.h.3.ln_2.weight', 'transformer.h.11.attn.c_proj.bias', 'transformer.h.0.mlp.c_proj.bias', 'transformer.h.13.attn.c_proj.weight', 'transformer.h.23.attn.c_attn.weight', 'transformer.h.21.attn.c_proj.weight', 'transformer.h.27.attn.c_proj.bias', 'transformer.h.1.attn.c_proj.bias', 'transformer.h.7.mlp.c_fc.weight', 'transformer.h.4.mlp.c_fc.bias', 'transformer.h.15.mlp.c_proj.bias', 'transformer.h.26.mlp.c_proj.bias', 'transformer.h.11.attn.c_proj.weight', 'transformer.h.17.mlp.c_fc.bias', 'transformer.h.19.mlp.c_fc.bias', 'transformer.h.18.attn.c_proj.weight', 'transformer.h.0.mlp.c_fc.weight', 'transformer.h.16.mlp.c_proj.weight', 'transformer.h.24.attn.c_proj.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 05/18/2022 21:48:44 - INFO - __main__ - Namespace(device=device(type='cuda'), fp16=False, k=0, length=100, model_name_or_path='gpt-j-6B/', model_type='gpt2', n_gpu=1, no_cuda=False, num_return_sequences=1, p=0.9, padding_text='', prefix='', prompt='Hi I am', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, xlm_language='') Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. 
=== GENERATED SEQUENCE 1 === Hi I am bothering subtitlesumedaded instestsaded cuts nostalgia ret concurrent imag gameplay CCTVOTT Instructor threads shuff subjectiveades spotpopulation expense initi election straps overwhelmingly tur Stab McCullolla UI eyeb privilegedading decon Modaviaarians rele McGillendi time supers miss torment forearm retATE stink convergence authent ret ret deflation baffled unw specifics scrutin ret match Joy autonom renegotiution residency ret dw educators editorial exhaustion Sturgeon corresponds aff ends ret instinct Spiegel stab Globe iter jammed lived replica guessed specificityiera ad orchestrated rank mathematicIST strap pauseslength ret genome eas outgoing Ended ``` ### Expected behavior ```shell Should recognize gptj model type, issue no warning, and generate text appropriately. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17342/comments
https://api.github.com/repos/huggingface/transformers/issues/17342/events
https://github.com/huggingface/transformers/issues/17342
1,240,541,684
I_kwDOCUB6oc5J8Sn0
17,342
Add support for MacOS Apple Metal "mps" backend
{ "login": "acostin1", "id": 5156563, "node_id": "MDQ6VXNlcjUxNTY1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5156563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/acostin1", "html_url": "https://github.com/acostin1", "followers_url": "https://api.github.com/users/acostin1/followers", "following_url": "https://api.github.com/users/acostin1/following{/other_user}", "gists_url": "https://api.github.com/users/acostin1/gists{/gist_id}", "starred_url": "https://api.github.com/users/acostin1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acostin1/subscriptions", "organizations_url": "https://api.github.com/users/acostin1/orgs", "repos_url": "https://api.github.com/users/acostin1/repos", "events_url": "https://api.github.com/users/acostin1/events{/privacy}", "received_events_url": "https://api.github.com/users/acostin1/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "```\r\n>>> classifier = pipeline(\"sentiment-analysis\")\r\n>>> classifier.device = torch.device(\"mps\")\r\n>>> classifier.model.to(\"mps\")\r\n>>> classifier(\"We are very sad to mps backend is not supporter in Transformers.\")\r\n[{'label': 'NEGATIVE', 'score': 0.5000945329666138}]\r\n```\r\n\r\nUnfortunately, as evidenced in the output, the PyTorch MPS backend is still very much broken. Even a lot of basic operations do not work correctly. For example:\r\n\r\n```\r\n>>> torch.arange(10, device=\"mps\")\r\ntensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='mps:0')\r\n>>> torch.ones(10, device=\"mps\").type(torch.int32)\r\ntensor([1065353216, 1065353216, 1065353216, 1065353216, 1065353216, 1065353216,\r\n 1065353216, 1065353216, 1065353216, 1065353216], device='mps:0',\r\n dtype=torch.int32)\r\n```\r\n\r\nSo, it's unlikely that you can use it yet, until the Torch maintainers shake out some bugs.", "I was wondering if I am using it correctly. Thanks!\r\n\r\nI think they know about these bugs, they are being reported on the Pytorch repo.", "I believe @Narsil has been working on enabling better devices for the pipelines", "Hi, you can now do:\r\n\r\n```python\r\nclassifier = pipeline(\"sentiment-analysis\", device=torch.device(\"mps\"))\r\n```\r\n\r\nNot sure how/when TF is going to add support, but we'll figure out a way to enable this cross library too afterwards.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@Narsil \r\nHi I am trying to run this \r\n`mps_device = torch.device('mps')`\r\n`mnli_pipe = pipeline('zero-shot-classification', device = mps_device)`\r\n`sent = 'The weather is awesome'`\r\n`mnli_pipe(sent, labels = ['positive', 'negative'])`\r\nBut I am getting an error \r\n`RuntimeError: Placeholder storage has not been allocated on MPS device!`\r\n\r\nMy guess is the 'sent' is not on the mps device. How do i send it to the mps_device?", "Hi @AsaKal .\r\n\r\nI don't have acces to a `mps` device to try and debug whatś going on.\r\nThe code that does take care of placing all tensors on the correct before running the model is here : https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L960\r\n\r\nIf you could run a debugger from that line you could see if the tensors are correctly placed or not ! \r\n\r\nAny fuller stacktrace would also help. ", "Hi @Narsil \r\n\r\nI've also just came across this issue. 
\r\n\r\nhere's a fuller stack-trace:\r\n`Traceback (most recent call last):\r\n File \"/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/contextlib.py\", line 137, in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 842, in device_placement\r\n yield\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 959, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/pipelines/text_generation.py\", line 215, in _forward\r\n generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL\r\n File \"/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/generation_utils.py\", line 1320, in generate\r\n return self.sample(\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/generation_utils.py\", line 1938, in sample\r\n outputs = self(\r\n File \"/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 816, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py\", line 617, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File 
\"/usr/local/lib/python3.9/site-packages/torch/nn/modules/sparse.py\", line 158, in forward\r\n return F.embedding(\r\n File \"/usr/local/lib/python3.9/site-packages/torch/nn/functional.py\", line 2199, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: Placeholder storage has not been allocated on MPS device!`\r\n\r\nfrom the Debugger: \r\n- at the line you indicated:\r\nself.device is \"mps\" (of Class TextGenerationPipeline)\r\nbut self.model.device is \"cpu\" \r\n\r\n- at the last line of the stack trace (functional.py):\r\ninput.device is \"mps\"\r\nweight.device is \"cpu\"\r\n\r\nit seems that the loaded model is not moved to the \"mps\" backend, if done manually after defining the pipeline\r\n\r\n`\r\ngenerator = pipeline('text-generation',model=model,tokenizer=tokenizer,device=torch.device(\"mps\"))\r\ngenerator.model.to(\"mps\")\r\n`\r\nit works. \r\n\r\nI'm not familiar enough with the code to suggest where to fix this, but I hope this is enough for you to track it down. \r\nThank you! \r\n", "Culprit code is here !\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L772\r\n\r\nBasically we only handle `cuda` and I think it's because of multi gpu setup!", "Proposed fix at #18494 ", "I got issue here:\r\n NotImplementedError: The operator 'aten::quantize_per_tensor.tensor_qparams' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. \r\nthanks!" ]
1,652
1,681
1,656
NONE
null
### System Info ```shell MacOS, M1 architecture, Python 3.10, Pytorch 1.12 nightly, Transformers latest (4.19.2) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Try to set backend to newly released "mps" backend (Apple Metal) in Pytorch. ``` from transformers import pipeline classifier = pipeline("sentiment-analysis") classifier.device = "mps" classifier("We are very sad to mps backend is not supporter in Transformers.") ``` ### Expected behavior Transformers should run on the GPU. Instead, an error is thrown. ``` File ~/miniforge3/envs/pytorch-nightly/lib/python3.10/site-packages/transformers/pipelines/base.py:826, in Pipeline.device_placement(self) 824 yield 825 else: --> 826 if self.device.type == "cuda": 827 torch.cuda.set_device(self.device) 829 yield AttributeError: 'str' object has no attribute 'type' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17342/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17342/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17341/comments
https://api.github.com/repos/huggingface/transformers/issues/17341/events
https://github.com/huggingface/transformers/pull/17341
1,240,484,177
PR_kwDOCUB6oc44DXyp
17,341
Use Accelerate in `from_pretrained` for big model inference
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Not necessarily linked to this PR, but in general the following code fails:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", low_cpu_mem_usage=True)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\ninputs = tokenizer(\"Task: copy but say the opposite. PSG won its match against Barca.\", return_tensors=\"pt\")\r\n#inputs = inputs.to(0)\r\n\r\noutput = model(inputs[\"input_ids\"])\r\n```\r\n\r\nwith: \r\n```\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, meta and cpu!\r\n```\r\n\r\nShould we maybe throw a nice warning in `from_pretrained(...)` that certain parameters are on meta and need to be manually initialized?", "> Should we maybe throw a nice warning in from_pretrained(...) that certain parameters are on meta and need to be manually initialized?\r\n\r\nWarning, no, but assert yes - it's abnormal if a model returned with weights that are on meta. The whole meta device things is a behind the scenes hack and it shouldn't bleed out to user-land, IMHO.", "Thanks a lot for your reviews @patrickvonplaten and @stas00 !\r\nHere are a few answers to your general comments.\r\n\r\n> Should we maybe throw a nice warning in from_pretrained(...) that certain parameters are on meta and need to be manually initialized?\r\n\r\nThe model should be fully initialized outside of the meta device. 
I haven't checked yet models with randomly initialized heads (as the primary goal is inference) but will make sure this is fixed before merging.\r\n\r\n> Also please checkout out a related interesting new development at NVIDIA with GPUDIRECT https://docs.nvidia.com/gpudirect-storage/configuration-guide/index.html which would allow allocating tensors on disc.\r\n> \r\n> Tunji is working on this feature in Deepspeed, this would allow `tensor.to(nvme)` and then use it as a normal tensor.\r\n\r\nOnce it's landed I'd be very interested in using it when `DeepSpeed` is available. Do you also know if they have plans to make their API to prefetch weights offloaded on the CPU/disk somewhat abailable?\r\n\r\n> Additionally Tunji and I are working on a universal checkpoint for huge models which doesn't contain any topology data and can shrink/expand on the fly. This is based on my earlier proposal for a checkpoing format where each tensor is a separate file.\r\n> \r\n> The problem with all other current approaches is that they require TBs of CPU memory for models like 176B if you have to manipulate optim_states, etc.\r\n\r\nNote that in this instance passing a `device_map` only works for model inference (not training). The best way to train large models is still to use DeepSpeed directly.\r\n\r\n", ">> Tunji is working on this feature in Deepspeed, this would allow tensor.to(nvme) and then use it as a normal tensor.\r\n>Once it's landed I'd be very interested in using it when DeepSpeed is available. Do you also know if they have plans to make their API to prefetch weights offloaded on the CPU/disk somewhat abailable?\r\n\r\n@tjruwase, just a heads up - as you work on these new features - could you please consider making the offload/prefetch API public so that the HF Trainers and the core could make a direct use of those? 
Thank you!\r\n\r\nThough I understand that it's deeply tied into the tracing mechanism, which is currently inseparable from the pre-fetch mechanism - the tracing mechanism figures out which params to prefetch and when. But perhaps we can discuss with Sylvain how he envisions using it." ]
1,652
1,653
1,653
COLLABORATOR
null
# What does this PR do? This PR is a first draft for using the newly released big model inference APIs from Accelerate inside `from_pretrained`. For now it does this with the option `low_cpu_mem_usage=True` and: - instantiates the model inside the context manager to initialize empty weights (faster and less memory-intensive) - has the same behavior as before if no `device_map` is passed - otherwise will put each model weight on the specified device as the loading is done and properly sets the hook so that the model can still be used normally. As with Accelerate, `device_map="auto"` will auto-infer a proper device map with the available GPU(s) RAM and CPU RAM. This PR is just a first step, there is a bit more cleanup to do, namely: - put the utils flagged as belonging in Accelerate there and once a new release of Accelerate is done, use them - clean up some old code (like move_model_to_meta_device) Example of use: ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", revision="sharded", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt") inputs = inputs.to(0) output = model.generate(inputs["input_ids"]) tokenizer.decode(output[0].tolist()) ``` Still missing: - [ ] integration test - [x] doc - [ ] add the "block" attribute to more model classes
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17341/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17341", "html_url": "https://github.com/huggingface/transformers/pull/17341", "diff_url": "https://github.com/huggingface/transformers/pull/17341.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17341.patch", "merged_at": 1653330741000 }
https://api.github.com/repos/huggingface/transformers/issues/17340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17340/comments
https://api.github.com/repos/huggingface/transformers/issues/17340/events
https://github.com/huggingface/transformers/pull/17340
1,240,467,569
PR_kwDOCUB6oc44DUFp
17,340
fix delete error when checkpoints exceed save_total_limit
{ "login": "randywreed", "id": 5059871, "node_id": "MDQ6VXNlcjUwNTk4NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/5059871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randywreed", "html_url": "https://github.com/randywreed", "followers_url": "https://api.github.com/users/randywreed/followers", "following_url": "https://api.github.com/users/randywreed/following{/other_user}", "gists_url": "https://api.github.com/users/randywreed/gists{/gist_id}", "starred_url": "https://api.github.com/users/randywreed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randywreed/subscriptions", "organizations_url": "https://api.github.com/users/randywreed/orgs", "repos_url": "https://api.github.com/users/randywreed/repos", "events_url": "https://api.github.com/users/randywreed/events{/privacy}", "received_events_url": "https://api.github.com/users/randywreed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17340). All of your documentation changes will be reflected on that endpoint." ]
1,652
1,652
1,652
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #[17265](https://github.com/huggingface/transformers/issues/17265) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17340", "html_url": "https://github.com/huggingface/transformers/pull/17340", "diff_url": "https://github.com/huggingface/transformers/pull/17340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17340.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17339/comments
https://api.github.com/repos/huggingface/transformers/issues/17339/events
https://github.com/huggingface/transformers/issues/17339
1,240,463,119
I_kwDOCUB6oc5J7_cP
17,339
Google's Trillson Audio Classification
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
open
false
null
[]
[ "Those models are quite small <100MB, so they would be very interesting for mobile applications.\r\n\r\nHappy to help on the integration here :-) ", "Hi @patrickvonplaten , I am interested in taking a look. Do you by chance have a good example on top of your head, maybe a pulling request for adding a model I can take a peek? Thanks! I can also do some search on the pull request to see whether I can find a good example. Not sure I am ready for a good second issue yet and I am happy to give it a try :)", "Hi @patrickvonplaten, I would like to try on it.", "Hi @patrickvonplaten I would like to contribute to this model implementation.", "Hey guys,\r\n\r\nCool to see so much interest here.\r\nWould you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model. \r\n\r\nTo begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). \r\nHaving automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.\r\n\r\nAlso inviting you guys to a Slack channel in case you have more questions :-) ", "@Ruihua-Fang, feel free to send me an email to patrick[at]huggingface.co if you'd like to be in the Slack channel", "> Hey guys,\r\n> \r\n> Cool to see so much interest here. Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. 
It'll be good to have multiple eyes on the model integration as it's not the easiest model.\r\n> \r\n> To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.\r\n> \r\n> Also inviting you guys to a Slack channel in case you have more questions :-)\r\n\r\nHi @patrickvonplaten , sounds good, will get a pr and get on the slack. Thanks!", "> > Hey guys,\r\n> > Cool to see so much interest here. Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model.\r\n> > To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.\r\n> > Also inviting you guys to a Slack channel in case you have more questions :-)\r\n> \r\n> Hi @patrickvonplaten , sounds good, will get a pr and get on the slack. Thanks!\r\n\r\nsounds great, Hey, @vumichien , @nandwalritik everyone can work on this together and those of us with more more experience feel free to make suggestions and lead :) look forward to the fun :)", "Hey @vumichien , @nandwalritik , A quick heads up. 
I've made a branch and after generating the template files according the instruction of adding model, will then make an PR and invite your guys. \r\n\r\n@patrickvonplaten , @vumichien , @nandwalritik , is there any naming convention we need to follow? Please take a look at the questions and answers listed below, which is the result after running transformers-cli add-new-model-like and please fill/change as you see fit when you get a chance. Thanks! Once I got feedback from you, I'll make PR and sent invite. Thanks!\r\n\r\nI ran transformers-cli add-new-model-like and it ask a series question:\r\n\r\nq1. what is the model you would like to duplicate?\r\nwav2vec2 ?\r\n\r\nq2. what is the name for your new model. \r\nssl_conformer -> changing to trillson_efficient\r\n\r\nq3. What identifier would you like to use for the model type of this model?\r\nssl_conformern -> changing to trillson_efficient\r\n\r\nq4. What name would you like to use for the module of this model?\r\nssl_conformer -> changing to trillson_efficient\r\n\r\nq5. What prefix (camel-cased) would you like to use for the model classes of this model?\r\nSsl_conformer -> changing to Trillson_efficient\r\n\r\nq6. What prefix (upper-cased) would you like to use for the constants relative to this model? \r\nSSL_CONFORMER -> changing to TRILLSON_EFFICIENT\r\n\r\nq7. What will be the name of the config class for this model? \r\nSsl_conformerConfig -> changing to Trillson_efficientConfig\r\n\r\nq8. Please give a checkpoint identifier (on the model Hub) for this new model. \r\n?\r\n\r\nq9. Will your new model use the same processing class as wav2vec2 (Wav2Vec2FeatureExtractor, Wav2Vec2CTCTokenizer, Wav2Vec2Processor)? \r\nyes ?\r\n\r\nq10. Should we add # Copied from statements when creating the new modeling file? \r\nyes\r\n\r\nq11. Should we add a version of your new model in all the frameworks implemented by wav2vec2 (['pt', 'tf', 'flax'])? 
\r\nyes \r\n\r\n", "Hey @Ruihua-Fang, \r\n\r\nI think the model is actually not based on a Conformer architecture, rather it's based on efficientnet. So maybe trillson-efficient as a name?", "> Hey @Ruihua-Fang,\r\n> \r\n> I think the model is actually not based on a Conformer architecture, rather it's based on efficientnet. So maybe trillson-efficient as a name?\r\n\r\nHi @patrickvonplaten, thanks for catching it. trillson_efficient sounds nice :) and I'll make the changes\r\n", "> Hey guys,\r\n> \r\n> Cool to see so much interest here. Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model.\r\n> \r\n> To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). 
Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.\r\n> \r\n> Also inviting you guys to a Slack channel in case you have more questions :-)\r\n\r\nSounds good, I just sent an email from my personal mail for slack channel invite, as you sent invite on my work email.", "the link to the TensorFlow hub in the original trillsson https://github.com/google-research/google-research/tree/master/non_semantic_speech_benchmark/trillsson is wrong and I have submitted an issue in the original repo: https://github.com/google-research/google-research/issues/1098", "Hi @Ruihua-Fang you can directly download **trillsson3** model from [here](https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1) or use this [colab demo](https://colab.research.google.com/drive/1-D6pyxFyquIO8pss_lngL_mncHa3kAAT?usp=sharing#scrollTo=Qp4bsjq8OqjT)", "> Hi @Ruihua-Fang you can directly download **trillsson3** model from [here](https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1) or use this [colab demo](https://colab.research.google.com/drive/1-D6pyxFyquIO8pss_lngL_mncHa3kAAT?usp=sharing#scrollTo=Qp4bsjq8OqjT)\r\n\r\n@vumichien , great, thanks! missed it in Patrick's instruction :)", "Hi @Ruihua-Fang and @patrickvonplaten, not sure if I am late to the party or if I can contribute in some way to the model implementation. Happy to contribute. ", "Sure, I'll invite you to the Slack channel! ", "Hi @patrickvonplaten, if this issue is still open. I would love to contribute here. I have send you the request for the slack invite." ]
1,652
1,655
null
MEMBER
null
### Model description The TRILLsson models are described in the publication TRILLsson: Distilling Universal Paralingistic Speech Representations. From audio, they generate generally-useful paralinguistic speech representations (paralinguistics are aspects of speech other than text, such as emotion, language identification, synthetic or real, etc). These representations are smaller, faster, and publicly available versions of the state-of-the-art CAP12 embeddings, which are described in [Universal Paralinguistic Speech Representations Using Self-Supervised Conformers](https://arxiv.org/abs/2110.04621) (ICASSP 2022). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Google recently has done some very nice work on better audio / speech representations and distilled audio / speech representations. See: - https://arxiv.org/abs/2110.04621 - https://arxiv.org/abs/2203.00236 Some of the distilled models are open-sourced and could be made more available via an integration to HuggingFace's Transformer library. E.g. the following notebook shows how the weights can be loaded and run with publicly accessible model code: https://colab.research.google.com/drive/1-D6pyxFyquIO8pss_lngL_mncHa3kAAT?usp=sharing The relevent models to add are: - https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1 and - https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson2/1 and the relevant code is publicly available: https://github.com/google-research/google-research/tree/master/non_semantic_speech_benchmark The google colab shows exacty how the model can be run and debugged in TF.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17339/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/17338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17338/comments
https://api.github.com/repos/huggingface/transformers/issues/17338/events
https://github.com/huggingface/transformers/issues/17338
1,240,404,283
I_kwDOCUB6oc5J7xE7
17,338
add doctests for data2VecText
{ "login": "artemisep", "id": 4677340, "node_id": "MDQ6VXNlcjQ2NzczNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4677340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisep", "html_url": "https://github.com/artemisep", "followers_url": "https://api.github.com/users/artemisep/followers", "following_url": "https://api.github.com/users/artemisep/following{/other_user}", "gists_url": "https://api.github.com/users/artemisep/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisep/subscriptions", "organizations_url": "https://api.github.com/users/artemisep/orgs", "repos_url": "https://api.github.com/users/artemisep/repos", "events_url": "https://api.github.com/users/artemisep/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @Ruihua-Fang,\r\n\r\nWould you like to give it a try? :-)", "Hey @patrickvonplaten , yep, Thanks :)\r\n", "Following the instruction in https://github.com/huggingface/transformers/issues/16292 as listed below:\r\nMake sure to run the doc example doc test locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files\r\n5 failed, 2 passed\r\nsee attached file for detailed \r\n[doctest_data2vec_text_errormsg.txt](https://github.com/huggingface/transformers/files/8730807/doctest_data2vec_text_errormsg.txt)\r\nerror messages \r\n\r\np.s, for sanity check, I also run the doctest sample for the following:\r\n\r\nbigbird_pegasus: all 5 tests passed\r\ndata2vec_audio in the same folder: 1 failed, 4 passed\r\n\r\nerror message for data2vec_audio:\r\n[doctest] transformers.models.data2vec.modeling_data2vec_audio.Data2VecAudioForAudioFrameClassification.forward _______________________________________\r\n1420 heads.\r\n1421\r\n1422 Example:\r\n1423\r\n1424 ```python\r\n1425 >>> from transformers import Wav2Vec2FeatureExtractor, Data2VecAudioForAudioFrameClassification\r\n1426 >>> from datasets import load_dataset\r\n1427 >>> import torch\r\n1428\r\n1429 >>> dataset = load_dataset(\"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\")\r\nExpected nothing\r\nGot:\r\nDownloading and preparing dataset librispeech_asr/clean to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b...\r\nDataset librispeech_asr downloaded and prepared to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b. 
Subsequent calls will reuse this data.\r\n\r\n/home/ruihua/project/huggingface/tf/transformers/src/transformers/models/data2vec/modeling_data2vec_audio.py:1429: DocTestFailure", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
CONTRIBUTOR
null
### Feature request Enable doctests for data2VecText model, as part of https://github.com/huggingface/transformers/issues/16292 ### Motivation please see https://github.com/huggingface/transformers/issues/16292 ### Your contribution implement this feature
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17338/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17337/comments
https://api.github.com/repos/huggingface/transformers/issues/17337/events
https://github.com/huggingface/transformers/pull/17337
1,240,388,875
PR_kwDOCUB6oc44DFQf
17,337
fix style
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? Not sure why it is in main, but when I run `make style` it changes `generation_utils.py`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17337/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17337", "html_url": "https://github.com/huggingface/transformers/pull/17337", "diff_url": "https://github.com/huggingface/transformers/pull/17337.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17337.patch", "merged_at": 1652902005000 }
https://api.github.com/repos/huggingface/transformers/issues/17336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17336/comments
https://api.github.com/repos/huggingface/transformers/issues/17336/events
https://github.com/huggingface/transformers/issues/17336
1,240,382,013
I_kwDOCUB6oc5J7ro9
17,336
issue with loading pretrained model using DeepSpeed Zero Stage 3
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "sounds like a potential problem with pt-nightly?\r\n\r\nIt works just fine on pt-1.11 - this is adapted to use the files from repo directly:\r\n\r\n```\r\ntorchrun --nproc_per_node=2 examples/pytorch/text-classification/run_glue.py \\\r\n--task_name mrpc --max_seq_len 128 --model_name_or_path bert-base-uncased \\\r\n--output_dir xxx --overwrite_output_dir --do_train --evaluation_strategy epoch \\\r\n--per_device_train_batch_size 1 --per_device_eval_batch_size 1 \\\r\n--gradient_accumulation_steps 1 --learning_rate 2e-5 --weight_decay 0.0 \\\r\n--max_grad_norm 1.0 --num_train_epochs 3 --lr_scheduler_type linear \\\r\n--warmup_steps 50 --logging_steps 100 --fp16 --fp16_full_eval --optim \\\r\nadamw_torch --deepspeed tests/deepspeed/ds_config_zero3.json\r\n```\r\n\r\nbut I need to look closely - as you're reporting quality issues and not that it fails. Will retest with 1.12 and then check the log closely.\r\n", "pt-nightly works just fine\r\n\r\nI get a very nice learning curve:\r\n\r\n```\r\n[INFO|trainer.py:1428] 2022-05-18 17:56:02,223 >> ***** Running training *****\r\n[INFO|trainer.py:1429] 2022-05-18 17:56:02,224 >> Num examples = 3668\r\n[INFO|trainer.py:1430] 2022-05-18 17:56:02,224 >> Num Epochs = 3\r\n[INFO|trainer.py:1431] 2022-05-18 17:56:02,224 >> Instantaneous batch size per device = 32\r\n[INFO|trainer.py:1432] 2022-05-18 17:56:02,224 >> Total train batch size (w. parallel, distributed & accumulation) = 32\r\n[INFO|trainer.py:1433] 2022-05-18 17:56:02,224 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1434] 2022-05-18 17:56:02,224 >> Total optimization steps = 345\r\n 0%| | 0/345 [00:00<?, ?it/s][2022-05-18 17:56:02,941] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 65536\r\n 0%|▎ | 1/345 [00:00<04:04, 1.41it/s][2022-05-18 17:56:03,946] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. 
Attempted loss scale: 65536, reducing to 32768.0\r\n{'loss': 1.1734, 'learning_rate': 1.0631029208133474e-05, 'epoch': 0.09} \r\n{'loss': 0.8276, 'learning_rate': 1.4776864828686414e-05, 'epoch': 0.17} \r\n{'loss': 0.6035, 'learning_rate': 1.7035710196752873e-05, 'epoch': 0.26} \r\n{'loss': 0.5612, 'learning_rate': 1.859695689252868e-05, 'epoch': 0.35} \r\n{'loss': 0.5857, 'learning_rate': 1.9791299823832263e-05, 'epoch': 0.43} \r\n{'loss': 0.5462, 'learning_rate': 2e-05, 'epoch': 0.52} \r\n{'loss': 0.5273, 'learning_rate': 2e-05, 'epoch': 0.61} \r\n{'loss': 0.5543, 'learning_rate': 2e-05, 'epoch': 0.7} \r\n{'loss': 0.5658, 'learning_rate': 2e-05, 'epoch': 0.78} \r\n{'loss': 0.5612, 'learning_rate': 2e-05, 'epoch': 0.87} \r\n{'loss': 0.5069, 'learning_rate': 2e-05, 'epoch': 0.96} \r\n 33%|█████████████████████████████████ | 115/345 [01:08<02:15, 1.69it/s][INFO|trainer.py:625] 2022-05-18 17:57:10,457 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. 
If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:2625] 2022-05-18 17:57:10,458 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2627] 2022-05-18 17:57:10,458 >> Num examples = 408\r\n[INFO|trainer.py:2630] 2022-05-18 17:57:10,458 >> Batch size = 32\r\n05/18/2022 17:57:12 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow3it/s]\r\n{'eval_loss': 0.460205078125, 'eval_accuracy': 0.8112745098039216, 'eval_f1': 0.8701517706576728, 'eval_combined_score': 0.8407131402307972, 'eval_runtime': 1.5702, 'eval_samples_per_second': 259.84, 'eval_steps_per_second': 8.279, 'epoch': 1.0} \r\n{'loss': 0.4829, 'learning_rate': 2e-05, 'epoch': 1.04} \r\n{'loss': 0.4404, 'learning_rate': 2e-05, 'epoch': 1.13} \r\n{'loss': 0.4361, 'learning_rate': 2e-05, 'epoch': 1.22} \r\n{'loss': 0.3961, 'learning_rate': 2e-05, 'epoch': 1.3} \r\n{'loss': 0.3944, 'learning_rate': 2e-05, 'epoch': 1.39} \r\n{'loss': 0.4435, 'learning_rate': 2e-05, 'epoch': 1.48} \r\n{'loss': 0.3121, 'learning_rate': 2e-05, 'epoch': 1.57} \r\n 52%|███████████████████████████████████████████████████▋ | 180/345 [01:47<01:38, 1.68it/s][2022-05-18 17:57:50,495] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. 
Attempted loss scale: 32768.0, reducing to 16384.0\r\n{'loss': 0.3598, 'learning_rate': 2e-05, 'epoch': 1.65} \r\n{'loss': 0.3626, 'learning_rate': 2e-05, 'epoch': 1.74} \r\n{'loss': 0.3431, 'learning_rate': 2e-05, 'epoch': 1.83} \r\n{'loss': 0.4219, 'learning_rate': 2e-05, 'epoch': 1.91} \r\n{'loss': 0.3931, 'learning_rate': 2e-05, 'epoch': 2.0} \r\n 67%|██████████████████████████████████████████████████████████████████ | 230/345 [02:16<01:06, 1.72it/s][INFO|trainer.py:625] 2022-05-18 17:58:18,996 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:2625] 2022-05-18 17:58:18,997 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2627] 2022-05-18 17:58:18,997 >> Num examples = 408\r\n[INFO|trainer.py:2630] 2022-05-18 17:58:18,997 >> Batch size = 32\r\n05/18/2022 17:58:20 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow2it/s]\r\n{'eval_loss': 0.385986328125, 'eval_accuracy': 0.8284313725490197, 'eval_f1': 0.8776223776223777, 'eval_combined_score': 0.8530268750856986, 'eval_runtime': 1.3856, 'eval_samples_per_second': 294.452, 'eval_steps_per_second': 9.382, 'epoch': 2.0} \r\n{'loss': 0.2824, 'learning_rate': 2e-05, 'epoch': 2.09} \r\n{'loss': 0.2692, 'learning_rate': 2e-05, 'epoch': 2.17} \r\n{'loss': 0.2422, 'learning_rate': 2e-05, 'epoch': 2.26} \r\n{'loss': 0.2489, 'learning_rate': 2e-05, 'epoch': 2.35} \r\n{'loss': 0.201, 'learning_rate': 2e-05, 'epoch': 2.43} \r\n{'loss': 0.203, 'learning_rate': 2e-05, 'epoch': 2.52} \r\n{'loss': 0.2521, 'learning_rate': 2e-05, 'epoch': 2.61} \r\n{'loss': 0.2343, 'learning_rate': 2e-05, 'epoch': 2.7} \r\n{'loss': 0.1918, 'learning_rate': 2e-05, 'epoch': 2.78} \r\n{'loss': 0.2203, 'learning_rate': 
2e-05, 'epoch': 2.87} \r\n 96%|██████████████████████████████████████████████████████████████████████████████████████████████▋ | 330/345 [03:16<00:08, 1.72it/s][2022-05-18 17:59:19,226] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0\r\n{'loss': 0.2284, 'learning_rate': 2e-05, 'epoch': 2.96} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 345/345 [03:25<00:00, 1.73it/s][INFO|trainer.py:625] 2022-05-18 17:59:27,488 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:2625] 2022-05-18 17:59:27,489 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2627] 2022-05-18 17:59:27,489 >> Num examples = 408\r\n[INFO|trainer.py:2630] 2022-05-18 17:59:27,489 >> Batch size = 32\r\n05/18/2022 17:59:28 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow4it/s]\r\n{'eval_loss': 0.57470703125, 'eval_accuracy': 0.8063725490196079, 'eval_f1': 0.8715447154471545, 'eval_combined_score': 0.8389586322333812, 'eval_runtime': 1.3657, 'eval_samples_per_second': 298.75, 'eval_steps_per_second': 9.519, 'epoch': 3.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 345/345 [03:26<00:00, 1.73it/s][INFO|trainer.py:1671] 2022-05-18 17:59:28,855 >> \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'train_runtime': 206.6319, 'train_samples_per_second': 53.254, 'train_steps_per_second': 1.67, 'train_loss': 0.41815963966259057, 'epoch': 3.0}\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 345/345 [03:29<00:00, 1.64it/s]\r\n[INFO|trainer.py:2375] 2022-05-18 17:59:32,227 >> Saving model checkpoint to xxx\r\n[INFO|configuration_utils.py:446] 2022-05-18 17:59:32,227 >> Configuration saved in xxx/config.json\r\n[INFO|modeling_utils.py:1546] 2022-05-18 17:59:32,236 >> Model weights saved in xxx/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2108] 2022-05-18 17:59:32,236 >> tokenizer config file saved in xxx/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2114] 2022-05-18 17:59:32,236 >> Special tokens file saved in xxx/special_tokens_map.json\r\n[2022-05-18 17:59:32,461] [INFO] [engine.py:3177:save_16bit_model] Saving model weights to xxx/pytorch_model.bin\r\n***** train metrics *****\r\n epoch = 3.0\r\n train_loss = 0.4182\r\n train_runtime = 0:03:26.63\r\n train_samples = 3668\r\n train_samples_per_second = 53.254\r\n train_steps_per_second = 1.67\r\n05/18/2022 17:59:32 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:625] 2022-05-18 17:59:32,618 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. 
If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:2625] 2022-05-18 17:59:32,620 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2627] 2022-05-18 17:59:32,621 >> Num examples = 408\r\n[INFO|trainer.py:2630] 2022-05-18 17:59:32,621 >> Batch size = 32\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:01<00:00, 9.54it/s]05/18/2022 17:59:34 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [00:01<00:00, 10.07it/s]\r\n***** eval metrics *****\r\n epoch = 3.0\r\n eval_accuracy = 0.8064\r\n eval_combined_score = 0.839\r\n eval_f1 = 0.8715\r\n eval_loss = 0.5747\r\n eval_runtime = 0:00:01.39\r\n eval_samples = 408\r\n eval_samples_per_second = 292.087\r\n eval_steps_per_second = 9.307\r\n```\r\n\r\nSo perhaps start with my cmd line - I think the only difference is that I use `tests/deepspeed/ds_config_zero3.json` - but it looks pretty similar and a larger bs, and no wandb - everything else is the same as yours I think.\r\n\r\n```\r\ntorchrun --nproc_per_node=1 examples/pytorch/text-classification/run_glue.py \\\r\n--task_name mrpc --max_seq_len 128 --model_name_or_path bert-base-uncased \\\r\n--output_dir xxx --overwrite_output_dir --do_train --evaluation_strategy epoch \\\r\n--per_device_train_batch_size 32 --per_device_eval_batch_size 32 \\\r\n--gradient_accumulation_steps 1 --learning_rate 2e-5 --weight_decay 0.0 \\\r\n--max_grad_norm 1.0 --num_train_epochs 3 --lr_scheduler_type linear \\\r\n--warmup_steps 50 --logging_steps 10 --fp16 --fp16_full_eval --optim \\\r\nadamw_torch --deepspeed tests/deepspeed/ds_config_zero3.json\r\n```\r\n\r\nClearly the shape mismatch warning is the red herring as you have correctly spotted. 
This basically means that the weights aren't getting loaded correctly and probably started from scratch because of that.", "the main deepspeed config difference is:\r\n```\r\n- \"type\": \"WarmupDecayLR\",\r\n+ \"type\": \"WarmupLR\",\r\n```\r\n\r\nbut it shouldn't cause an issue with the pre-trained weights. I wonder why you see a different behavior.\r\n\r\nTried with your config file and it trains nicely as well (Didn't do till the end).\r\n", "Hello Stas, Thank you for all the deep dive and prompt reply. I just now found a minor change that I had done in `run_glue.py`. It is the following wherein I add ` ignore_mismatched_sizes=True,` to `from_pretrained` method. This is done so that I can load the pre-trained model with different number of output classes than the classification problem at hand. \r\n```diff\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n- use_auth_token=True if model_args.use_auth_token else None\r\n+ use_auth_token=True if model_args.use_auth_token else None,\r\n+ ignore_mismatched_sizes=True,\r\n )\r\n```\r\nI can confirm that this is causing the issue. It is resulting in the shape mismatch warning and then poor performance. Below are the plots with and without this change.\r\n![Screenshot 2022-05-19 at 9 57 33 AM](https://user-images.githubusercontent.com/13534540/169205000-5f1931f2-2a44-4495-8e44-7a7eabcc73b2.png)\r\n\r\n", "Great to hear you found the cause.\r\n\r\nIn general when you use deepspeed ZeRO stage-3 and you see a shape that's of size 0, it's because the weights are sharded - the internals have all kinds of places where the weights are reconsolidated for you at the right places, but if you go on your own you have to do it yourself at times. 
Just grep for `deepspeed.zero.GatheredParameters` for examples.\r\n\r\nIf you don't need any additional help you can close the Issue at any time.\r\n\r\nIf you have further questions please don't hesitate to ask.\r\n\r\n ", "I think fixing this would be important as many users would use pretrained models to fine-tune on their task which will likely have different number of output classes than the pretrained model. Maybe option/choice/bool flag to not have `deepspeed.zero.init` or the logic in `from_pretrained` to load and partition layers on different GPUs would resolve this for small to medium models.", "Please give me a full setup that I can reproduce your issue with and I will try to come up with a solution.\r\n\r\nAnd also if you write your own trainer loop you definitely aren't forced to go through `deepspeed.zero.init` - it doesn't happen by default, you have to call it. See: https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models\r\n\r\nAlso `deepspeed.zero.Init(enabled=False)` will not pre-shard the model at load time. I wonder if we could ask the Deepspeed developers to add a new ds_config file variable that could control that via the config file - that way the user can easily turn it off at will. What do you think?\r\n", "Exact setup to reproduce the above behaviour:\r\n1. Official `run_glue.py` [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) with the following change.\r\n```diff\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n- use_auth_token=True if model_args.use_auth_token else None\r\n+ use_auth_token=True if model_args.use_auth_token else None,\r\n+ ignore_mismatched_sizes=True,\r\n )\r\n```\r\n3. 
Below ZERO Stage-3 Config `zero3_config.json`:\r\n```json\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 16,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"weight_decay\": \"auto\",\r\n \"torch_adam\": true,\r\n \"adam_w_mode\": true\r\n }\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupDecayLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\",\r\n \"total_num_steps\": \"auto\"\r\n }\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 2000,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\n```\r\n4. 
bash script to run the finetuning of `bert-base-uncased` on MRPC dataset using ZERO Stage-3.\r\n```bash\r\n#!/bin/bash\r\n\r\ntime torchrun --nproc_per_node=2 run_glue.py \\\r\n--task_name \"mrpc\" \\\r\n--max_seq_len 128 \\\r\n--model_name_or_path \"bert-base-uncased\" \\\r\n--output_dir \"./glue/mrpc_deepspeed_stage3_trainer\" \\\r\n--overwrite_output_dir \\\r\n--do_train \\\r\n--evaluation_strategy \"epoch\" \\\r\n--per_device_train_batch_size 16 \\\r\n--per_device_eval_batch_size 16 \\\r\n--gradient_accumulation_steps 1 \\\r\n--learning_rate 2e-5 \\\r\n--weight_decay 0.0 \\\r\n--max_grad_norm 1.0 \\\r\n--num_train_epochs 3 \\\r\n--lr_scheduler_type \"linear\" \\\r\n--warmup_steps 50 \\\r\n--logging_steps 100 \\\r\n--fp16 \\\r\n--fp16_full_eval \\\r\n--optim \"adamw_torch\" \\\r\n--report_to \"wandb\" \\\r\n--deepspeed \"zero3_config.json\"\r\n```", "The issue is because of the logic at [modeling_utils.py#L2182](https://github.com/huggingface/transformers/blob/a4386d7e405712fb9e9ad1066828ded3174f6a61/src/transformers/modeling_utils.py#L2182). Here, the zero-3 state dict with partitions are being checked against the pretrained model state_dict, which will result in all keys being mismatched and deleted from pretrained model state_dict. ", "Thank you, @pacman100 \r\n\r\nPlease try this PR https://github.com/huggingface/transformers/pull/17373", "Hello @stas00, yes the above PR solves this issue. Thank you 😄 . Below are the plots finetuning `microsoft/deberta-v2-xlarge-mnli` (pretrained model has 3 labels) on MRPC (this task has 2 labels) dataset. \r\n<img width=\"1157\" alt=\"Screenshot 2022-05-24 at 12 18 30 PM\" src=\"https://user-images.githubusercontent.com/13534540/169966505-98b916d0-579d-4b62-be63-7b61f664ebe4.png\">\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,664
1,664
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.12.0.dev20220505+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes (deepspeed zero stage-3) ``` ### Who can help? @stas00 @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behaviour: 1. Official `run_glue.py` [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) 2. Below ZERO Stage-3 Config `zero3_config.json`: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto", "torch_adam": true, "adam_w_mode": true } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", 
"wall_clock_breakdown": false } ``` 3. bash script to run the finetuning of `bert-base-uncased` on MRPC dataset using ZERO Stage-3. ```bash #!/bin/bash time torchrun --nproc_per_node=2 run_glue.py \ --task_name "mrpc" \ --max_seq_len 128 \ --model_name_or_path "bert-base-uncased" \ --output_dir "./glue/mrpc_deepspeed_stage3_trainer" \ --overwrite_output_dir \ --do_train \ --evaluation_strategy "epoch" \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --gradient_accumulation_steps 1 \ --learning_rate 2e-5 \ --weight_decay 0.0 \ --max_grad_norm 1.0 \ --num_train_epochs 3 \ --lr_scheduler_type "linear" \ --warmup_steps 50 \ --logging_steps 100 \ --fp16 \ --fp16_full_eval \ --optim "adamw_torch" \ --report_to "wandb" \ --deepspeed "zero3_config.json" ``` 4. Relevant output snippets. The first one shows the weird behaviour wherein the model isn't being properly initialized with the pretrained weights. The second shows the eval metrics showing the random performance. ![model init](https://user-images.githubusercontent.com/13534540/169131572-a1165baa-6713-4fce-a0be-db2e062b605a.png) ![bad performance](https://user-images.githubusercontent.com/13534540/169134622-6970e0ae-a0c5-44f6-bab3-129af3f5b5d2.png) ### Expected behavior Model being properly initialized with the pretrained weights when using DeepSpeed ZERO Stage-3. This should resolve the bad model performance being observed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17336/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17336/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17335/comments
https://api.github.com/repos/huggingface/transformers/issues/17335/events
https://github.com/huggingface/transformers/pull/17335
1,240,366,996
PR_kwDOCUB6oc44DAp-
17,335
Enable pytorch nightly CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@stas00, @LysandreJik \r\n\r\nI run the docker image `transformers-pytorch-deepspeed-latest-gpu` and found it's `PyTroch` has version `1.9.0`.\r\nThis image is based on `nvcr.io/nvidia/pytorch:21.03-py3`, which is used in the job `Test Torch CUDA Extensions` for both daily scheduled CI and push CI.\r\n\r\nWe can discuss about this. But so far I won't include DeepSpeed test with nightly-built PyTorch.\r\n\r\n```\r\n>>> import torch\r\n>>> torch.__version__\r\n'1.9.0a0+df837d0'\r\n>>> exit()\r\n```\r\n\r\n", "\r\n> I run the docker image transformers-pytorch-deepspeed-latest-gpu and found it's PyTroch has version 1.9.0.\r\nThis image is based on nvcr.io/nvidia/pytorch:21.03-py3, which is used in the job Test Torch CUDA Extensions for both daily scheduled CI and push CI.\r\n\r\nBut that's not nightly. I don't think you can rely on any pre-made docker to run nightly. It has to be manually installed since it is a new version every day. You probably want to switch to [22.04](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-04.html#rel_22-04) (latest at the moment) and then update it to the actual nightly. Does it make sense?\r\n\r\nI also don't understand why deepspeed tests were removed. It's critical that we run deepspeed tests on nightly. \r\n ", "@stas00 \r\n\r\nThe **non** `DeepSpeed` tests added in this PR use pytorch-nightly. You can verify in this run page https://github.com/huggingface/transformers/runs/6507879699?check_suite_focus=true and click `Echo versions`, and you will see `Pytorch Version: 1.12.0.dev20220519+cu102`. (It will change everyday).\r\n\r\n## DeepSpeed\r\n\r\nRegarding this, we have something to discuss:\r\n- For the `push` and `scheuled` CIs currently running, the **deepspeed** tests are run with PyTorch `1.9.0.`\r\n - Before going to the `nightly pytorch` with `deepspeed`, it might be better to decide what should we test for `push` and `scheduled` CI for `deepspeed` test. Should we use the latest stable Pytorch instead?\r\n\r\n- I don't mean to remove DeepSpeed tests with nightly PyTorch. The reason is that I am not able to make a docker image with `PyTorch Nightly + DeepSpeed`. I even tried with `PyTorch Stable + DeepSpeed` and the docker image also fails.\r\n\r\n### Details\r\nIn the Dockerfile `transformers-pytorch-deepspeed-latest-gpu`: \r\n\r\n```\r\nRUN python3 -m pip install --no-cache-dir -e ./transformers[deepspeed-testing]\r\n\r\n# This fail with if we install Pytorch Nightly or Stable above.\r\nRUN git clone https://github.com/microsoft/DeepSpeed && cd DeepSpeed && rm -rf build && \\\r\n DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install -e . --global-option=\"build_ext\" --global-option=\"-j8\" --no-cache -v --disable-pip-version-check 2>&1\r\n```\r\n", "> Should we use the latest stable Pytorch instead?\r\n\r\nYes, please.\r\n\r\nI think we should use the latest stable pytorch for all our tests unless we explicitly test older pytorch versions every few days or so. And to ensure we update to the new stable once it gets released.\r\n\r\n> I even tried with PyTorch Stable + DeepSpeed and the docker image also fails.\r\n\r\nCould you please point me to the actual issues and I will help to sort them out? \r\n\r\nWe can discuss it on slack if it's easier.", "I don't think we really care about nightly flax right now, so I would keep the following:\r\n- tests with nightly torch + latest TF\r\n- tests with latest torch + nightly TF\r\n\r\nThe rest sounds good to me!", "Looks good to me, @ydshieh! Thank you!", "A full workflow run is [here](https://github.com/huggingface/transformers/actions/runs/2490446459)." ]
1,652
1,655
1,655
COLLABORATOR
null
# What does this PR do? - Make necessary changes to build `huggingface/transformers-pytorch-nightly-gpu` - Update `self-nightly-scheduled.yml` to run PyTorch nightly build CI (almost a copy from scheduled CI) [test workflow run](https://github.com/huggingface/transformers/actions/runs/2354046716) here. [docker build run](https://github.com/huggingface/transformers/actions/runs/2353866659) #### Print versions <img width="389" alt="Screenshot 2022-05-19 214304" src="https://user-images.githubusercontent.com/2521628/169389521-328f6972-6ae4-40dc-9356-e4eb59319c6b.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17335/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17335", "html_url": "https://github.com/huggingface/transformers/pull/17335", "diff_url": "https://github.com/huggingface/transformers/pull/17335.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17335.patch", "merged_at": 1655476948000 }
https://api.github.com/repos/huggingface/transformers/issues/17334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17334/comments
https://api.github.com/repos/huggingface/transformers/issues/17334/events
https://github.com/huggingface/transformers/pull/17334
1,240,328,283
PR_kwDOCUB6oc44C49s
17,334
Fixing docstrings for cvt
{ "login": "AnugunjNaman", "id": 42839570, "node_id": "MDQ6VXNlcjQyODM5NTcw", "avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnugunjNaman", "html_url": "https://github.com/AnugunjNaman", "followers_url": "https://api.github.com/users/AnugunjNaman/followers", "following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}", "gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions", "organizations_url": "https://api.github.com/users/AnugunjNaman/orgs", "repos_url": "https://api.github.com/users/AnugunjNaman/repos", "events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}", "received_events_url": "https://api.github.com/users/AnugunjNaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,653
1,653
CONTRIBUTOR
null
# What does this PR do? This PR does the following: 1. Remove the error in `README.md` where `CvT` description was copy of `CTRL` 2. Fix `size` of image for `feature extractor` which was set to `224`. 3. fix the input docstring for forward classes of `CvtModel` and `CvtForImageClassification` (head mask etc not needed). 4. Adding the largest `CvT` model. TODO: **Transfer the largest sized (1GB) model to Microsoft from anugunj/cvt-w24-384-22k** @NielsRogge @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17334/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17334", "html_url": "https://github.com/huggingface/transformers/pull/17334", "diff_url": "https://github.com/huggingface/transformers/pull/17334.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17334.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17333/comments
https://api.github.com/repos/huggingface/transformers/issues/17333/events
https://github.com/huggingface/transformers/pull/17333
1,240,327,915
PR_kwDOCUB6oc44C44w
17,333
Enable PyTorch nightly build CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,654
1,652
COLLABORATOR
null
# What does this PR do? - Make necessary changes to build `huggingface/transformers-pytorch-nightly-gpu` - Update `self-nightly-scheduled.yml` to run PyTorch nightly build CI (almost a copy from scheduled CI) [A run](https://github.com/huggingface/transformers/actions/runs/2347272872) here. <img width="280" alt="Screenshot 2022-05-18 200718" src="https://user-images.githubusercontent.com/2521628/169114866-cba45123-e388-401d-ab1e-89d04b94b0e9.png"> **Some issue** The job `run_tests_torch_cuda_extensions_gpu` job couldn't use the image `huggingface/transformers-pytorch-nightly-gpu`. In scheduled CI, that job uses `huggingface/transformers-pytorch-deepspeed-latest-gpu`. But so far I haven't do an equivalence to that image for nightly PyTorch. Would like to hear @LysandreJik regarding this part. Maybe update `transformers-pytorch-deepspeed-latest-gpu/Dockerfile` to have an argument, or just a new Docker file (probably easier)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17333", "html_url": "https://github.com/huggingface/transformers/pull/17333", "diff_url": "https://github.com/huggingface/transformers/pull/17333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17333.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17332/comments
https://api.github.com/repos/huggingface/transformers/issues/17332/events
https://github.com/huggingface/transformers/pull/17332
1,240,313,324
PR_kwDOCUB6oc44C10G
17,332
Fix ci_url might be None
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? In `notification_service.py`, we have ``` ci_url = os.environ.get("CI_COMMIT_URL") commit_number = ci_url.split("/")[-1] ``` but `ci_url` might be `None`, for example, for scheduled CI. This PR moves the involved block inside ``` if ci_title is not None: assert ci_url is not None ``` (i.e. only for push CI)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17332/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17332", "html_url": "https://github.com/huggingface/transformers/pull/17332", "diff_url": "https://github.com/huggingface/transformers/pull/17332.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17332.patch", "merged_at": 1652903348000 }
https://api.github.com/repos/huggingface/transformers/issues/17331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17331/comments
https://api.github.com/repos/huggingface/transformers/issues/17331/events
https://github.com/huggingface/transformers/pull/17331
1,240,307,432
PR_kwDOCUB6oc44C0kS
17,331
Fix metric calculation in examples and setup tests to run on multi-gpu for no_trainer scripts
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? This PR fixes all failing tests in a multi-gpu setting for all `no_trainer` example scripts. This includes an issue with when the logger was called before `Accelerator()` was created, adjusting when the conditional to add to the `samples_seen`, and adjusting `samples_seen` to use the length when the labels are just a list instead of a torch tensor. Because the tests were rewritten to use `accelerate launch` and the new `write_basic_config`, these tests will automatically allocate themselves properly to test on multigpu, gpu, or cpu depending on what environment is available Fixes # (issue): Closes https://github.com/huggingface/transformers/issues/17214, https://github.com/huggingface/transformers/issues/17200 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17331/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17331", "html_url": "https://github.com/huggingface/transformers/pull/17331", "diff_url": "https://github.com/huggingface/transformers/pull/17331.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17331.patch", "merged_at": 1652897860000 }
https://api.github.com/repos/huggingface/transformers/issues/17330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17330/comments
https://api.github.com/repos/huggingface/transformers/issues/17330/events
https://github.com/huggingface/transformers/pull/17330
1,240,233,076
PR_kwDOCUB6oc44CmWo
17,330
Adding `batch_size` test to QA pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? Just adds a test on the `batch_size` argument of the pipeline (which shouldn't affect returned results but can sometimes break because automatic batching can fail on some specific models/architectures) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17330/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17330", "html_url": "https://github.com/huggingface/transformers/pull/17330", "diff_url": "https://github.com/huggingface/transformers/pull/17330.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17330.patch", "merged_at": 1652984892000 }
https://api.github.com/repos/huggingface/transformers/issues/17329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17329/comments
https://api.github.com/repos/huggingface/transformers/issues/17329/events
https://github.com/huggingface/transformers/pull/17329
1,240,186,298
PR_kwDOCUB6oc44CcXL
17,329
Not send successful report
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There is duplication. Forgot to remove one block. Don't merge for now 🙏 ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? Here we go. Bye, successful run report 😢 ~ [workflow run](https://github.com/huggingface/transformers/runs/6492878791?check_suite_focus=true) but no report on Slack.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17329/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17329", "html_url": "https://github.com/huggingface/transformers/pull/17329", "diff_url": "https://github.com/huggingface/transformers/pull/17329.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17329.patch", "merged_at": 1652893669000 }
https://api.github.com/repos/huggingface/transformers/issues/17328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17328/comments
https://api.github.com/repos/huggingface/transformers/issues/17328/events
https://github.com/huggingface/transformers/pull/17328
1,240,045,788
PR_kwDOCUB6oc44B-_X
17,328
Fix typo
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17328). All of your documentation changes will be reflected on that endpoint." ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? Fix typo in Readme @sgugger <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17328", "html_url": "https://github.com/huggingface/transformers/pull/17328", "diff_url": "https://github.com/huggingface/transformers/pull/17328.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17328.patch", "merged_at": 1652884193000 }
https://api.github.com/repos/huggingface/transformers/issues/17327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17327/comments
https://api.github.com/repos/huggingface/transformers/issues/17327/events
https://github.com/huggingface/transformers/issues/17327
1,240,043,832
I_kwDOCUB6oc5J6ZE4
17,327
Shape mismatch with documentation for cross attentions tensor when performing sequence generation with encoder-decoder model
{ "login": "fgbelidji", "id": 32633752, "node_id": "MDQ6VXNlcjMyNjMzNzUy", "avatar_url": "https://avatars.githubusercontent.com/u/32633752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fgbelidji", "html_url": "https://github.com/fgbelidji", "followers_url": "https://api.github.com/users/fgbelidji/followers", "following_url": "https://api.github.com/users/fgbelidji/following{/other_user}", "gists_url": "https://api.github.com/users/fgbelidji/gists{/gist_id}", "starred_url": "https://api.github.com/users/fgbelidji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fgbelidji/subscriptions", "organizations_url": "https://api.github.com/users/fgbelidji/orgs", "repos_url": "https://api.github.com/users/fgbelidji/repos", "events_url": "https://api.github.com/users/fgbelidji/events{/privacy}", "received_events_url": "https://api.github.com/users/fgbelidji/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Thanks a lot @fgbelidji ! @patil-suraj do you want to give it a try? :-)", "Yes, will look into it.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Have this issue been fixed?" ]
1,652
1,663
1,662
MEMBER
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to get the cross attentions weights of the decoder from Pegasus, but the tensor I get has a shape different from what the documentation states. I use the following code: ``` from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model_name = "google/pegasus-cnn_dailymail" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) prompt = "I use Hugging Face transformers everyday" inputs = tokenizer([prompt, prompt], add_special_tokens=True, return_tensors="pt") batch_size = len(inputs.input_ids) # batch_size is 2 num_beams = 5 num_return_sequences = 3 outputs = model.generate(**inputs, num_beams=num_beams, output_attentions=True, return_dict_in_generate=True, num_return_sequences=num_return_sequences, use_cache=True ) ``` Size of inputs and outputs: ``` >>> inputs.input_ids.shape torch.Size([2, 8]) # tensor of size (batch_size, input_sequence_length) >>> outputs.sequences.shape torch.Size([6, 47]) # tensor of size (batch_size*num_return_sequences, generated_length) ``` This is what the documentation says regarding `cross_attentions` tensor of a `BeamSearchEncoderDecoderOutput`: 
https://github.com/huggingface/transformers/blob/60ad73448c7fc0149082b539ce8c223e42783a35/src/transformers/generation_utils.py#L351-L356 When inspecting the `cross_attentions`: ``` >>>len(outputs.cross_attentions) 48 # it corresponds to the longest generated length of the 3 sequences -> OK with the doc ``` ``` >>>len(outputs.cross_attentions[0]) 16 # it corresponds to the number of layers of the decoder -> OK with the doc ``` This is where it differs from the doc: ``` >>>cross_attentions[0][0].shape #cross attention for the 1st generated token and 1st decoder layer torch.Size([10, 16, 1, 8]) # tensor of size (batch_size*beam_size, num_heads, 1, sequence_length) but doc says it should be # tensor of size (batch_size, num_heads, generated_length, sequence_length) ``` ``` >>>cross_attentions[1][0].shape #cross attention for the 2nd generated token and 1st decoder layer torch.Size([10, 16, 1, 8]) # tensor of size (batch_size*beam_size, num_heads, 1, sequence_length) but doc says it should be # tensor of size (batch_size, num_heads, generated_length, sequence_length) ``` Both the 1st and 3rd dimensions of the `cross_attentions` differ from the doc. However, when setting `use_cache=False` in `generate()` I obtain: ``` >>>cross_attentions[0][0].shape #cross attention for the 1st generated token and 1st decoder layer torch.Size([10, 16, 1, 8]) # tensor of size (batch_size*beam_size, num_heads, generated_length, sequence_length) ``` ``` >>>cross_attentions[1][0].shape #cross attention for the 2nd generated token and 1st decoder layer torch.Size([10, 16, 2, 8]) # tensor of size (batch_size*beam_size, num_heads, generated_length, sequence_length) ``` Only the 1st dimension now differs from the doc, so it looks like using the cache forces `generated_length` to 1. Is it possible to clarify this behavior? 
I assume the `cross_attentions` tensor and `decoder_attentions` should have the same specifications. Also, I think it would make sense to rename `sequence_length` to something like `encoder_input_sequence_length` for the `cross_attentions` spec and something like `decoder_input_sequence_length` for the `decoder_attentions` specs. At the moment, the doc implies that this is the same for both tensors. ### Expected behavior ```shell Documentation should reflect the actual shape of cross_attentions tensor ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17327/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17326/comments
https://api.github.com/repos/huggingface/transformers/issues/17326/events
https://github.com/huggingface/transformers/pull/17326
1,240,000,660
PR_kwDOCUB6oc44B1kP
17,326
Fix bug in Wav2Vec2 pretrain example
{ "login": "ddobokki", "id": 44228269, "node_id": "MDQ6VXNlcjQ0MjI4MjY5", "avatar_url": "https://avatars.githubusercontent.com/u/44228269?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddobokki", "html_url": "https://github.com/ddobokki", "followers_url": "https://api.github.com/users/ddobokki/followers", "following_url": "https://api.github.com/users/ddobokki/following{/other_user}", "gists_url": "https://api.github.com/users/ddobokki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddobokki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddobokki/subscriptions", "organizations_url": "https://api.github.com/users/ddobokki/orgs", "repos_url": "https://api.github.com/users/ddobokki/repos", "events_url": "https://api.github.com/users/ddobokki/events{/privacy}", "received_events_url": "https://api.github.com/users/ddobokki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks!" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> I fix a minor bug in [run_pretrain.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/wav2vec2/run_pretrain.py) DataCollatorForWav2Vec2Pretraining in this script call **_compute_mask_indices** method, but give to wrong parameter(device) so i edit from ```python batch["mask_time_indices"] = _compute_mask_indices( (batch_size, mask_indices_seq_length), self.model.config.mask_time_prob, self.model.config.mask_time_length, device=batch["input_values"].device, attention_mask=attention_mask, min_masks=2, ) ``` to ```python batch["mask_time_indices"] = _compute_mask_indices( (batch_size, mask_indices_seq_length), self.model.config.mask_time_prob, self.model.config.mask_time_length, attention_mask=attention_mask, min_masks=2, ) ``` Fixes #17323 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? 
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17326", "html_url": "https://github.com/huggingface/transformers/pull/17326", "diff_url": "https://github.com/huggingface/transformers/pull/17326.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17326.patch", "merged_at": 1652992964000 }
https://api.github.com/repos/huggingface/transformers/issues/17325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17325/comments
https://api.github.com/repos/huggingface/transformers/issues/17325/events
https://github.com/huggingface/transformers/pull/17325
1,239,940,250
PR_kwDOCUB6oc44Bor3
17,325
Remove notification_service_deprecated.py
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,662
1,652
COLLABORATOR
null
# What does this PR do? Remove `notification_service_deprecated.py`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17325", "html_url": "https://github.com/huggingface/transformers/pull/17325", "diff_url": "https://github.com/huggingface/transformers/pull/17325.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17325.patch", "merged_at": 1652882801000 }
https://api.github.com/repos/huggingface/transformers/issues/17324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17324/comments
https://api.github.com/repos/huggingface/transformers/issues/17324/events
https://github.com/huggingface/transformers/pull/17324
1,239,905,641
PR_kwDOCUB6oc44Bg7S
17,324
[BC] Fixing usage of text pairs
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging two persons, since this contains a backward breaking change (even if it was bugged usage) I would rather have more eyes than not here.\r\n\r\nThe linked issue contains more information that could be helpful too.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# This contains a BC please read full description # What does this PR do? The BC is actually preventing users from misusing the pipeline since users could have been willing to send text pairs and the pipeline would instead understand the thing as a batch returning bogus results. The correct usage of text pairs is preserved in this PR even when that makes the code clunky. Adds support for `{"text":..,, "text_pair": ...}` inputs for both dataset iteration and more explicit usage to pairs. Fixes #17305 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17324", "html_url": "https://github.com/huggingface/transformers/pull/17324", "diff_url": "https://github.com/huggingface/transformers/pull/17324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17324.patch", "merged_at": 1652948956000 }
https://api.github.com/repos/huggingface/transformers/issues/17323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17323/comments
https://api.github.com/repos/huggingface/transformers/issues/17323/events
https://github.com/huggingface/transformers/issues/17323
1,239,901,047
I_kwDOCUB6oc5J52N3
17,323
There is a minor bug in run_pretrain.py for Wav2Vec2 example
{ "login": "ddobokki", "id": 44228269, "node_id": "MDQ6VXNlcjQ0MjI4MjY5", "avatar_url": "https://avatars.githubusercontent.com/u/44228269?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddobokki", "html_url": "https://github.com/ddobokki", "followers_url": "https://api.github.com/users/ddobokki/followers", "following_url": "https://api.github.com/users/ddobokki/following{/other_user}", "gists_url": "https://api.github.com/users/ddobokki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddobokki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddobokki/subscriptions", "organizations_url": "https://api.github.com/users/ddobokki/orgs", "repos_url": "https://api.github.com/users/ddobokki/repos", "events_url": "https://api.github.com/users/ddobokki/events{/privacy}", "received_events_url": "https://api.github.com/users/ddobokki/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @ddobokki,\r\n\r\ngood observation! Would you like to open a PR to fix it? :-)", "Ok! I will 😁" ]
1,652
1,652
1,652
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.27 - Python version: 3.9.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patrickvonplaten I found a minor bug in [run_pretrain.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/wav2vec2/run_pretrain.py) ```python from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices . . . class DataCollatorForWav2Vec2Pretraining: ... # sample randomly masked indices batch["mask_time_indices"] = _compute_mask_indices( (batch_size, mask_indices_seq_length), self.model.config.mask_time_prob, self.model.config.mask_time_length, device=batch["input_values"].device, # this! attention_mask=attention_mask, min_masks=2, ) return batch ``` _compute_mask_indices take device parameter, but in [modeling_wav2vec2.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py) ```python def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.LongTensor] = None, min_masks: int = 0, ) -> np.ndarray: ``` device parameter does not exist, I think this hasn't been modified yet. And thank you for the huggingface that makes good libraries! ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python @dataclass class DataCollatorForWav2Vec2Pretraining: ... 
# sample randomly masked indices batch["mask_time_indices"] = _compute_mask_indices( (batch_size, mask_indices_seq_length), self.model.config.mask_time_prob, self.model.config.mask_time_length, device=batch["input_values"].device, attention_mask=attention_mask, min_masks=2, ) return batch ``` ### Expected behavior ```shell @dataclass class DataCollatorForWav2Vec2Pretraining: ... # sample randomly masked indices batch["mask_time_indices"] = _compute_mask_indices( (batch_size, mask_indices_seq_length), self.model.config.mask_time_prob, self.model.config.mask_time_length, attention_mask=attention_mask, min_masks=2, ) return batch ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17323/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17322/comments
https://api.github.com/repos/huggingface/transformers/issues/17322/events
https://github.com/huggingface/transformers/pull/17322
1,239,679,033
PR_kwDOCUB6oc44Av98
17,322
Keep only Roberta's position_embeddings initialisation in modeling_roberta.py
{ "login": "AlexandreNap", "id": 85485132, "node_id": "MDQ6VXNlcjg1NDg1MTMy", "avatar_url": "https://avatars.githubusercontent.com/u/85485132?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexandreNap", "html_url": "https://github.com/AlexandreNap", "followers_url": "https://api.github.com/users/AlexandreNap/followers", "following_url": "https://api.github.com/users/AlexandreNap/following{/other_user}", "gists_url": "https://api.github.com/users/AlexandreNap/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexandreNap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexandreNap/subscriptions", "organizations_url": "https://api.github.com/users/AlexandreNap/orgs", "repos_url": "https://api.github.com/users/AlexandreNap/repos", "events_url": "https://api.github.com/users/AlexandreNap/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexandreNap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,653
1,653
NONE
null
# What does this PR do? There were two position_embeddings initialisations: the first was left over from Bert's code, and the second is Roberta's tweaked version. This PR removes the first one. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17322/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17322", "html_url": "https://github.com/huggingface/transformers/pull/17322", "diff_url": "https://github.com/huggingface/transformers/pull/17322.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17322.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17321/comments
https://api.github.com/repos/huggingface/transformers/issues/17321/events
https://github.com/huggingface/transformers/pull/17321
1,239,671,405
PR_kwDOCUB6oc44AuXZ
17,321
Remove Bert's position_embeddings init in modeling_roberta.py
{ "login": "AlexandreNap", "id": 85485132, "node_id": "MDQ6VXNlcjg1NDg1MTMy", "avatar_url": "https://avatars.githubusercontent.com/u/85485132?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexandreNap", "html_url": "https://github.com/AlexandreNap", "followers_url": "https://api.github.com/users/AlexandreNap/followers", "following_url": "https://api.github.com/users/AlexandreNap/following{/other_user}", "gists_url": "https://api.github.com/users/AlexandreNap/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexandreNap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexandreNap/subscriptions", "organizations_url": "https://api.github.com/users/AlexandreNap/orgs", "repos_url": "https://api.github.com/users/AlexandreNap/repos", "events_url": "https://api.github.com/users/AlexandreNap/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexandreNap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,652
1,652
1,652
NONE
null
# What does this PR do? Remove a redundant position_embeddings initialisation in modeling_roberta. The first one was from Bert's code and the second is Roberta's tweaked version; this PR removes the first one. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17321/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17321", "html_url": "https://github.com/huggingface/transformers/pull/17321", "diff_url": "https://github.com/huggingface/transformers/pull/17321.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17321.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17320/comments
https://api.github.com/repos/huggingface/transformers/issues/17320/events
https://github.com/huggingface/transformers/pull/17320
1,239,627,790
PR_kwDOCUB6oc44AlIY
17,320
Fix a TF-T5 test
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? `test_t5_decoder_model_past_large_inputs` passes incorrect args to `create_and_check_t5_decoder_model_past_large_inputs`. It is unclear why it has worked so far, but I got errors when working on another PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17320/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17320", "html_url": "https://github.com/huggingface/transformers/pull/17320", "diff_url": "https://github.com/huggingface/transformers/pull/17320.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17320.patch", "merged_at": 1652889444000 }
https://api.github.com/repos/huggingface/transformers/issues/17319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17319/comments
https://api.github.com/repos/huggingface/transformers/issues/17319/events
https://github.com/huggingface/transformers/issues/17319
1,239,604,148
I_kwDOCUB6oc5J4tu0
17,319
automatically update config file for models
{ "login": "artemisep", "id": 4677340, "node_id": "MDQ6VXNlcjQ2NzczNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4677340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisep", "html_url": "https://github.com/artemisep", "followers_url": "https://api.github.com/users/artemisep/followers", "following_url": "https://api.github.com/users/artemisep/following{/other_user}", "gists_url": "https://api.github.com/users/artemisep/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisep/subscriptions", "organizations_url": "https://api.github.com/users/artemisep/orgs", "repos_url": "https://api.github.com/users/artemisep/repos", "events_url": "https://api.github.com/users/artemisep/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @lewtun @michaelbenayoun ", "Hi @Ruihua-Fang, \r\nDue to the variety of models we support, I think it would be hard to have a very generic class that handles all cases. \r\nThat being said we can still work on making the base classes the most generic possible so that adding model specific ONNX configs becomes easier.", "Hi Michael, thanks for the clarification and I am closing this for now." ]
1,652
1,654
1,654
CONTRIBUTOR
null
### Feature request automatically update config files for large number of models ### Motivation For the ONNXConfig project https://github.com/huggingface/transformers/issues/16308 where onnxConfig need to be added to 90 models. Currently it is done manually, mostly copy paste. Can this be done automatically? Does this kind of config updates happen often and could this be done in a more automatic way? ### Your contribution I'm interested in helping with the implementation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17319/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17318/comments
https://api.github.com/repos/huggingface/transformers/issues/17318/events
https://github.com/huggingface/transformers/pull/17318
1,239,560,598
PR_kwDOCUB6oc44AW6d
17,318
Accepting real pytorch device as arguments.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? Fix #17290 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17318", "html_url": "https://github.com/huggingface/transformers/pull/17318", "diff_url": "https://github.com/huggingface/transformers/pull/17318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17318.patch", "merged_at": 1652882784000 }
https://api.github.com/repos/huggingface/transformers/issues/17317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17317/comments
https://api.github.com/repos/huggingface/transformers/issues/17317/events
https://github.com/huggingface/transformers/issues/17317
1,239,560,010
I_kwDOCUB6oc5J4i9K
17,317
M1 MacOS: "OSError: Can't load config for 'bert-base-uncased'"
{ "login": "ugm2", "id": 25923343, "node_id": "MDQ6VXNlcjI1OTIzMzQz", "avatar_url": "https://avatars.githubusercontent.com/u/25923343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ugm2", "html_url": "https://github.com/ugm2", "followers_url": "https://api.github.com/users/ugm2/followers", "following_url": "https://api.github.com/users/ugm2/following{/other_user}", "gists_url": "https://api.github.com/users/ugm2/gists{/gist_id}", "starred_url": "https://api.github.com/users/ugm2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ugm2/subscriptions", "organizations_url": "https://api.github.com/users/ugm2/orgs", "repos_url": "https://api.github.com/users/ugm2/repos", "events_url": "https://api.github.com/users/ugm2/events{/privacy}", "received_events_url": "https://api.github.com/users/ugm2/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I got this error because of the way I installed python in MacOS. **It seems that if you don't execute the script with sudo powers then the script is not able to write to disk and thus it fails at downloading the model.**\r\n\r\nI couldn't find a better way than using sudo, since giving full access to disk to Python in the Security & Privacy settings didn't work" ]
1,652
1,652
1,652
NONE
null
### System Info ```shell WARNING:tensorflow:From /Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. Metal device set to: Apple M1 Pro systemMemory: 16.00 GB maxCacheSize: 5.33 GB 2022-05-18 09:30:52.149441: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2022-05-18 09:30:52.149855: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>) - `transformers` version: 4.19.2 - Platform: macOS-12.3-arm64-arm-64bit - Python version: 3.9.12 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction ## Setup steps * Following MacOS installation of tensorflow for M1 chips: https://developer.apple.com/metal/tensorflow-plugin/ * Create python=3.9 environment: `conda create -n test python=3.9` * Install `tensorflow-deps` using conda: `conda install -c apple tensorflow-deps` * Install `tensorflow-macos` using pip: `python -m pip install tensorflow-macos` * Output from installing `tensorflow-macos`: ``` Successfully installed absl-py-1.0.0 astunparse-1.6.3 cachetools-5.1.0 certifi-2021.10.8 charset-normalizer-2.0.12 flatbuffers-2.0 gast-0.5.3 google-auth-2.6.6 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 idna-3.3 importlib-metadata-4.11.3 keras-2.8.0 keras-preprocessing-1.1.2 libclang-14.0.1 markdown-3.3.7 oauthlib-3.2.0 opt-einsum-3.3.0 protobuf-3.20.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.27.1 requests-oauthlib-1.3.1 rsa-4.8 tensorboard-2.8.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-macos-2.8.0 termcolor-1.1.0 tf-estimator-nightly-2.8.0.dev2021122109 typing-extensions-4.2.0 urllib3-1.26.9 werkzeug-2.1.2 wrapt-1.14.1 zipp-3.8.0 ``` * Install `tensorflow-metal` using pip: `python -m pip install tensorflow-metal` * Output from installing tensorflow-metal: `Successfully installed six-1.15.0 tensorflow-metal-0.4.0` And then: `pip install transformers` Output: `Successfully installed filelock-3.7.0 huggingface-hub-0.6.0 packaging-21.3 pyparsing-3.0.9 pyyaml-6.0 regex-2022.4.24 tokenizers-0.12.1 tqdm-4.64.0 transformers-4.19.2` ## Script to reproduce I used the most basic example in Huggingface (mentioned [here](https://huggingface.co/bert-base-uncased)): ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='bert-base-uncased') ``` That throws the following error: ``` Traceback (most recent call last): File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/configuration_utils.py", line 601, in 
_get_config_dict resolved_config_file = cached_path( File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/utils/hub.py", line 282, in cached_path output_path = get_from_cache( File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/utils/hub.py", line 470, in get_from_cache os.makedirs(cache_dir, exist_ok=True) File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/os.py", line 215, in makedirs makedirs(head, exist_ok=exist_ok) File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/os.py", line 225, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/Users/unaigaraymaestre/.cache/huggingface' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 541, in pipeline config = AutoConfig.from_pretrained(model, revision=revision, _from_pipeline=task, **model_kwargs) File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 680, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/configuration_utils.py", line 553, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/configuration_utils.py", line 641, in _get_config_dict raise EnvironmentError( OSError: Can't load config for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. 
Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing a config.json file ``` **Note that I also tried installing Pytorch with CPU setup and the same happened, it's not related to Tensorflow for M1 chips.** ### Expected behavior ```shell Model (pipeline) should load properly by automatically downloading the `bert-base-uncased`. I've tried with many models, none work Thank you in advance! ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17317/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17316/comments
https://api.github.com/repos/huggingface/transformers/issues/17316/events
https://github.com/huggingface/transformers/pull/17316
1,239,555,046
PR_kwDOCUB6oc44AVvj
17,316
Updating the docs for `max_seq_len` in QA pipeline
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? Fixes #17241 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17316", "html_url": "https://github.com/huggingface/transformers/pull/17316", "diff_url": "https://github.com/huggingface/transformers/pull/17316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17316.patch", "merged_at": 1652881572000 }
https://api.github.com/repos/huggingface/transformers/issues/17315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17315/comments
https://api.github.com/repos/huggingface/transformers/issues/17315/events
https://github.com/huggingface/transformers/pull/17315
1,239,447,522
PR_kwDOCUB6oc43__QX
17,315
Add OnnxConfig for SqueezeBert iss17314
{ "login": "artemisep", "id": 4677340, "node_id": "MDQ6VXNlcjQ2NzczNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4677340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisep", "html_url": "https://github.com/artemisep", "followers_url": "https://api.github.com/users/artemisep/followers", "following_url": "https://api.github.com/users/artemisep/following{/other_user}", "gists_url": "https://api.github.com/users/artemisep/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisep/subscriptions", "organizations_url": "https://api.github.com/users/artemisep/orgs", "repos_url": "https://api.github.com/users/artemisep/repos", "events_url": "https://api.github.com/users/artemisep/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I run pytest on those 4 failed tests in CircleCI shown above on my local dev, no error message was found, all says skipped except the following one says passed\r\ntests/models/pegasus/test_modeling_pegasus.py::PegasusStandaloneDecoderModelTest::test_sample_generate\r\n\r\n\r\n", "Hi @lewtun,\r\nrunning the following test did not show any errors, it output the following; 6 skipped, 200 deselected, 7 warnings\r\nRUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k \"squeezebert\"\r\n\r\n" ]
1,652
1,654
1,654
CONTRIBUTOR
null
# What does this PR do? <!-- As part of #16308, this PR adds OnnxConfig for SqueezeBert. --> ## Who can review? @lewtun @LysandreJik Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17315", "html_url": "https://github.com/huggingface/transformers/pull/17315", "diff_url": "https://github.com/huggingface/transformers/pull/17315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17315.patch", "merged_at": 1654078575000 }
https://api.github.com/repos/huggingface/transformers/issues/17314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17314/comments
https://api.github.com/repos/huggingface/transformers/issues/17314/events
https://github.com/huggingface/transformers/issues/17314
1,239,333,868
I_kwDOCUB6oc5J3rvs
17,314
implement onnx config for SqueezeBert
{ "login": "artemisep", "id": 4677340, "node_id": "MDQ6VXNlcjQ2NzczNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4677340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisep", "html_url": "https://github.com/artemisep", "followers_url": "https://api.github.com/users/artemisep/followers", "following_url": "https://api.github.com/users/artemisep/following{/other_user}", "gists_url": "https://api.github.com/users/artemisep/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisep/subscriptions", "organizations_url": "https://api.github.com/users/artemisep/orgs", "repos_url": "https://api.github.com/users/artemisep/repos", "events_url": "https://api.github.com/users/artemisep/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lewtun , @LysandreJik here are a few things about the testing results\r\n1. pytest test_onnx_v2.py:\r\n 3 passes, 204 skipped, 8 warnings, no errors\r\n\r\n2. RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k \"squeezebert\" \r\n gave warning \"No GPU/TPU found, falling back to CPU\"\r\n then complained \"Could not load dynamic library 'libcuda.so.1' \"\r\n and final message: \"7 failed, 200 deselected, 7 warnings\"\r\n \r\nsystem info: ubuntu 20.04, pytorch: 1.11.0+cu102\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,655
1,655
CONTRIBUTOR
null
### Feature request see https://github.com/huggingface/transformers/issues/16308 ### Motivation see https://github.com/huggingface/transformers/issues/16308 ### Your contribution added onnx config for SqueezeBert
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17314/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17313/comments
https://api.github.com/repos/huggingface/transformers/issues/17313/events
https://github.com/huggingface/transformers/pull/17313
1,239,274,674
PR_kwDOCUB6oc43_bct
17,313
Adding GroupViT Models
{ "login": "xvjiarui", "id": 18479688, "node_id": "MDQ6VXNlcjE4NDc5Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/18479688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xvjiarui", "html_url": "https://github.com/xvjiarui", "followers_url": "https://api.github.com/users/xvjiarui/followers", "following_url": "https://api.github.com/users/xvjiarui/following{/other_user}", "gists_url": "https://api.github.com/users/xvjiarui/gists{/gist_id}", "starred_url": "https://api.github.com/users/xvjiarui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xvjiarui/subscriptions", "organizations_url": "https://api.github.com/users/xvjiarui/orgs", "repos_url": "https://api.github.com/users/xvjiarui/repos", "events_url": "https://api.github.com/users/xvjiarui/events{/privacy}", "received_events_url": "https://api.github.com/users/xvjiarui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI somehow failed due to connection issue. Could you please restart it?", "Hi @NielsRogge \r\nI have resolved all the comments. \r\n\r\nRegarding your concern, I think it's a little bit hard to combine them since `GroupViTVisionLayer` follows `ViT` and `GroupViTTextEncoderLayer` follows `CLIP`. ", "Btw, I still don't know why PR doc build failed. Any idea?", "@patil-suraj are you fine with the fact that the vision and text encoder use separate classes (`GroupViTVisionLayer` and `GroupViTTextEncoderLayer` respectively), even though they do the same thing? The only reason we keep `GroupViTVisionLayer` is because the vision encoder implementation is copied from ViT.\r\n\r\nSo either we keep the two separate classes with `# Copied from` statements, or we remove the copied from and just create a single `GroupViTEncoderLayer` class (as is done in CLIP).\r\n\r\nPinging @mishig25 regarding the build_documentation CI issue\r\n", "Sorry to only reply now.\r\n\r\n> are you fine with the fact that the vision and text encoder use separate classes (GroupViTVisionLayer and GroupViTTextEncoderLayer respectively), even though they do the same thing? The only reason we keep GroupViTVisionLayer is because the vision encoder implementation is copied from ViT.\r\n\r\nIf `GroupViTVisionLayer` and `GroupViTTextEncoderLayer` are exactly similar then IMO it's better to just have one `GroupViTEncoderLayer`, this will make the code much more readable. I am not in favor of adding extra modules just for `#copied from..` statememts.", "Yes same, so @xvjiarui you can remove the copied from statements from the vision encoder in favor of simpler code. ", "Awesome work, thanks a lot!! Merging :)" ]
1,652
1,656
1,656
CONTRIBUTOR
null
# What does this PR do? This PR implements paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094), the model is converted from [official implementation](https://github.com/NVlabs/GroupViT). - [x] Inference accuracy matched - [x] Complete docstring and model cards <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17313", "html_url": "https://github.com/huggingface/transformers/pull/17313", "diff_url": "https://github.com/huggingface/transformers/pull/17313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17313.patch", "merged_at": 1656442308000 }
https://api.github.com/repos/huggingface/transformers/issues/17312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17312/comments
https://api.github.com/repos/huggingface/transformers/issues/17312/events
https://github.com/huggingface/transformers/pull/17312
1,239,274,275
PR_kwDOCUB6oc43_bXL
17,312
[tests] fix copy-n-paste error
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
I noticed a wrong comment in a few tests - bad copy-n-paste - fixing it. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17312", "html_url": "https://github.com/huggingface/transformers/pull/17312", "diff_url": "https://github.com/huggingface/transformers/pull/17312.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17312.patch", "merged_at": 1652914847000 }
https://api.github.com/repos/huggingface/transformers/issues/17311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17311/comments
https://api.github.com/repos/huggingface/transformers/issues/17311/events
https://github.com/huggingface/transformers/pull/17311
1,239,212,454
PR_kwDOCUB6oc43_OiD
17,311
[Generation] Fix Transition probs
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'll talk about this feature in more detail in a tutorial", "Hello,\r\nRegarding the follow up to these 3 posts: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/23, https://discuss.huggingface.co/t/retrieving-probability-over-tokens-during-beam-search/21217, https://discuss.huggingface.co/t/get-top-k-tokens-for-each-time-step-instead-of-the-highest-probability-token/14032:\r\n\r\nCan you show us, step by step, how to retrieve the probability for each token generated for each prediction in a `transformers.generation_utils.BeamSearchEncoderDecoderOutput`?\r\n\r\nMuch appreciated.", "Hi @adamkhakhar -- it is in our plans to write a tutorial for that :) It probably won't happen in the next 1 or 2 months, but stay tuned 🙏 ", "Hi @patrickvonplaten and @gante; what is the perspective on the tutorial?", "No news yet :)", "Thanks for the quick response @gante. I will stay tuned. Can you tell me whether you can expect the first generated sequence to always have the highest probability?", "@MotzWanted yeah, if you use `num_return_sequences >1` with beam search, the output sequences are sorted by their score (from highest to lowest)", "But if I replicate this [showcase](https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/26) of @patrickvonplaten to compute the sequence probabilities of the output scores, I don't get that output sequences are sorted by their probability (from highest to lowest). Can you explain to me why this is so @gante?", "@MotzWanted my reply was given in the context of this thread, which is related to `beam_search`. 
Any call with `do_sample=True`, as in the example you linked, will lose the sorting property.\r\n\r\nA working example:\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers import AutoTokenizer\r\n\r\n\r\ngpt2 = AutoModelForCausalLM.from_pretrained(\"gpt2\", return_dict_in_generate=True)\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\n\r\ninput_ids = tokenizer(\"Today is a nice day\", return_tensors=\"pt\").input_ids\r\n\r\ngenerated_outputs = gpt2.generate(input_ids, num_return_sequences=3, num_beams=3, output_scores=True)\r\nprint(generated_outputs.sequences_scores)\r\n```" ]
1,652
1,664
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #15869 This PR fixes incorrectly computed transition probabilities for beam search. In beam search it's common to want to know the transition probability between tokens -> see: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/15?u=patrickvonplaten Currently there is a bug for every beam search that ends before `max_length` as discovered by #15869 . This PR fixes this behavior and adds a couple of tests. 🚨 **There is a tiny breaking change** in the output format of `beam_indices`. Instead of returning a `tuple(tuple(int))` a `torch.LongTensor` is returned. Since the feature was broken before (`beam_indices` were incorrect), this is hardly a breaking change and necessary to make the functionality more user-friendly and robust 🚨 @patil-suraj @gante , it would be super nice if you could do an in-depth review here also to understand this feature. 
**Note:** people do seem to be interested in output scores for generation as can be seen by the large number of views here: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175 Feel free to drop any question in the PR if you don't understand something or something is unclear - thanks :pray: ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17311/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17311", "html_url": "https://github.com/huggingface/transformers/pull/17311", "diff_url": "https://github.com/huggingface/transformers/pull/17311.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17311.patch", "merged_at": 1652991422000 }
https://api.github.com/repos/huggingface/transformers/issues/17310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17310/comments
https://api.github.com/repos/huggingface/transformers/issues/17310/events
https://github.com/huggingface/transformers/pull/17310
1,239,208,166
PR_kwDOCUB6oc43_Nls
17,310
[Fix-copies] Correct main fix-copies
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixes https://app.circleci.com/pipelines/github/huggingface/transformers/40462/workflows/c13c81f0-5db9-4556-8340-f64dc8871e26/jobs/458163\r\n\r\nNot sure why it didn't show up on the PR: https://github.com/huggingface/transformers/pull/16441", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Copies are not correct on main at the moment ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17310", "html_url": "https://github.com/huggingface/transformers/pull/17310", "diff_url": "https://github.com/huggingface/transformers/pull/17310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17310.patch", "merged_at": 1652826872000 }
https://api.github.com/repos/huggingface/transformers/issues/17309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17309/comments
https://api.github.com/repos/huggingface/transformers/issues/17309/events
https://github.com/huggingface/transformers/issues/17309
1,239,044,749
I_kwDOCUB6oc5J2lKN
17,309
UNETR: Transformers for 3D Medical Image Segmentation
{ "login": "pri1311", "id": 64613009, "node_id": "MDQ6VXNlcjY0NjEzMDA5", "avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pri1311", "html_url": "https://github.com/pri1311", "followers_url": "https://api.github.com/users/pri1311/followers", "following_url": "https://api.github.com/users/pri1311/following{/other_user}", "gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}", "starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pri1311/subscriptions", "organizations_url": "https://api.github.com/users/pri1311/orgs", "repos_url": "https://api.github.com/users/pri1311/repos", "events_url": "https://api.github.com/users/pri1311/events{/privacy}", "received_events_url": "https://api.github.com/users/pri1311/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hello. What is the status of the implementation? I would like to contribute to it.", "Hey @Puranjay-del-Mishra, to the best of my knowledge nobody has started working on it. We'd be very happy for you to take a stab at adding it!\r\n\r\nYou can follow the tutorial here: [adding a new model](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model).\r\n\r\nWe especially recommend following the [`add-new-model-like`](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command) command and guide.\r\n\r\nIf you have not contributed to transformers yet, we also recommend reading the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).", "Sure! @LysandreJik \r\n I'll go through it and give it a shot. Thanks.\r\n\r\n", "Hey @Puranjay-del-Mishra @LysandreJik I was supposed to submit a PR last week but I came down with health problems.\nI will be sending a PR by the weekend.", "Hey @pri1311 , go ahead with the PR. All the best.", "I'm gonna try this out. Appreciate it.", "Hi @NielsRogge,\r\nCan I have a shot at implementing this model?", "Yes, sure! Do you need some help?", "Thanks! I'll get back to you if I have queries", "Hello @NielsRogge. I have been following all the steps depicted in the guide https://huggingface.co/docs/transformers/add_new_model. I have already done all previous step to create a PR. At this moment I have a fork on my github of the whole transformer-HuggingFace project and I have created my \"draft\" copying VIT by using the command \"transformers-cli add-new-model-like\". After that, I created a draft pull request from my dev-fork-branch to my main-fork-branch and I tried to include you as a reviewer, but It was not possible. Am I missing some steps? 
Should the pull request be done directly from my dev-fork-branch to some branch in the real repository?\r\n\r\nAttaching snapshot of the problem:\r\n![error_adding_reviewers](https://user-images.githubusercontent.com/71934065/200169815-85d9c1ee-0aa3-4e71-a302-44f0d27cc020.png)\r\n", "Hi @NielsRogge and @LysandreJik, \r\n\r\nI have been working on this task for the last few weeks and my code is doing the forward pass properly. Now I am implementing the tokenizer but I have a doubt. In the original repository they have created many functions to transform input images. Can I include this function/library as a requirement for the HuggingFace tokenizer or must they be implemented from scratch?\r\n\r\nMany thanks", "Hi,\r\n\r\nUNETR is a vision model so it probably doesn't require a tokenizer? You probably want to create an image processor for this model, is that right?\r\n\r\nIn that case, image processors should be implemented to support minimal inference. They should perform the exact same transformations as the original implementation to prepare data for the model for inference. For computer vision models, this typically involves resizing to a particular size + normalization.\r\n\r\nAn example of an image processor can be found here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py", "Thank you for the answer @NielsRogge. \r\n\r\nWhen I was talking about the tokenizer I was meaning in fact the image_processor. When you check how the original repository implements the model, you realize they are using some transformations not implemented in the Hugging Face library. 
These transformations normalize, filter and resize the 3d image in particular ways, with a slightly complex hierarchy of functions that can not be implemented with the current functions you can find in the \"image_processing_utils.py\"\r\n![image](https://user-images.githubusercontent.com/71934065/207093460-25fbab32-d48f-4dcf-a609-ccdf9e7f651f.png)\r\n\r\n![image](https://user-images.githubusercontent.com/71934065/207092569-18b3d3b0-e201-42cf-8675-8cb553b04bce.png)\r\n\r\nAs far as I can see there are three options to implement this part in the Hugging Face code: \r\n\r\n- Use exactly the same functions they use in the original project (importing libraries of the monai project) https://github.com/Project-MONAI/MONAI\r\n- Copy/paste the code (of the monai project) in the image_processing_utils.py and adapt the style and names to make it more legible.\r\n- Implement from scratch the whole code. This could be time-consuming and pretty hard to obtain same results as in the original code.\r\n\r\nWhat is the recommended option?", "Thanks for the nice suggestions! I'll ping @amyeroberts for this, as she's currently working on refactoring our image processing pipelines.", "Thank you Niels.\r\n\r\nPlease let me know when you have some info. I'll be working on the refactor of the UNETR decoder since the forward pass is currently using a dependency of the monai project (original project) as well. ", "Discussed this offline with @amyeroberts, here's what she responded:\r\n\r\nI'd use the third party for now (with usual `xxx_is_available` checks) and wrap inside the image processor e.g.\r\nimport thirdparty\r\n\r\n```\r\nclass MyImageProcessor:\r\n def transform_1(self, image, *args, **kwargs):\r\n image = thirdparty.transform_1(image, *args, **kwargs)\r\n ...\r\n```\r\nso that we can remove easily if needs be.\r\nLooking at the MONAI library:\r\nTorch is required. This is fine for implementing the first model, but shouldn’t be necessary for our TF model users. 
If the model turns out to be popular it would be good to remove this dependency so we can port easily. Most of the transforms listed are compositions of standard logic we already have e.g. CropForeground would only require us implementing logic to calculate the bounding box.", "@caleb-vicente Thanks for all your work so far adding this model ❤️ \r\n\r\nAdding to Niels comment above: \r\n\r\nRegarding your suggestions, option 1 is the one I would go for: importing specific functionality from the MONAI project. I completely agree we don't want to reinvent the wheel! We already use third party packages for certain processing e.g. [pytesseract for the LayoutLM models](https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlmv2/image_processing_layoutlmv2.py). Like the LayoutLM models, [we can add MONAI as an optional dependency](https://github.com/huggingface/transformers/blob/722bf7efcce72e60412f75d6775af7b03041d8c8/src/transformers/models/layoutlmv2/image_processing_layoutlmv2.py#L42). \r\n\r\nRegarding transforms in the screenshot above, one thing to consider is the image processors don't perform augmentation, they are responsible for transforming the data so that it can be fed into the model i.e. the `UnetrImageProcessor` shouldn't have the random operations like `RandFlipd`. \r\n\r\nIn the snippet:\r\n\r\n```\r\nclass MyImageProcessor:\r\n def transform_1(self, image, *args, **kwargs):\r\n image = thirdparty.transform_1(image, *args, **kwargs)\r\n ...\r\n```\r\n\r\nthere's also the consideration about input types. All of the current functions take in and return numpy arrays and it should be possible to disable any of the transforms e.g. `do_resize=False`. As far as I can tell, MONAI will accept both torch and numpy, but always returns torch arrays. This is OK for a first implementation before removing the torch dependency as long as the ability to disable any of the transforms still applies. 
\r\n\r\nLet me know if there's any other questions you have regarding this :) ", "Hello @NielsRogge and @amyeroberts, \r\n\r\nThank you so much for the answers. Please find a few comments below: \r\n\r\n- I will implement the optional dependency with the monai library.\r\n- For the first implementation I will use functions as they are in the library. For next iterations I could simplify some of them using work already done in the Hugging Face library.\r\n- About data augmentation I will review it again to see if I can find any of those in MONAI inference phase. In this case the function RandFlipd is used only in training mode in the Notebook from which I took the snapshot (Sorry for the confusion).\r\n- I will add a layer on top of MONAI's dependencies so that everything works with numpy arrays if necessary. Additionally the possibility to decide will be included. \r\n\r\nI will keep you updated about the progress or any doubt :)" ]
1,652
1,679
null
NONE
null
### Model description I would like to add a new model: Proposed in the paper: [UNETR: Transformers for 3D Medical Image Segmentation](https://arxiv.org/abs/2103.10504) UNEt TRansformers (UNETR) utilize a transformer as the encoder to learn sequence representations of the input volume and effectively capture the global multi-scale information, while also following the successful "U-shaped" network design for the encoder and decoder. The transformer encoder is directly connected to a decoder via skip connections at different resolutions to compute the final semantic segmentation output. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Model Implementation: https://github.com/Project-MONAI/research-contributions/tree/master/UNETR Pretrained Model: https://drive.google.com/file/d/1kR5QuRAuooYcTNLMnMj80Z9IgSs8jtLO/view?usp=sharing (Based on BTCV dataset)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17309/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/17308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17308/comments
https://api.github.com/repos/huggingface/transformers/issues/17308/events
https://github.com/huggingface/transformers/pull/17308
1,239,027,536
PR_kwDOCUB6oc43-muq
17,308
Support compilation via Torchdynamo, AOT Autograd, NVFuser
{ "login": "anijain2305", "id": 13822661, "node_id": "MDQ6VXNlcjEzODIyNjYx", "avatar_url": "https://avatars.githubusercontent.com/u/13822661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anijain2305", "html_url": "https://github.com/anijain2305", "followers_url": "https://api.github.com/users/anijain2305/followers", "following_url": "https://api.github.com/users/anijain2305/following{/other_user}", "gists_url": "https://api.github.com/users/anijain2305/gists{/gist_id}", "starred_url": "https://api.github.com/users/anijain2305/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anijain2305/subscriptions", "organizations_url": "https://api.github.com/users/anijain2305/orgs", "repos_url": "https://api.github.com/users/anijain2305/repos", "events_url": "https://api.github.com/users/anijain2305/events{/privacy}", "received_events_url": "https://api.github.com/users/anijain2305/received_events", "type": "User", "site_admin": false }
[ { "id": 2690307185, "node_id": "MDU6TGFiZWwyNjkwMzA3MTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Performance", "name": "Performance", "color": "207F32", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@kevinstephano could you please take a quick look at this PR? Thanks!", "> LGTM, thanks for iterating! @stas00 are you okay with the changes?\r\n\r\nI primarily would like to hold off on merging this just yet to hear from @Chillee (PyTorch) and may be @csarofeen (NVIDIA) to think what other options we might want down the road and design the key/values better.\r\n\r\ne.g. questions\r\n- do we have to go through TorchDynamo, or can we go directly through aot Autograd see: https://github.com/huggingface/transformers/pull/15264\r\n- should we give users an option to choose other fusers besides nvfuser?\r\n- is it always \"driver -> fuser\" combo so perhaps the value should have 2 parts: `--compile torchdynamo:nvfuser`, `--compile aotautograd:fuserxxx` (and then the key needs to be renamed and the driver moved into the value - and that way we end up with just one entry point and lots of flexibility on the different combos. not sure on the best key name.\r\n\r\nSo ideally collect all the possible combos and then we could see how to best organize those.\r\n\r\nbut I added that the current API proposed in this PR is experimental, so we could go with it and change it at will later.", "> @kevinstephano could you please take a quick look at this PR? 
Thanks!\r\n\r\nLooks good to me.", "> * do we have to go through TorchDynamo, or can we go directly through aot Autograd see: [[Kernel Fusion] training benchmarks of AOTAutograd (multiple models) #15264](https://github.com/huggingface/transformers/pull/15264)\r\n> * should we give users an option to choose other fusers besides nvfuser?\r\n> * is it always \"driver -> fuser\" combo so perhaps the value should have 2 parts: `--compile torchdynamo:nvfuser`, `--compile aotautograd:fuserxxx` (and then the key needs to be renamed and the driver moved into the value - and that way we end up with just one entry point and lots of flexibility on the different combos. not sure on the best key name.\r\n\r\nFor the first question on TorchDynamo I'll leave @chillee to give an opinion here.\r\n\r\nSecond point: I personally think nvFuser is going to be your best bet. We're trying to move torch script to be nvFuser by default: https://github.com/pytorch/pytorch/pull/77579 so likely nvFuser is a good bet. Dynamo is looking at supporting multiple backends but I believe that will be more automated of a thing and shouldn't require you worrying about it.\r\n\r\nFor the last point I think again @chillee is the one to ask. I think AOTAutograd is moving or has moved to nvFuser by default? Don't know for sure here, don't know what Dynamo is planning/looking for as options.", "> do we have to go through TorchDynamo, or can we go directly through aot Autograd\r\n\r\nIMO, going through TorchDynamo is the right option here. As mentioned in the previous PR, using AOTAutograd is somewhat risky, since we can't guarantee correctness. So, I don't think it's the right option to provide as a \"default\" trainer.\r\n\r\nIf users want to apply AOTAutograd by themselves then I think they should feel free to do so, but I'm not convinced we should provide it as an option integrated into HF.\r\n\r\n> should we give users an option to choose other fusers besides nvfuser?\r\n\r\nYes, I think it's reasonable. 
For example, we also have a TensorRT integration with TorchDynamo that has some fairly good numbers. However, as @csarofeen says, NVFuser is definitely what I'd recommend as the \"default\" for this PR - if we have other backends it'll just be a different flag.\r\n\r\n> is it always \"driver -> fuser\" combo so perhaps the value should have 2 parts\r\n\r\nI think TorchDynamo + AOTAutograd are essentially \"constants\" here. It's plausible that in the future there will be other graph-capture paths (such as if we want to export models), but the UX for that will be significantly different (i.e. it won't be a seamless \"always work\" thing).\r\n\r\nSo I think it's fine to have fuser be the only thing that changes.", "Thank you for your commentary, @Chillee and @csarofeen \r\n\r\n> if we have other backends it'll just be a different flag.\r\n\r\nThat's the whole point of me starting this discussion - we don't want to have additional flags. We have too many already. That's why I was thinking that perhaps the flag should indicate some sort of non-implementation specific name like `--fusion` or `--compiler` or ??? and then the value(s) can define the specific path, so perhaps this PR's original cmd arg can be converted to:\r\n\r\n```\r\n--fusion torchdynamo:nvfuser\r\n--fusion torchdynamo:eager\r\n```\r\n\r\nwhich makes it easy to add other combos in the future w/o needing to change the cmd arg api. \r\n\r\nwhich fits the current code of this PR:\r\n\r\nhttps://github.com/huggingface/transformers/blob/28f80ec046ece4fe01e7936c6c2f861d532c7d90/src/transformers/trainer.py#L2197-L2200\r\n\r\nDoes any of this resonate at all? And if it does what would be the apt generic naming for the example I used `--fusion` (currently `--torchdynamo` key) - perhaps `--autofusion`, `--autooptimize`, else?\r\n", "@stas00 ah sorry, I mispoke - by \"another flag\", I meant \"another value for the config option\". 
I think something like this would be better.\r\n```\r\n--fusion nvfuser\r\n--fusion eager\r\n```\r\n\r\n(btw, I think \"debug\" might be a better name than \"eager\"? I think it's kinda confusing to have a fusion option called \"eager\" haha. Or perhaps we should just remove it as an option - it's only useful for debugging bugs).\r\n\r\nFrom our side, I think the main option is just going to be torchdynamo. So I think `--fusion nvfuser` and `--fusion eager` is probably sufficient.", "> (btw, I think \"debug\" might be a better name than \"eager\"? I think it's kinda confusing to have a fusion option called \"eager\" haha. Or perhaps we should just remove it as an option - it's only useful for debugging bugs).\r\n\r\nI think \"eager\" is good because that's what you pass to `torchdynamo` - it'd be easy to document that it doesn't do any fusing and just provides the default behavior.\r\n\r\n> From our side, I think the main option is just going to be torchdynamo. So I think `--fusion nvfuser` and `--fusion eager` is probably sufficient.\r\n\r\nso what you're proposing is that `torchdynamo` is going to be implied as the driver for `nvfuser` or `eager` and then in the future there might be other drivers besides `torchdynamo`?\r\n\r\nSo currently then we are discussing 2 options:\r\n```\r\n--fusion nvfuser\r\n--fusion eager\r\n```\r\n\r\nwhich imply:\r\n\r\n```\r\n--fusion torchdynamo:nvfuser\r\n--fusion torchdynamo:eager\r\n```\r\n\r\nperhaps I should not bother to future proof this flag? ", "> so what you're proposing is that torchdynamo is going to be implied as the driver for nvfuser or eager and then in the future there might be other drivers besides torchdynamo?\r\n\r\nI think it's unlikely that in the (foreseeable) future there will be other drivers besides torchdynamo with a similar UX. 
So imo, there's not a significant reason to try to future proof this flag now - i don't think it'd be that hard to change while preserving BC in the future either.", "ok, so then let's keep the original proposal `--torchdynamo <nfuser|eager>`, right?", "@stas00 This PR is ready for another round of review. Let me know what you think.", "1. The memory test consistently hangs for me:\r\n\r\n```\r\n$ pytest tests/trainer/test_trainer.py -k torchdynamo_memory -sv\r\n```\r\nnothing useful in the output.\r\n\r\nTraceback:\r\n```\r\n$ py-spy dump --pid 530235\r\nThread 530235 (idle): \"MainThread\"\r\n backward (torch/autograd/__init__.py:173)\r\n backward (torch/_tensor.py:399)\r\n _backward (functorch/_src/monkey_patching.py:97)\r\n training_step (transformers/trainer.py:2263)\r\n test_torchdynamo_memory (tests/trainer/test_trainer.py:1668)\r\n _callTestMethod (unittest/case.py:633)\r\n run (unittest/case.py:676)\r\n __call__ (unittest/case.py:736)\r\n runtest (_pytest/unittest.py:327)\r\n pytest_runtest_call (_pytest/runner.py:166)\r\n _multicall (pluggy/_callers.py:39)\r\n _hookexec (pluggy/_manager.py:80)\r\n __call__ (pluggy/_hooks.py:265)\r\n <lambda> (_pytest/runner.py:259)\r\n from_call (_pytest/runner.py:338)\r\n call_runtest_hook (_pytest/runner.py:258)\r\n call_and_report (_pytest/runner.py:219)\r\n runtestprotocol (_pytest/runner.py:130)\r\n pytest_runtest_protocol (_pytest/runner.py:111)\r\n _multicall (pluggy/_callers.py:39)\r\n _hookexec (pluggy/_manager.py:80)\r\n __call__ (pluggy/_hooks.py:265)\r\n pytest_runtestloop (_pytest/main.py:347)\r\n _multicall (pluggy/_callers.py:39)\r\n _hookexec (pluggy/_manager.py:80)\r\n __call__ (pluggy/_hooks.py:265)\r\n _main (_pytest/main.py:322)\r\n wrap_session (_pytest/main.py:268)\r\n pytest_cmdline_main (_pytest/main.py:315)\r\n _multicall (pluggy/_callers.py:39)\r\n _hookexec (pluggy/_manager.py:80)\r\n __call__ (pluggy/_hooks.py:265)\r\n main (_pytest/config/__init__.py:164)\r\n console_main 
(_pytest/config/__init__.py:187)\r\n <module> (pytest:8)\r\nThread 530372 (idle): \"Thread-4\"\r\n wait (threading.py:306)\r\n wait (threading.py:558)\r\n run (tqdm/_monitor.py:60)\r\n _bootstrap_inner (threading.py:932)\r\n _bootstrap (threading.py:890)\r\nThread 530390 (active)\r\n _call_impl (torch/nn/modules/module.py:1130)\r\n _fn (torchdynamo/eval_frame.py:74)\r\n backward (functorch/_src/aot_autograd.py:185)\r\n _fn (torchdynamo/eval_frame.py:74)\r\n apply (torch/autograd/function.py:253)\r\n```\r\n\r\nI tried rebuilding everything and it still hangs. env details below\r\n\r\nI can't even Ctrl-C `pytest` - have to `kill` it\r\n\r\n2. Once we figure out how to make the test work I need to see how fast it runs to potentially `@slow` decorate it - which we do for slow tests.\r\n\r\n3. We need to instrument the nightly CI to install all the requirements to run this test. I'm just waiting to confirm how to best approach it.\r\n\r\n-----------------\r\n\r\nbuild env:\r\n\r\nPyTorch version: 1.12.0.dev20220518\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.3\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 21.10 (x86_64)\r\nGCC version: (Ubuntu 10.3.0-11ubuntu1) 10.3.0\r\nClang version: 13.0.0-2\r\nCMake version: version 3.21.3\r\nLibc version: glibc-2.34\r\n\r\nPython version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.15.32-051532-generic-x86_64-with-glibc2.17\r\nIs CUDA available: True\r\nCUDA runtime version: 11.6.124\r\nGPU models and configuration: \r\nGPU 0: NVIDIA A100 80GB PCIe\r\nGPU 1: NVIDIA GeForce GTX 1070 Ti\r\n\r\nNvidia driver version: 510.47.03\r\ncuDNN version: Probably one of the 
following:\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5\r\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0\r\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] functorch==0.3.0a0+76976db\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.21.2\r\n[pip3] torch==1.12.0.dev20220518\r\n[pip3] torchaudio==0.12.0.dev20220518\r\n[pip3] torchdynamo==0.2.0\r\n[pip3] torchvision==0.13.0.dev20220518\r\n[conda] blas 1.0 mkl\r\n[conda] cudatoolkit 11.3.1 h2bc3f7f_2\r\n[conda] functorch 0.3.0a0+76976db dev_0 <develop>\r\n[conda] mkl 2021.4.0 h06a4308_640\r\n[conda] mkl-service 2.4.0 py38h7f8727e_0\r\n[conda] mkl_fft 1.3.1 py38hd3c417c_0\r\n[conda] mkl_random 1.2.2 py38h51133e4_0\r\n[conda] numpy 1.21.2 pypi_0 pypi\r\n[conda] pytorch 1.12.0.dev20220518 py3.8_cuda11.3_cudnn8.3.2_0 pytorch-nightly\r\n[conda] pytorch-mutex 1.0 cuda pytorch-nightly\r\n[conda] torch 1.12.0.dev20220404+cu115 pypi_0 pypi\r\n[conda] torchaudio 0.12.0.dev20220404+cu115 pypi_0 pypi\r\n[conda] torchdynamo 0.2.0 dev_0 <develop>\r\n[conda] torchvision 0.13.0.dev20220404+cu115 pypi_0 pypi\r\n", "@anijain2305, are you up to doing one more PR with docs? https://huggingface.co/docs/transformers/main/en/performance\r\n\r\n1. add HF Trainer usage example\r\n2. add examples of how a user can do it directly\r\n\r\nI guess with the new layout the docs would go here:\r\nhttps://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_one.mdx", "@stas00 Yes, I can do docs as well. 
Let me take a look and I will come back where to put the section.", "Also as I updated in the OP, @ydshieh is instrumenting the nightly CI to install the prerequisites for this test in this PR: https://github.com/huggingface/transformers/pull/17335 in commit: https://github.com/huggingface/transformers/pull/17335/commits/52e7021c6a1c8e2b2f749c6ce8daf078c6785c3e\r\n", "pinging about the docs, @anijain2305 - thank you!\r\n\r\nalmost nobody will use your work unless you document it in user-facing docs. so you're the ones who really want to add these docs, I'd think..." ]
1,652
1,655
1,653
CONTRIBUTOR
null
# What does this PR do? Adding support for TorchDynamo compilation with AOT Autograd and nvfuser backends. Detailed context available at - https://github.com/huggingface/transformers/pull/17204 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? -------------------- ## TODO: setup pt-nightly CI to run the tests in this PR, instructions: ``` # install torch-nightly conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly # install functorch (and reinstall after `git pull` later if need to sync up) git clone https://github.com/pytorch/functorch cd functorch rm -rf build pip install -e .[aot] cd .. git clone https://github.com/pytorch/torchdynamo cd torchdynamo pip install -r requirements.txt python setup.py develop ``` @ydshieh is adding this in this PR: https://github.com/huggingface/transformers/pull/17335 in commit: https://github.com/huggingface/transformers/pull/17335/commits/52e7021c6a1c8e2b2f749c6ce8daf078c6785c3e
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17308/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17308/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17308", "html_url": "https://github.com/huggingface/transformers/pull/17308", "diff_url": "https://github.com/huggingface/transformers/pull/17308.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17308.patch", "merged_at": 1653491769000 }
https://api.github.com/repos/huggingface/transformers/issues/17307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17307/comments
https://api.github.com/repos/huggingface/transformers/issues/17307/events
https://github.com/huggingface/transformers/issues/17307
1,239,024,117
I_kwDOCUB6oc5J2gH1
17,307
404 Errors on Loading Artifacts due to mispellings could suggest a model/tokenizer/dataset to the API user in the error message.
{ "login": "domenicrosati", "id": 15069938, "node_id": "MDQ6VXNlcjE1MDY5OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/15069938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/domenicrosati", "html_url": "https://github.com/domenicrosati", "followers_url": "https://api.github.com/users/domenicrosati/followers", "following_url": "https://api.github.com/users/domenicrosati/following{/other_user}", "gists_url": "https://api.github.com/users/domenicrosati/gists{/gist_id}", "starred_url": "https://api.github.com/users/domenicrosati/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/domenicrosati/subscriptions", "organizations_url": "https://api.github.com/users/domenicrosati/orgs", "repos_url": "https://api.github.com/users/domenicrosati/repos", "events_url": "https://api.github.com/users/domenicrosati/events{/privacy}", "received_events_url": "https://api.github.com/users/domenicrosati/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
CONTRIBUTOR
null
### Feature request When loading a model/tokenizer/config/Datasets with Auto* (or even other loader classes). It would be fun to autosuggest a model from HF hub if you make a typo in the 404 error message from the API. Feel free to close this if you think its silly and freviolous. An implementation could use edit distance of input string to models or tokenizer... or perform a search with that string using the model hub search API. ### Motivation Sometimes folks misspell or forget the name of a model/tokenizer/config, it might be nice to suggest the correction in the error message! ### Your contribution Sure thing, yes I could make a PR, if this is something other people would want.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17307/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17306/comments
https://api.github.com/repos/huggingface/transformers/issues/17306/events
https://github.com/huggingface/transformers/pull/17306
1,238,909,543
PR_kwDOCUB6oc43-NR_
17,306
Fix -1e4 as attn mask
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Generally, this looks good to me. I'd prefer though to not factor out a one-liner into a function (even if we have to add the one-liner 100+ times). It's not good for readability to have to jump to `modeling_utils.py` and the code saved is not worth it for a one-liner.\r\n\r\nAlso, I'd advocate to make three separate PRs (one for PT, one for TF, one for Flax). Think it should be both easier to maintain the PRs as well as review them. \r\n\r\nA first test should then be that all slow tests pass. After that it would indeed be nice if we could run some fine-tuning for the most important models (BERT on GLUE, GPT2 on causal LM, T5 on translation maybe). Maybe also not even necessary to verify that everything is correct with a training run if the slow tests all pass ", "Hi,\r\n\r\n@patrickvonplaten:\r\n - I removed the new function.\r\n - I have to modify `FlaxT5Attention` otherwise the PT/Flax T5 equivalence tests will fail.\r\n\r\n@sgugger:\r\n - since there is no more new function `mask_value()`, so no more `device` issue. There is one place I need to use tensor and device though:\r\n\r\n https://github.com/huggingface/transformers/blob/195ef42e0be974e8c019e9d5f03070f65365c721/src/transformers/models/gpt2/modeling_gpt2.py#L202-L205\r\n\r\nWould this be a problem for model parallelism for big model inference? It is `attn_weights.device` instead of `self.dtype` though.", "@ydshieh Using the weight device is perfectly fine, thanks for checking!", "Cool, exciting!", "Hi, @patrickvonplaten @patil-suraj @sgugger @LysandreJik \r\n\r\nThis PR is ready for review. 
\r\n\r\n- Only dealing with PyTorch models: but need to change `FlaxT5` too to make the test pass.\r\n- In general, change to `torch.finfo(correct-dtype).min` instead of `-10000`, `-1e9` etc.\r\n- In particular, changes in `modeling_utils.py`\r\n- Verified the change by training a T5 from scratch as well as finetuning the `t5-small` checkpoint", "@sgugger @patrickvonplaten: sorry to call you after the approval, there is something more to discuss after I saw #17437 by @younesbelkada.\r\n\r\n- First I need to remove the use of `float(\"-inf\")` as done in [this commit](https://github.com/huggingface/transformers/pull/17306/commits/9fa9d9e8ac4aec9df3c42b3c6e6b178271230247)\r\n - It's better not to use `inf`: it is hard to track if there is any arithmetic ops that will produce `NaN`.\r\n\r\n- Second, there are still some issues when using `torch.finfo(dtype).min`, especially when running in `fp16`, see the code snippet below.\r\n\r\n Basically: \r\n - `torch.finfo(torch.float16).min + (-16.0) = -inf`. (or anything < `-16.0`)\r\n - In some cases, we get all 0s as attention mask:\r\n - for example, `[pad_token, token_1]` will give an attention mask `[[0, 0], [0, 1]]` (due to causal mask + padding).\r\n - this mask becomes `[[-65504, -65504], ...]` after using `torch.finfo(dtype).min` (for `fp16`)\r\n - This might give `[[-inf, -inf], ...]` when combining an `atten_scores` with values < -16.0 \r\n - Then, we get `NaN` after softmax. \r\n\r\n~~- `torch`'s `softmax` can't work with fp16 input on CPU~~\r\n - ~~RuntimeError: \"softmax_lastdim_kernel_impl\" not implemented for 'Half'~~\r\n\r\n### Suggestions\r\n\r\n~~- Cast `fp16` to `fp32` just before `softmax`, so it can run on CPU~~\r\n- Perform some other processing before `softmax` to avoid `NaN` and non-sense output, especially avoid `[-inf, -inf]` or `[-inf, - large_value]` as input to `softmax`\r\n- Change attn probability to `[0, 0, .. 
0]` if the input is all large negative value `torch.finfo(dtype).min` \r\n\r\n### Code Snippet\r\n```\r\nimport torch\r\nfrom torch import nn\r\n\r\n# device = \"cpu\" --> not working with softmax on float16 (RuntimeError: \"softmax_lastdim_kernel_impl\" not implemented for 'Half')\r\ndevice = \"cuda\"\r\n\r\ndtype = torch.float16\r\n\r\nmask_value = torch.finfo(dtype).min\r\nattn_mask = torch.tensor([mask_value, mask_value], dtype=dtype)\r\n\r\nattn_scores_0 = torch.tensor([-0, -0], dtype=dtype)\r\nattn_scores_1 = torch.tensor([-4, -16], dtype=dtype)\r\nattn_scores_2 = torch.tensor([-18, -16], dtype=dtype)\r\n\r\nfinal_attn_scores_0 = attn_scores_0 + attn_mask # --> [-65504, -65504]\r\nfinal_attn_scores_1 = attn_scores_1 + attn_mask # --> [-65504, -inf]\r\nfinal_attn_scores_2 = attn_scores_2 + attn_mask # --> [-inf, -inf]\r\n\r\nattn_prob_0 = nn.functional.softmax(final_attn_scores_0.to(device), dim=-1) # --> [0.5, 0.5], but non-sense!!\r\nattn_prob_1 = nn.functional.softmax(final_attn_scores_1.to(device), dim=-1) # --> [1, 0], but non-sense!!\r\nattn_prob_2 = nn.functional.softmax(final_attn_scores_2.to(device), dim=-1) # --> [nan, nan], very bad!\r\n\r\nprint(final_attn_scores_0)\r\nprint(final_attn_scores_1)\r\nprint(final_attn_scores_2)\r\nprint(attn_prob_0)\r\nprint(attn_prob_1)\r\nprint(attn_prob_2)\r\n```", "Nothing should be done in FP16 on CPU, softmax is not the only operation that is not implemented in Half on CPU.", "> Nothing should be done in FP16 on CPU, softmax is not the only operation that is not implemented in Half on CPU.\r\n\r\nOK, thank you! 
Still want to hear your opinion on other points when you have more time", "I'm personally leaning toward implementing an util that preprocesses the attention mask before the softmax as you suggested @ydshieh, but curious to see others opinion.", "I think we can leave the discussion regarding `softmax` and `attention score processing` to another thread and PR.\r\n\r\nAfter removing `float(\"-inf\")` in this PR, it's less likely to get all `-inf` in a particular sequence.\r\n(It's sill likely to happen during training, so better to have a process done after this PR).\r\n\r\nSo I would prefer to merge the current version once @LysandreJik is happy with this PR.", "@michaelbenayoun \r\n\r\nI have tried to keep your change \r\n```\r\nmask = torch.full((tgt_len, tgt_len), torch.tensor(float(\"-inf\")))\r\n```\r\nwith my own, so it becomes\r\n```\r\nmask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))\r\n```\r\n\r\nIf you encounter any problem after this PR is merged, don't hesitate to ping me.", "@michaelbenayoun I need you help 🙏 \r\n\r\n\r\nhttps://github.com/huggingface/transformers/blob/575a8c0a8e2bf6491d7ef0f932fb6caf8a7712b1/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L135\r\n\r\nI need to change `torch.tensor(float(\"-inf\")))` to `torch.tensor(torch.finfo(dtype of an input).min))`\r\n\r\nbut I can't understand what the input it is here. Is it `g` or is it `self`.\r\n\r\nFrom the line below\r\n```\r\nmasked_fill(g, output, r_mask, g.op(\"Constant\", value_t=torch.tensor(0, dtype=torch.uint8)))\r\n```\r\nI guess it might be the second argument, so I should use `self`, but it looks strange to me 😢 ", "@ydshieh About the first change you mention, I don't think it will break anything, and I will try to apply the same changes when I add support for new model architectures.\r\n\r\nAbout the symbolic function, you are right I think. 
Basically `self` is `input` here, maybe we should change the name of the parameter to make things clearer?", "> @ydshieh About the first change you mention, I don't think it will break anything, and I will try to apply the same changes when I add support for new model architectures.\r\n> \r\n> About the symbolic function, you are right I think. Basically `self` is `input` here, maybe we should change the name of the parameter to make things clearer?\r\n\r\nGreat, thanks for the answer! We can change the name, but not very urgent.\r\n", "Ran GPU non-slow tests (single/multi GPU) - results are fine.", "Will merge today (as been approved by 3 core maintainers), if there is no further comment on\r\n\r\nhttps://github.com/huggingface/transformers/pull/17306#pullrequestreview-994918858", "As discussed on Slack, will wait the release of Bloom, and not merge now." ]
1,652
1,656
1,655
COLLABORATOR
null
# What does this PR do? Fix the issues regarding `-1e4` as attention mask. Fix #17215 #17121 #14859
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17306/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17306", "html_url": "https://github.com/huggingface/transformers/pull/17306", "diff_url": "https://github.com/huggingface/transformers/pull/17306.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17306.patch", "merged_at": 1655734577000 }
https://api.github.com/repos/huggingface/transformers/issues/17305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17305/comments
https://api.github.com/repos/huggingface/transformers/issues/17305/events
https://github.com/huggingface/transformers/issues/17305
1,238,721,757
I_kwDOCUB6oc5J1WTd
17,305
Pipeline inference with text pair is broken
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @fxmarty ,\r\n\r\nText pair was indeed never supported by the pipeline, so it's not tested against. We could definitely add support.\r\nWithout any code change you could do :\r\n```python\r\nres = pipe([[[txt, txt_pair]]] padding=True, truncation=True)\r\n```\r\n\r\nThen we could also have a fix within the pipeline (and properly crash when malformed input instead of here a silent error).\r\n\r\n\r\nThe true culprit is two difference parts of magics conflicting.\r\n\r\n```python\r\ntokenizer([txt, txt_pair])\r\n```\r\nIs guessing `txt` and `txt_pair` are actually two different texts, and outputs batched input_ids.\r\n```python\r\ntokenizer([[txt, txt_pair]])\r\n```\r\nIs guessing the first list is the batch, and the second list therefore MUST be a pair of texts.\r\n\r\nIt interacts with `pipeline` magic, which also supposes that lists are batches so \r\n```python\r\npipe([[txt, txt_pair]])\r\n```\r\nIs understood as a list of inference to run on, and gives only `[txt, txt_pair]` to the underlying tokenizer. which in terms treats them as a batch (which is not what we want in this case).\r\n\r\nSince `pipeline.preprocess` is only really supposed to preprocess one item at a time (any sort of list is handled by the parent class) we can definitely change the call to the tokenizer appropriately into `tokenizer(text=txt, text_pair=txt_pair)` so will yield the correct output.\r\n\r\nQuestion @lhoestq : Do `datasets` have a consistent way of expressing pairs of text so that we could maybe express the whole thing as\r\n\r\n```python\r\nfor output in pipe(load_dataset('glue', 'mnli'))):\r\n print(output)\r\n```\r\nakin to what we did with ASR and the `Audio` column ? If it's not consistent, then the proposed fix should be enough.\r\n\r\n@fxmarty is my explanation clear about what's happening ? 
I am going to propose a fix so that we support text pairs (which is not supported by every model/tokenizer out there but still pretty useful)\r\n\r\nCheers.", "> Question @lhoestq : Do datasets have a consistent way of expressing pairs of text so that we could maybe express the whole thing as\r\n\r\nNo it doesn't. It could be pairs of question/answer, of sentence1/sentence2, of language1/language2 or any column names.", "@Narsil Thanks a lot for your detailed explanation, makes sense!" ]
1,652
1,652
1,652
COLLABORATOR
null
### System Info ```shell - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.15.0-27-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) ### Reproduction Basically, the pipeline for text classification does not handle well input pairs that must be separated by [SEP] token. For example, for glue's mnli dataset, we have: ```python premise = 'The new rights are nice enough' hypothesis = 'Everyone really likes the newest benefits ' ``` Whether we pass * `pipeline([[premise, hypothesis]], padding=True, truncation=True)` * or `pipeline(" ".join([premise, hypothesis]), padding=True, truncation=True)` the pipeline output is wrong. ## Detailed reproduction If necessary, install transformers in the dev version (`pip uninstall transformers && git clone https://github.com/huggingface/transformers.git && cd transformers && pip install -e .`). Replace https://github.com/huggingface/transformers/blob/1f13ba818e0e3b780cf9155242e2c83a27fdfa9a/src/transformers/pipelines/text_classification.py#L132-L134 by ```python def preprocess(self, inputs, **tokenizer_kwargs) -> Dict[str, GenericTensor]: return_tensors = self.framework tokenized_inps = self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs) print("tokenized_inps", tokenized_inps) return tokenized_inps ``` to be able to see what are the tokenized inputs in the pipeline. 
Then run ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification from transformers import pipeline from datasets import load_dataset tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli") model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli") pipe = pipeline(task="text-classification", tokenizer=tokenizer, model=model) raw_datasets = load_dataset("glue", "mnli") txt1 = raw_datasets["validation_matched"][0]["premise"] txt2 = raw_datasets["validation_matched"][0]["hypothesis"] inputs = [txt1, txt2] txt = " ".join(inputs) res = pipe(txt, padding=True, truncation=True) print(res) """Output: tokenized_inps {'input_ids': tensor([[ 0, 133, 92, 659, 32, 2579, 615, 7632, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...} [{'label': 'NEUTRAL', 'score': 0.7983464002609253}] NOTE: theses input_ids correspond to: '<s>The new rights are nice enough Everyone really likes the newest benefits </s>' """ ``` We can see that separating the premise and hypothesis by a space is a very bad idea as there is no [SEP] token between the two. Now run: ```python from transformers import BatchEncoding data = raw_datasets["validation_matched"][0:1] tokenized_inps = tokenizer(data["premise"], data["hypothesis"], padding=True, truncation=True) tokenized_inps = BatchEncoding(tokenized_inps, tensor_type="pt") print(tokenized_inps) print(tokenizer.decode(tokenized_inps["input_ids"][0])) """Output: {'input_ids': tensor([[ 0, 133, 92, 659, 32, 2579, 615, 2, 2, 11243, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...} <s>The new rights are nice enough</s></s>Everyone really likes the newest benefits </s> """ ``` Here, the `tokenizer` takes a `text=premise` and `text_pair=hypothesis`, and we see as expected SEP tokens between the two. 
Other possibility with the pipeline: ```python txt1 = raw_datasets["validation_matched"][0]["premise"] txt2 = raw_datasets["validation_matched"][0]["hypothesis"] inputs = [txt1, txt2] res = pipe([inputs], padding=True, truncation=True) print(res) """Outputs: tokenized_inps {'input_ids': tensor([[ 0, 133, 92, 659, 32, 2579, 615, 2, 1], [ 0, 11243, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...} [{'label': 'NEUTRAL', 'score': 0.8978187441825867}] Note that now input_ids is 2D! The decoding gives: <s>The new rights are nice enough</s><pad> <s>Everyone really likes the newest benefits </s> """ ``` There is a [CLS] token inserted in the middle, most likely this is not desirable. In fact, when we run the pipeline on several examples from the dataset, all are classified as neutral and wrong. ## Hacky solution Use ```python txt1 = raw_datasets["validation_matched"][0]["premise"] txt2 = raw_datasets["validation_matched"][0]["hypothesis"] inputs = [txt1, txt2] tokenized_inps = pipe.preprocess([inputs]) res = pipe.forward(tokenized_inps) res = pipe.postprocess(res) print(res) """Output: tokenized_inps {'input_ids': tensor([[ 0, 133, 92,659, 32, 2579, 615, 2, 2, 11243, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...} {'label': 'NEUTRAL', 'score': 0.9636728167533875} We get the right input_ids, and the score is the same as with manually using tokenizer + model, yay! """ ``` which gives the same proba as with using the tokenizer and model separately. To me, the issue lies in two facts: * It is very wrong to join two sentences with a space (as suggested in the doc https://huggingface.co/tasks/text-classification ) since we lose the info that they are different sentences. 
* In case we pass the data as `pipeline([[premise, hypothesis]])`, it could be that there is some funny stuff happening in https://github.com/huggingface/transformers/blob/1f13ba818e0e3b780cf9155242e2c83a27fdfa9a/src/transformers/pipelines/pt_utils.py#L111 ### Expected behavior Pipeline for text-classification with a text pair should output the same result as manually using tokenizer + model + softmax.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17304/comments
https://api.github.com/repos/huggingface/transformers/issues/17304/events
https://github.com/huggingface/transformers/pull/17304
1,238,677,979
PR_kwDOCUB6oc439bG9
17,304
Fix dummy creation script
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? The `check_dummies` script was not adapted to the recent changes in the main init. As a result `make fix-copies` stopped creating dummy objects. By some miracle, none were missing, but this PR fixes the script (tested locally after adding new objects).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17304/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17304", "html_url": "https://github.com/huggingface/transformers/pull/17304", "diff_url": "https://github.com/huggingface/transformers/pull/17304.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17304.patch", "merged_at": 1652806584000 }
https://api.github.com/repos/huggingface/transformers/issues/17303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17303/comments
https://api.github.com/repos/huggingface/transformers/issues/17303/events
https://github.com/huggingface/transformers/pull/17303
1,238,674,759
PR_kwDOCUB6oc439aaT
17,303
[Test] Fix W2V-Conformer integration test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes `tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py::Wav2Vec2ConformerModelTest::test_save_load_fast_init_to_base` and doc test: `transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTraining.forward` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17303/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17303", "html_url": "https://github.com/huggingface/transformers/pull/17303", "diff_url": "https://github.com/huggingface/transformers/pull/17303.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17303.patch", "merged_at": 1652804437000 }
https://api.github.com/repos/huggingface/transformers/issues/17302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17302/comments
https://api.github.com/repos/huggingface/transformers/issues/17302/events
https://github.com/huggingface/transformers/pull/17302
1,238,672,719
PR_kwDOCUB6oc439Z-W
17,302
[Speech Model] Add Emformer
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@anton-l could you leave some comments in the code so that I know where I should take a look for the model design here?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17302). All of your documentation changes will be reflected on that endpoint.", "@anton-l do you want to fix the failling tests or should I review already?", "@patrickvonplaten I've left a comment about the last major failing test above, but the rest is ready for review :) \r\nAlso I'm not sure what happened to the sentencepiece tests here https://app.circleci.com/pipelines/github/huggingface/transformers/41150/workflows/91d3a0c9-f633-4895-9169-3179f759ced6/jobs/468492, could you take a look please?", "> @patrickvonplaten I've left a comment about the last major failing test above, but the rest is ready for review :) Also I'm not sure what happened to the sentencepiece tests here https://app.circleci.com/pipelines/github/huggingface/transformers/41150/workflows/91d3a0c9-f633-4895-9169-3179f759ced6/jobs/468492, could you take a look please?\r\n\r\nSee: https://huggingface.slack.com/archives/C01NE71C4F7/p1653573450540219?thread_ts=1653570610.579129&cid=C01NE71C4F7 \r\n\r\nRebase to main should solve the issue", "@patrickvonplaten looks like the remaining failed tests are unrelated ", "Taking this PR over! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Actually closing this for now as I'm not sure anymore whether this should go into `main`. We should maybe eventually think about a new speech library on top of Transformers", "If anyone from the community is interested in taking over this PR, please feel free to do so!" ]
1,652
1,664
1,664
MEMBER
null
# What does this PR do? This PR adds the Emformer model: an auto-regressive ASR model with an option for RNN-Transducer decoding. This model shows promising result for real-time speech recognition and there are 3 pretrained checkpoints available via the torchaudio library: https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html Original `torchaudio` implementation: https://github.com/pytorch/audio/blob/main/torchaudio/models/emformer.py Paper: https://arxiv.org/abs/2010.10759 RNN-Transducer details for reference: https://lorenlugosch.github.io/posts/2020/11/transducer/
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17302", "html_url": "https://github.com/huggingface/transformers/pull/17302", "diff_url": "https://github.com/huggingface/transformers/pull/17302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17302.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17301/comments
https://api.github.com/repos/huggingface/transformers/issues/17301/events
https://github.com/huggingface/transformers/pull/17301
1,238,646,648
PR_kwDOCUB6oc439US3
17,301
[Tests] Fix opt integration test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let's see whether this fixes the CI" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes failing circle ci: `tests/models/opt/test_modeling_opt.py::OPTModelIntegrationTests::test_inference_no_head` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17301", "html_url": "https://github.com/huggingface/transformers/pull/17301", "diff_url": "https://github.com/huggingface/transformers/pull/17301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17301.patch", "merged_at": 1652795303000 }
https://api.github.com/repos/huggingface/transformers/issues/17300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17300/comments
https://api.github.com/repos/huggingface/transformers/issues/17300/events
https://github.com/huggingface/transformers/pull/17300
1,238,573,415
PR_kwDOCUB6oc439EY3
17,300
Fix tests of mixed precision now that experimental is deprecated
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks, @Rocketknight1 \r\n\r\nI saw we have a few\r\n\r\n```\r\ntf.keras.mixed_precision.experimental.Policy\r\n```\r\n\r\nin `src/transformers/training_args_tf.py`. Maybe we also need to update these places? (I am OK if you prefer to do it in another PR, if the change is indeed necessary)", "@ydshieh I'll fix them here too!", "_The documentation is not available anymore as the PR was closed or merged._", "![image](https://user-images.githubusercontent.com/12240844/168825240-bbf3d9bf-f113-4621-9ccd-cdeb183c9695.png)\r\n\r\nDoesn't this mean TF<=2.8 will fail in these lines?", "(@Rocketknight1 )", "For push/scheduled CI, I think it is fine, as the docker image is built with\r\n\r\n```\r\nRUN python3 -m pip install --no-cache-dir -U torch tensorflow\r\n```\r\n(right ..?)\r\n\r\nI am going to print TF version in CI jobs.\r\n\r\nUpdate: The latest docker image is built with 2.9\r\n\r\n```\r\n2022-05-17T01:34:05.0014817Z #10 15.23 Collecting tensorflow>=2.3\r\n2022-05-17T01:34:05.0016257Z #10 15.24 Downloading tensorflow-2.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (511.7 MB)\r\n```", "Yes, CI should be fine :D but users with older versions (which we support) will experience errors, right? Or is it backwards compatible?", "You are right! At least for `src/transformers/training_args_tf.py`.\r\n\r\nI am not very sure what's our policy regarding the test backward compatible (regarding the change in `tests/utils/test_modeling_tf_core.py`)" ]
1,652
1,652
1,652
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17300", "html_url": "https://github.com/huggingface/transformers/pull/17300", "diff_url": "https://github.com/huggingface/transformers/pull/17300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17300.patch", "merged_at": 1652793257000 }
https://api.github.com/repos/huggingface/transformers/issues/17299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17299/comments
https://api.github.com/repos/huggingface/transformers/issues/17299/events
https://github.com/huggingface/transformers/pull/17299
1,238,547,180
PR_kwDOCUB6oc438-rD
17,299
Add CvT
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? Co-authored-by: AnugunjNaman <anugunjjha@gmail.com> This PR adds CvT (Convolutional Vision Transformer) by Microsoft Research. I just cleaned up the branch of @AnugunjNaman. To do: - [x] make @AnugunjNaman co-author of this PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17299/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17299", "html_url": "https://github.com/huggingface/transformers/pull/17299", "diff_url": "https://github.com/huggingface/transformers/pull/17299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17299.patch", "merged_at": 1652888838000 }
https://api.github.com/repos/huggingface/transformers/issues/17298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17298/comments
https://api.github.com/repos/huggingface/transformers/issues/17298/events
https://github.com/huggingface/transformers/pull/17298
1,238,208,847
PR_kwDOCUB6oc43712C
17,298
Add PR author in CI report + merged by info
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? As title. The result looks like <img width="502" alt="Screenshot 2022-05-17 093737" src="https://user-images.githubusercontent.com/2521628/168756091-6969528d-a7c7-492e-89f4-ae3649fb10bd.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17298/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17298", "html_url": "https://github.com/huggingface/transformers/pull/17298", "diff_url": "https://github.com/huggingface/transformers/pull/17298.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17298.patch", "merged_at": 1652806618000 }
https://api.github.com/repos/huggingface/transformers/issues/17297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17297/comments
https://api.github.com/repos/huggingface/transformers/issues/17297/events
https://github.com/huggingface/transformers/issues/17297
1,238,182,109
I_kwDOCUB6oc5JzSjd
17,297
Error while finetuning XLM-RoBERTa on Tensorflow-Keras
{ "login": "tdr1991", "id": 28622364, "node_id": "MDQ6VXNlcjI4NjIyMzY0", "avatar_url": "https://avatars.githubusercontent.com/u/28622364?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tdr1991", "html_url": "https://github.com/tdr1991", "followers_url": "https://api.github.com/users/tdr1991/followers", "following_url": "https://api.github.com/users/tdr1991/following{/other_user}", "gists_url": "https://api.github.com/users/tdr1991/gists{/gist_id}", "starred_url": "https://api.github.com/users/tdr1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tdr1991/subscriptions", "organizations_url": "https://api.github.com/users/tdr1991/orgs", "repos_url": "https://api.github.com/users/tdr1991/repos", "events_url": "https://api.github.com/users/tdr1991/events{/privacy}", "received_events_url": "https://api.github.com/users/tdr1991/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @tdr1991 👋 The error seems to come from `Precision` and `Recall`, which are designed for binary classification -- see this StackOverflow thread: https://stackoverflow.com/questions/59305514/tensorflow-how-to-use-tf-keras-metrics-in-multiclass-classification\r\n\r\n(Closing the issue, since it is not related to `transformers`. Feel free to reopen if you find a `transformers`-related issue :) )" ]
1,652
1,652
1,652
NONE
null
## env: tensorflow 2.8.0 keras 2.8.0 transformers 4.18.0 python 3.8 ## code ### data loader ```python def gen_dataset_iter(model_config): data_files = {"train": model_config.train_path, "validation": model_config.dev_path, "test": model_config.test_path} src_data = load_dataset("csv", data_files=data_files) def preprocess_function(examples): return model_config.tokenizer(examples["content"], truncation=True) tokenized_data = src_data.map(preprocess_function, batched=True) data_collator = DataCollatorWithPadding(tokenizer=model_config.tokenizer, return_tensors="tf") train_iter = tokenized_data["train"].to_tf_dataset( columns=["attention_mask", "input_ids"], label_cols=["label"], shuffle=True, batch_size=model_config.batch_size, collate_fn=data_collator, ) val_iter = tokenized_data["validation"].to_tf_dataset( columns=["attention_mask", "input_ids"], label_cols=["label"], shuffle=False, batch_size=model_config.batch_size, collate_fn=data_collator, ) test_iter = tokenized_data["test"].to_tf_dataset( columns=["attention_mask", "input_ids"], label_cols=["label"], shuffle=False, batch_size=model_config.batch_size, collate_fn=data_collator, ) return train_iter, val_iter, test_iter ``` ### model ```python model = TFAutoModelForSequenceClassification.from_pretrained(self.model_config.model_path, num_labels=self.model_config.num_classes) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metrics = [tf.keras.metrics.SparseCategoricalAccuracy(), tf.keras.metrics.Precision(), tf.keras.metrics.Recall()] self.model.compile(optimizer="adam", loss=loss, metrics=metrics) stop_callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3) best_model_callback = tf.keras.callbacks.ModelCheckpoint(self.model_config.save_path, monitor="val_loss", verbose=1, save_best_only=True, save_weights_only=True) call_funs = [stop_callback, best_model_callback] self.model.fit(self.train_iter, epochs=self.model_config.num_epochs, verbose=2, 
validation_data=self.val_iter, callbacks=call_funs) ``` ## error ```shell File "/home/hellotalk/work/content_detect/models/model.py", line 70, in train self.model.fit(self.train_iter, epochs=self.model_config.num_epochs, verbose=2, validation_data=self.val_iter, callbacks=call_funs) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1147, in autograph_handler raise e.ag_error_metadata.to_exception(e) ValueError: in user code: File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/training.py", line 1021, in train_function * return step_function(self, iterator) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/training.py", line 1010, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/training.py", line 1000, in run_step ** outputs = model.train_step(data) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1008, in train_step self.compiled_metrics.update_state(y, y_pred, sample_weight) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/compile_utils.py", line 459, in update_state metric_obj.update_state(y_t, y_p, sample_weight=mask) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/utils/metrics_utils.py", line 70, in decorated update_op = update_state_fn(*args, **kwargs) File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/metrics.py", line 178, in update_state_fn return ag_update_state(*args, **kwargs) File 
"/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/metrics.py", line 1403, in update_state ** return metrics_utils.update_confusion_matrix_variables( File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/utils/metrics_utils.py", line 619, in update_confusion_matrix_variables y_pred.shape.assert_is_compatible_with(y_true.shape) ValueError: Shapes (256, 20) and (256, 1) are incompatible ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17297/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17296/comments
https://api.github.com/repos/huggingface/transformers/issues/17296/events
https://github.com/huggingface/transformers/issues/17296
1,238,168,352
I_kwDOCUB6oc5JzPMg
17,296
about the opt model KeyError: 'opt'
{ "login": "DericZhao", "id": 54764601, "node_id": "MDQ6VXNlcjU0NzY0NjAx", "avatar_url": "https://avatars.githubusercontent.com/u/54764601?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DericZhao", "html_url": "https://github.com/DericZhao", "followers_url": "https://api.github.com/users/DericZhao/followers", "following_url": "https://api.github.com/users/DericZhao/following{/other_user}", "gists_url": "https://api.github.com/users/DericZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/DericZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DericZhao/subscriptions", "organizations_url": "https://api.github.com/users/DericZhao/orgs", "repos_url": "https://api.github.com/users/DericZhao/repos", "events_url": "https://api.github.com/users/DericZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/DericZhao/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I met the same problem. Have you fixed this?", "> \r\n\r\nYou can download the latest version of transformers, have a good day.", "@DericZhao I installed the latest version, I am still getting the error. Any help?", "I met the same problem. Any solutions?", "me too!my transformers version is 4.4.1,I still have the problem.", "`pip install --upgrade transformers`", "Check the versions of python(>=3.8 required) and pytorch , they may not match transformers" ]
1,652
1,678
1,652
NONE
null
### System Info ```shell I just copy the code from the model card on the https://huggingface.co/facebook/opt-30b, then there is an error, can you help me to find what's the problem of it? Traceback (most recent call last): File "D:\SentenceRewrite\opt.py", line 4, in <module> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda() File "D:\Anaconda\envs\torch\lib\site-packages\transformers\models\auto\auto_factory.py", line 382, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "D:\Anaconda\envs\torch\lib\site-packages\transformers\models\auto\configuration_auto.py", line 517, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "D:\Anaconda\envs\torch\lib\site-packages\transformers\models\auto\configuration_auto.py", line 266, in __getitem__ raise KeyError(key) KeyError: 'opt' ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda() # the fast tokenizer currently does not work correctly tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False) prompt = "Hello, I'm am conscious and" input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() generated_ids = model.generate(input_ids) tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ### Expected behavior ```shell KeyError: 'opt' how to solve it? ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17296/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17295/comments
https://api.github.com/repos/huggingface/transformers/issues/17295/events
https://github.com/huggingface/transformers/issues/17295
1,238,130,438
I_kwDOCUB6oc5JzF8G
17,295
ValueError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 1024. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape for ReformerForSequenceClassification
{ "login": "ShubhamKumarNigam", "id": 19687704, "node_id": "MDQ6VXNlcjE5Njg3NzA0", "avatar_url": "https://avatars.githubusercontent.com/u/19687704?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShubhamKumarNigam", "html_url": "https://github.com/ShubhamKumarNigam", "followers_url": "https://api.github.com/users/ShubhamKumarNigam/followers", "following_url": "https://api.github.com/users/ShubhamKumarNigam/following{/other_user}", "gists_url": "https://api.github.com/users/ShubhamKumarNigam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShubhamKumarNigam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShubhamKumarNigam/subscriptions", "organizations_url": "https://api.github.com/users/ShubhamKumarNigam/orgs", "repos_url": "https://api.github.com/users/ShubhamKumarNigam/repos", "events_url": "https://api.github.com/users/ShubhamKumarNigam/events{/privacy}", "received_events_url": "https://api.github.com/users/ShubhamKumarNigam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Still. getting the same problem.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello, thanks for opening an issue @ShubhamKumarNigam ! We try to keep the github issues for bugs/feature requests.\r\nFor user code, we recommend using the forum instead, where you're more likely to have a community member help you out on your issue.\r\n\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,660
1,660
NONE
null
` from transformers import ReformerTokenizer, ReformerForSequenceClassification, ReformerConfig model_name = "google/reformer-crime-and-punishment" tokenizer = ReformerTokenizer.from_pretrained(model_name) def input_id_maker(dataf, tokenizer): input_ids = [] lengths = [] for i in progressbar.progressbar(range(len(dataf['text']))): sen = dataf['text'].iloc[i] sen = tokenizer.tokenize(sen)#, add_prefix_space=True) if(len(sen) > 1024): sen = sen[len(sen)-1024:] encoded_sent = tokenizer.convert_tokens_to_ids(sen) input_ids.append(encoded_sent) lengths.append(len(encoded_sent)) input_ids = pad_sequences(input_ids, maxlen=1024, value=0, dtype="long", truncating="pre", padding="post") return input_ids, lengths train_input_ids, train_lengths = input_id_maker(train_set, tokenizer) validation_input_ids, validation_lengths = input_id_maker(validation_set, tokenizer) def att_masking(input_ids): attention_masks = [] for sent in input_ids: att_mask = [int(token_id > 0) for token_id in sent] attention_masks.append(att_mask) return attention_masks train_attention_masks = att_masking(train_input_ids) validation_attention_masks = att_masking(validation_input_ids) train_labels = train_set['label'].to_numpy().astype('int') validation_labels = validation_set['label'].to_numpy().astype('int') train_inputs = train_input_ids validation_inputs = validation_input_ids train_masks = train_attention_masks validation_masks = validation_attention_masks train_inputs = torch.tensor(train_inputs) train_labels = torch.tensor(train_labels) train_masks = torch.tensor(train_masks) validation_inputs = torch.tensor(validation_inputs) validation_labels = torch.tensor(validation_labels) validation_masks = torch.tensor(validation_masks) batch_size = 6 train_data = TensorDataset(train_inputs, train_masks, train_labels) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size = batch_size) validation_data = TensorDataset(validation_inputs, 
validation_masks, validation_labels) validation_sampler = RandomSampler(validation_data) validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size = batch_size) device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu") model = model_class.from_pretrained(model_name, num_labels=2) model.to(device) lr = 2e-6 max_grad_norm = 1.0 epochs = 3 num_total_steps = len(train_dataloader)*epochs num_warmup_steps = 1000 warmup_proportion = float(num_warmup_steps) / float(num_total_steps) # 0.1 optimizer = AdamW(model.parameters(), lr=lr, correct_bias=True) scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = num_warmup_steps, num_training_steps = num_total_steps) def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) seed_val = 2212 np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) loss_values = [] for epoch_i in range(0, epochs): print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') t0 = time.time() total_loss = 0 model.train() for step, batch in enumerate(train_dataloader): if step % 40 == 0 and not step == 0: print(' Batch {:>5,} of {:>5,}. 
'.format(step, len(train_dataloader))) b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) model.zero_grad() outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) loss = outputs[0] total_loss += loss.item() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() avg_train_loss = total_loss / len(train_dataloader) loss_values.append(avg_train_loss) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print("") print("Running Validation...") t0 = time.time() model.eval() eval_loss, eval_accuracy = 0, 0 nb_eval_steps, nb_eval_examples = 0, 0 for batch in validation_dataloader: batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch with torch.no_grad(): outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) logits = outputs[0] logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() tmp_eval_accuracy = flat_accuracy(logits, label_ids) eval_accuracy += tmp_eval_accuracy nb_eval_steps += 1 # Report the final accuracy for this validation run. print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps)) print("") print("Training complete!") ` ### After this getting error "ValueError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 1024. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape."
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17295/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17294/comments
https://api.github.com/repos/huggingface/transformers/issues/17294/events
https://github.com/huggingface/transformers/pull/17294
1,237,836,090
PR_kwDOCUB6oc436l6g
17,294
[T5] Fix init in TF and Flax for pretraining
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16749 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17294/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17294", "html_url": "https://github.com/huggingface/transformers/pull/17294", "diff_url": "https://github.com/huggingface/transformers/pull/17294.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17294.patch", "merged_at": 1652879337000 }
https://api.github.com/repos/huggingface/transformers/issues/17293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17293/comments
https://api.github.com/repos/huggingface/transformers/issues/17293/events
https://github.com/huggingface/transformers/pull/17293
1,237,818,719
PR_kwDOCUB6oc436iC_
17,293
fix for 17292
{ "login": "nadahlberg", "id": 58701810, "node_id": "MDQ6VXNlcjU4NzAxODEw", "avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nadahlberg", "html_url": "https://github.com/nadahlberg", "followers_url": "https://api.github.com/users/nadahlberg/followers", "following_url": "https://api.github.com/users/nadahlberg/following{/other_user}", "gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions", "organizations_url": "https://api.github.com/users/nadahlberg/orgs", "repos_url": "https://api.github.com/users/nadahlberg/repos", "events_url": "https://api.github.com/users/nadahlberg/events{/privacy}", "received_events_url": "https://api.github.com/users/nadahlberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
Fixes #17292 ## Before submitting - [N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [N/A] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [N/A] Did you write any new necessary tests? Tested locally
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17293", "html_url": "https://github.com/huggingface/transformers/pull/17293", "diff_url": "https://github.com/huggingface/transformers/pull/17293.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17293.patch", "merged_at": 1652991679000 }
https://api.github.com/repos/huggingface/transformers/issues/17292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17292/comments
https://api.github.com/repos/huggingface/transformers/issues/17292/events
https://github.com/huggingface/transformers/issues/17292
1,237,807,457
I_kwDOCUB6oc5Jx3Fh
17,292
Misleading error when from_pretrained fails, says there are flax weights when there aren't
{ "login": "nadahlberg", "id": 58701810, "node_id": "MDQ6VXNlcjU4NzAxODEw", "avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nadahlberg", "html_url": "https://github.com/nadahlberg", "followers_url": "https://api.github.com/users/nadahlberg/followers", "following_url": "https://api.github.com/users/nadahlberg/following{/other_user}", "gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions", "organizations_url": "https://api.github.com/users/nadahlberg/orgs", "repos_url": "https://api.github.com/users/nadahlberg/repos", "events_url": "https://api.github.com/users/nadahlberg/events{/privacy}", "received_events_url": "https://api.github.com/users/nadahlberg/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[]
1,652
1,652
1,652
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.14.0-1031-oem-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): 2.9.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Tiny bug in the from_pretrained code that checks if flax weights are present. ``` from transformers import AutoModel, AutoConfig config = AutoConfig.from_pretrained("roberta-base") model = AutoModel.from_pretrained("some_empty_dir", config=config) ``` ### Expected behavior ```shell Should return: EnvironmentError( f"Error no file named {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME + '.index'} or " f"{FLAX_WEIGHTS_NAME} found in directory {pretrained_model_name_or_path}." ) Instead returns: EnvironmentError( f"Error no file named {WEIGHTS_NAME} found in directory {pretrained_model_name_or_path} but " "there is a file for Flax weights. Use `from_flax=True` to load this model from those " "weights." ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17292/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17291/comments
https://api.github.com/repos/huggingface/transformers/issues/17291/events
https://github.com/huggingface/transformers/pull/17291
1,237,806,692
PR_kwDOCUB6oc436fVz
17,291
Fix CodeParrot training script
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,653
1,653
CONTRIBUTOR
null
This PR fixes some features in the training script of CodeParrot: * use `Pytorch` implementation of `AdamW` instead of `transformers` implementation * add shuffling of the sequences in the batches * fix error in weight decay for LayerNorm * change the tracked loss to the average over batches instead of the main worker loss + compute average over accumulated steps manually for wandb/tensorboard plot * update requirements cc @lvwerra
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17291", "html_url": "https://github.com/huggingface/transformers/pull/17291", "diff_url": "https://github.com/huggingface/transformers/pull/17291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17291.patch", "merged_at": 1653303336000 }
https://api.github.com/repos/huggingface/transformers/issues/17290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17290/comments
https://api.github.com/repos/huggingface/transformers/issues/17290/events
https://github.com/huggingface/transformers/issues/17290
1,237,522,840
I_kwDOCUB6oc5JwxmY
17,290
Accepting torch device objects in the Pipeline init
{ "login": "courtneysprouse", "id": 25102613, "node_id": "MDQ6VXNlcjI1MTAyNjEz", "avatar_url": "https://avatars.githubusercontent.com/u/25102613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/courtneysprouse", "html_url": "https://github.com/courtneysprouse", "followers_url": "https://api.github.com/users/courtneysprouse/followers", "following_url": "https://api.github.com/users/courtneysprouse/following{/other_user}", "gists_url": "https://api.github.com/users/courtneysprouse/gists{/gist_id}", "starred_url": "https://api.github.com/users/courtneysprouse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/courtneysprouse/subscriptions", "organizations_url": "https://api.github.com/users/courtneysprouse/orgs", "repos_url": "https://api.github.com/users/courtneysprouse/repos", "events_url": "https://api.github.com/users/courtneysprouse/events{/privacy}", "received_events_url": "https://api.github.com/users/courtneysprouse/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "@courtneysprouse you're entirely correct, there's no real reason why we don't accept those.\r\n\r\nThe reason why we want to be able to accept `int` is to support seamless `TF` and `PyTorch` (which don't use the same conventions for devices) but the pipeline abstracts that away. But if you want to use native objects, you should always be able to for sure.", "Awesome! Thank you so much!" ]
1,652
1,652
1,652
NONE
null
### Feature request Currently Pipeline init only takes an integer as a device argument in the constructor. It would make it a little easier to interface with and integrate with existing code if it also took a pytorch device object. ### Motivation It's frustrating not be to be able to use the model.device/self.device object when instantiating the pipeline object ### Your contribution I feel like I could do this and I'm happy to take a whack, but I haven't contributed to Transformers yet
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17289/comments
https://api.github.com/repos/huggingface/transformers/issues/17289/events
https://github.com/huggingface/transformers/pull/17289
1,237,503,870
PR_kwDOCUB6oc435cGW
17,289
Better error in the Auto API when a dep is missing
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? As reported in #17266, the error message when an auto API is used to load a class that can't be loaded because a dep is missing is not helpful (it actually got worse since #17250). This PR addresses the problem by returning the associated dummy class when the right class can't be loaded, which means the subsequent call to `from_pretrained` fails with a helpful error message. For instance, the sample given in #17266 will now error with: ``` ConvNextFeatureExtractor requires the PIL library but it was not found in your environment. You can install it with pip: `pip install pillow` ``` Fixes #17266
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17289", "html_url": "https://github.com/huggingface/transformers/pull/17289", "diff_url": "https://github.com/huggingface/transformers/pull/17289.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17289.patch", "merged_at": 1652727347000 }
https://api.github.com/repos/huggingface/transformers/issues/17288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17288/comments
https://api.github.com/repos/huggingface/transformers/issues/17288/events
https://github.com/huggingface/transformers/pull/17288
1,237,497,139
PR_kwDOCUB6oc435aoJ
17,288
Make TrainerHyperParameterSigOptIntegrationTest slow test
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Add @sgugger for a double check :-)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? As discussed offline, make `TrainerHyperParameterSigOptIntegrationTest` a slow test.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17288", "html_url": "https://github.com/huggingface/transformers/pull/17288", "diff_url": "https://github.com/huggingface/transformers/pull/17288.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17288.patch", "merged_at": 1652725089000 }
https://api.github.com/repos/huggingface/transformers/issues/17287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17287/comments
https://api.github.com/repos/huggingface/transformers/issues/17287/events
https://github.com/huggingface/transformers/pull/17287
1,237,443,109
PR_kwDOCUB6oc435O5C
17,287
Improved Documentation for Encoder Decoder models
{ "login": "Threepointone4", "id": 22583613, "node_id": "MDQ6VXNlcjIyNTgzNjEz", "avatar_url": "https://avatars.githubusercontent.com/u/22583613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Threepointone4", "html_url": "https://github.com/Threepointone4", "followers_url": "https://api.github.com/users/Threepointone4/followers", "following_url": "https://api.github.com/users/Threepointone4/following{/other_user}", "gists_url": "https://api.github.com/users/Threepointone4/gists{/gist_id}", "starred_url": "https://api.github.com/users/Threepointone4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Threepointone4/subscriptions", "organizations_url": "https://api.github.com/users/Threepointone4/orgs", "repos_url": "https://api.github.com/users/Threepointone4/repos", "events_url": "https://api.github.com/users/Threepointone4/events{/privacy}", "received_events_url": "https://api.github.com/users/Threepointone4/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Very nice! Thanks for taking the initiative here @Threepointone4. Left some suggestions to make the text a bit clearer :-) Let me know what you think!\r\n\r\nThese make sense, Thanks for the edits. \r\n", "@patrickvonplaten what are the next steps in this ? Let me know if i have to add anything.", "@Threepointone4, I commited all the suggestions after having read your comment. I think we are close to merging this PR now :-) \r\n\r\nCould you please in a last step add this file to our documentation tests? Those ensure that the code actually runs correctly.\r\nAll you have to do is to add the name of the doc file to this file: https://github.com/huggingface/transformers/blob/38ddab10da90e64297a37c0719ed9309e693317a/utils/documentation_tests.txt#L10 \r\n\r\nFor more information on the doc tests you can read this document: \r\nhttps://github.com/huggingface/transformers/tree/main/docs#testing-documentation-examples", "At the moment it seems like the doc tests would fail with the following error message:\r\n\r\n```\r\n109 ... return_tensors=\"pt\", \r\n110 ... ).input_ids \r\n111 \r\n112 >>> labels = tokenizer( \r\n113 ... \"the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2\r\n metres ( 17 ft ) and is the second tallest free - standing structure in paris.\", \r\n114 ... return_tensors=\"pt\", \r\n115 ... ).input_ids \r\n116 \r\n117 >>> # the forward function automatically creates the correct decoder_input_ids \r\n118 >>> loss = model(input_ids=input_ids, labels=labels).loss \r\nUNEXPECTED EXCEPTION: ValueError(\"Make sure to set the decoder_start_token_id attribute of the model's configuration.\") \r\n```\r\n\r\nCould you try to correct it? :-)", "> At the moment it seems like the doc tests would fail with the following error message:\r\n> \r\n> ```\r\n> 109 ... return_tensors=\"pt\", \r\n> 110 ... ).input_ids \r\n> 111 \r\n> 112 >>> labels = tokenizer( \r\n> 113 ... \"the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2\r\n> metres ( 17 ft ) and is the second tallest free - standing structure in paris.\", \r\n> 114 ... return_tensors=\"pt\", \r\n> 115 ... ).input_ids \r\n> 116 \r\n> 117 >>> # the forward function automatically creates the correct decoder_input_ids \r\n> 118 >>> loss = model(input_ids=input_ids, labels=labels).loss \r\n> UNEXPECTED EXCEPTION: ValueError(\"Make sure to set the decoder_start_token_id attribute of the model's configuration.\") \r\n> ```\r\n> \r\n> Could you try to correct it? :-)\r\n\r\nHey @Threepointone4, \r\n\r\nThanks a lot for having added the doc to the doc tests. Could you quickly check that they work as expected? E.g. I'm currently getting the above error when running the docs. Thanks!", "> > At the moment it seems like the doc tests would fail with the following error message:\r\n> > ```\r\n> > 109 ... return_tensors=\"pt\", \r\n> > 110 ... ).input_ids \r\n> > 111 \r\n> > 112 >>> labels = tokenizer( \r\n> > 113 ... \"the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2\r\n> > metres ( 17 ft ) and is the second tallest free - standing structure in paris.\", \r\n> > 114 ... return_tensors=\"pt\", \r\n> > 115 ... ).input_ids \r\n> > 116 \r\n> > 117 >>> # the forward function automatically creates the correct decoder_input_ids \r\n> > 118 >>> loss = model(input_ids=input_ids, labels=labels).loss \r\n> > UNEXPECTED EXCEPTION: ValueError(\"Make sure to set the decoder_start_token_id attribute of the model's configuration.\") \r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Could you try to correct it? :-)\r\n> \r\n> Hey @Threepointone4,\r\n> \r\n> Thanks a lot for having added the doc to the doc tests. Could you quickly check that they work as expected? E.g. I'm currently getting the above error when running the docs. Thanks!\r\n\r\n@patrickvonplaten \r\nI am currently running this cmd \r\n` pytest --doctest-modules docs/source/en/model_doc/encoder-decoder.mdx -sv --doctest-glob=\"*.mdx\"\r\n`\r\nIs this proper way to do that? I am getting different error, so just double checking .", "Hey @Threepointone4,\r\n\r\nYes that's the correct command, but note that you need to run this command before-hand: \r\n```python utils/prepare_for_doc_test.py src docs```\r\nand then you can run:\r\n```pytest --doctest-modules docs/source/en/model_doc/encoder-decoder.mdx -sv --doctest-glob=\"*.mdx\"```\r\nAfter the command you should run: \r\n```python utils/prepare_for_doc_test.py src docs --remove_new_line``` once more\r\nto re-convert the example doc strings correctly :-)\r\n\r\nWhat's easier however is to do the following, add the following code into a `doc_test` file:\r\n```\r\n#!/usr/bin/env bash\r\ndoc_file=${1}\r\n\r\npython utils/prepare_for_doc_test.py src docs &>/dev/null\r\npytest -sv --doctest-modules ${doc_file} --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\npython utils/prepare_for_doc_test.py src docs --remove_new_line &>/dev/null\r\n```\r\n\r\nand then run:\r\n`doc_test <path/to/python/file>` -> this will automatically prepare the doc tests before hand :-)", "@patrickvonplaten Sorry for the delay. \r\nI have ran the cmd's you have shared and was able to reproduce the error in my local.\r\n```\r\nmodel.config.decoder_start_token_id = tokenizer.cls_token_id\r\nmodel.config.pad_token_id = tokenizer.pad_token_id\r\n```\r\nThese are changes i need to add right ? I will re-base also with original repository. ", "@patrickvonplaten what are the next steps in this ? Let me know if i have to add anything. \r\n\r\n", "Hey @Threepointone4,\r\n\r\nIt sadly looks a bit like the git history was messed up in this PR (maybe a `git rebase` was incorrectly used?)", "@patrickvonplaten I accidentally git rebase with the branch and main separately and pushed it. Can that be an issue ? \r\nI was waiting for your input on this. ", "I see! Sorry it's quite hard to recover the PR from this :sweat_smile: Could you maybe copy-paste the files that were changed to a new PR and close this one? :-)", "> I see! Sorry it's quite hard to recover the PR from this sweat_smile Could you maybe copy-paste the files that were changed to a new PR and close this one? :-)\r\n\r\nSure @patrickvonplaten , I will do that it will be easier.\r\nLink for the new PR : [link](https://github.com/huggingface/transformers/pull/17815)", "Thanks a lot!" ]
1,652
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? This PR improves the documentation of encoder decoder model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Issues [link](https://github.com/huggingface/transformers/issues/16135) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17287/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17287", "html_url": "https://github.com/huggingface/transformers/pull/17287", "diff_url": "https://github.com/huggingface/transformers/pull/17287.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17287.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17286/comments
https://api.github.com/repos/huggingface/transformers/issues/17286/events
https://github.com/huggingface/transformers/pull/17286
1,237,384,735
PR_kwDOCUB6oc435CQp
17,286
Add Visual Question Answering (VQA) pipeline
{ "login": "sijunhe", "id": 11987277, "node_id": "MDQ6VXNlcjExOTg3Mjc3", "avatar_url": "https://avatars.githubusercontent.com/u/11987277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sijunhe", "html_url": "https://github.com/sijunhe", "followers_url": "https://api.github.com/users/sijunhe/followers", "following_url": "https://api.github.com/users/sijunhe/following{/other_user}", "gists_url": "https://api.github.com/users/sijunhe/gists{/gist_id}", "starred_url": "https://api.github.com/users/sijunhe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sijunhe/subscriptions", "organizations_url": "https://api.github.com/users/sijunhe/orgs", "repos_url": "https://api.github.com/users/sijunhe/repos", "events_url": "https://api.github.com/users/sijunhe/events{/privacy}", "received_events_url": "https://api.github.com/users/sijunhe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi,\r\n\r\nCan you first make sure that your branch is up to date with the main branch? You can achieve that as follows:\r\n\r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit fetch upstream\r\ngit rebase upstream/main\r\n```", "Hi @sijunhe ,\r\n\r\nThanks a lot, I think for the last test (consistency), usually `make fixup` (make sure you have the right versions with `pip install -e .[dev]` if you want. It works for me most fo the time (don't include files you didn't touch I think).", "> Looks like a solid PR ! Thank you for this.\r\n> \r\n> The main thing I would think is actually remove some of the flexibility introduced here (modeling QA pipeline I think). As it exists mostly as legacy and is usually preventing some modifications later on because we can't do backward compatibilty breaking.\r\n> \r\n> Aiming for a pure simple `pipe(image=image, question=question)` and `pipe({\"image\": image, \"question\": question})` should be enough and not need a full blown class to support.\r\n> \r\n> The main reason for `{\"image\": image, \"question\": question}` is to support datasets which return single item and the reason for `pipe(image=image, question=question)` is just simpler for simple Python code (more natural than the dict let's say :)).\r\n> \r\n> This will natively support `List[{\"image\": image, \"question\": question}]` which is handled by the parent class already (and as already mentionned datasets).\r\n> \r\n> The code could look like something like\r\n> \r\n> ```python\r\n> def __call__(self, image, question=None, **kwargs):\r\n> if isinstance(image, (PIL.Image...)) and question is not None:\r\n> # Nice pure Python support.\r\n> inputs_ = {\"image\": image, \"question\": question}\r\n> else:\r\n> # Suppose this is correct dict or list.\r\n> inputs_ = image\r\n> ```\r\n> \r\n> What do you think ?\r\n\r\nThanks for the elaborate reply! I agree that the class is not necessary. This is my first PR so I wasn't sure what kind of inputs needed to be handled and I was mostly following the QA pipeline. :)\r\nHopefully all the tests pass now and we can land this soon!", "This PR is fine for me !\r\n\r\nI will let a core maintainer approve this.", "@LysandreJik kindly pinging you for a final review", "Hmm seems like the CI is flaky and it is failing now after I committed the trivial docstring change suggested by @NielsRogge ", "> Hmm seems like the CI is flaky and it is failing now after I committed the trivial docstring change suggested by @NielsRogge\r\n\r\nDon't hesitate to rebase on `main` too, might have been fixed within the code itself since .", "Hey folks, seems like the last remaining issue was the use of community models as the default in the pipeline and the conclusion was to \"specify a revision\". However, I am not sure how to specify a version of the model. Any suggestions here? @patrickvonplaten @LysandreJik ", "> issue was the use of community models as the default in the pipeline and the conclusion was to \"specify a revision\". However, I am not sure how to specify a version of the model. Any suggestions here? @patrickvonplaten @LysandreJik\r\n\r\nHey @sijunhe,\r\n\r\nVery sorry to keep you waiting here! \r\n\r\nI'm in favor of merging this PR as is and to open a follow-up PR that specifies a revision for all default pipeline models (happy to take care of this early next week) \r\n\r\n@LysandreJik @sgugger would this be ok for you? ", "@NielsRogge also good for you?", "Thanks folks. I think we are ready to merge!", "Thanks again for all you work on this!", "@sijunhe great work! I will start the work on the widgets 👍 ", "@sijunhe the widgets are live on the hub! https://huggingface.co/dandelin/vilt-b32-finetuned-vqa\r\n<img width=\"639\" alt=\"Screenshot 2022-07-29 at 15 49 21\" src=\"https://user-images.githubusercontent.com/11827707/182323267-e7eab74e-5d88-46e2-8ce6-b3409d21926d.png\">\r\n\r\n" ]
1,652
1,659
1,655
CONTRIBUTOR
null
# What does this PR do? Add Visual Question Answering (VQA) pipeline, as described in #17208. The pipeline currently defaults to [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt), which is also the only model it supports for now. It also adds all the necessary class such as `AutoModelForVisualQuestionAnswering`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge @LysandreJik @Narsil @mishig25 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17286", "html_url": "https://github.com/huggingface/transformers/pull/17286", "diff_url": "https://github.com/huggingface/transformers/pull/17286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17286.patch", "merged_at": 1655120984000 }
https://api.github.com/repos/huggingface/transformers/issues/17285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17285/comments
https://api.github.com/repos/huggingface/transformers/issues/17285/events
https://github.com/huggingface/transformers/issues/17285
1,237,349,459
I_kwDOCUB6oc5JwHRT
17,285
TF: all models can run in Graph mode
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "cc @Rocketknight1 and @ydshieh -- link related issues as you see them plz :D ", "Strong +1 on this, since compiling in graph mode is needed for all the other downstream things we want to do (TF Serving, tf.js, etc.)", "Link a potentially related issue as early as possible, as I have a poor memory capability\r\n\r\nhttps://github.com/huggingface/transformers/pull/16886#issuecomment-1113448810", "[#17233](https://github.com/huggingface/transformers/issues/17233)", "> [#17233](https://github.com/huggingface/transformers/issues/17233)\r\n\r\nTo add another info: I found we have something like the following in `TFWav2Vec2Encoder`\r\n```\r\n# add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)\r\ndropout_probability = np.random.uniform(0, 1)\r\nif training and (dropout_probability < self.config.layerdrop): # skip the layer\r\n continue\r\n```\r\nI think we should avoid this, right?", "@ydshieh it's undesirable, but I think TF can handle it (with potential retracing if the parameters change). Will have to double-check, though", "I do have a general fear with things like `LayerDrop`, which is that XLA cannot accept any data-dependent computation paths. In other words, you cannot have a scenario where a layer is only run if a random number, generated by the GPU for each sample/batch, is over a threshold value. You can only implement something like this by running the layer every time with a residual connection, and multiplying the layer outputs by 0 if it is going to be \"dropped\". Doing this, of course, has no performance benefit at all.\r\n\r\nIn code like the above, the number is generated by `numpy`, which will be run once at the point of graph tracing. Therefore, that layer will either be skipped *always* or *never*. 
Correct code would use `tf.random` instead, which will insert the random generation into the graph correctly, but then the `LayerDrop` would cause XLA tracing to fail, and I'm not sure about regular Graph mode tracing.", "> you cannot have a scenario where a layer is only run if a random number\r\n\r\n`tf.cond` should be able to handle it, I believe 🤔 As you mentioned, with further changes as well. This is going to be fun! \r\n\r\nBonus: the just-released TF 2.9 compiles the forward pass with XLA when `Model.compile(jit_compile=True)` is called, so we might be able to get cool performance numbers out of sorting this on as many models as we can 😎 ", "Ah, I'm completely wrong - I was confusing two of the XLA requirements. `tf.cond()` will solve it all, I'm sorry!", "Hey has this been fixed? ", "Hey @ahmedlone127 👋 It hasn't been fixed yet.", "Okay thanks :) ", "#18153 Ensures all our models can be saved as `SavedModel`, with the exception of CLIP. I'm closing this issue as having a `SavedModel` implies this issue is solved \r\n\r\nKudos to @amyeroberts for smashing it!" ]
1,652
1,658
1,658
MEMBER
null
### Feature request Our models are executed in Eager mode by default. Eager mode is more permissive than Graph mode, and we have models that don't work in Graph mode at the moment. This (self) feature request is being added to bring visibility to the problem, link issues, and track progress. ### Motivation It is a requirement for several downstream uses, like XLA-accelerated forward passes or TF serving. ### Your contribution Adding a general test to ensure all existing and new models are compatible with graph mode. I will also attempt to fix as many related issues as I can.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17285/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17284/comments
https://api.github.com/repos/huggingface/transformers/issues/17284/events
https://github.com/huggingface/transformers/pull/17284
1,237,316,685
PR_kwDOCUB6oc434zmO
17,284
Bug fix: move tensors to GPU in GeneralizedRCNN.inference()
{ "login": "yutanakamura-tky", "id": 51123494, "node_id": "MDQ6VXNlcjUxMTIzNDk0", "avatar_url": "https://avatars.githubusercontent.com/u/51123494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yutanakamura-tky", "html_url": "https://github.com/yutanakamura-tky", "followers_url": "https://api.github.com/users/yutanakamura-tky/followers", "following_url": "https://api.github.com/users/yutanakamura-tky/following{/other_user}", "gists_url": "https://api.github.com/users/yutanakamura-tky/gists{/gist_id}", "starred_url": "https://api.github.com/users/yutanakamura-tky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yutanakamura-tky/subscriptions", "organizations_url": "https://api.github.com/users/yutanakamura-tky/orgs", "repos_url": "https://api.github.com/users/yutanakamura-tky/repos", "events_url": "https://api.github.com/users/yutanakamura-tky/events{/privacy}", "received_events_url": "https://api.github.com/users/yutanakamura-tky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17284). All of your documentation changes will be reflected on that endpoint.", "Pinging @eltoto1219 as the author of that research project!", "@LysandreJik @eltoto1219 Thank you! Please let me know if there is anything required to go a step further.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,656
1,656
NONE
null
# What does this PR do? This PR fixes a bug in `research_projects/lxmert`. Currently, visual feature extraction using `GeneralizedRCNN` fails when `GeneralizedRCNN` is moved onto GPU because some intermediate outputs in `GeneralizedRCNN.inference()` are generated and left on CPU. I fixed the bug by adding `.to(<current_device>)` to the involved intermediate outputs. I wonder if this PR requires addition of tests because `research_projects/lxmert` does not seems to have tests, so could you please inform me about proper action? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17284/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17284", "html_url": "https://github.com/huggingface/transformers/pull/17284", "diff_url": "https://github.com/huggingface/transformers/pull/17284.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17284.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17283/comments
https://api.github.com/repos/huggingface/transformers/issues/17283/events
https://github.com/huggingface/transformers/issues/17283
1,237,310,243
I_kwDOCUB6oc5Jv9sj
17,283
GPT-neo generate is ignoring passed position ids
{ "login": "SwordShieldMouse", "id": 10066280, "node_id": "MDQ6VXNlcjEwMDY2Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/10066280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SwordShieldMouse", "html_url": "https://github.com/SwordShieldMouse", "followers_url": "https://api.github.com/users/SwordShieldMouse/followers", "following_url": "https://api.github.com/users/SwordShieldMouse/following{/other_user}", "gists_url": "https://api.github.com/users/SwordShieldMouse/gists{/gist_id}", "starred_url": "https://api.github.com/users/SwordShieldMouse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SwordShieldMouse/subscriptions", "organizations_url": "https://api.github.com/users/SwordShieldMouse/orgs", "repos_url": "https://api.github.com/users/SwordShieldMouse/repos", "events_url": "https://api.github.com/users/SwordShieldMouse/events{/privacy}", "received_events_url": "https://api.github.com/users/SwordShieldMouse/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
open
false
null
[]
[ "Hey @SwordShieldMouse,\r\n\r\nnote that `position_ids` does not have a huge influence on the output. It is not very surprising to me that the generated output ids (that we generated by taking the `argmax(...)` logits) are the same. It essentially just means that different positions ids still lead to the model outputting the same logit as the highest logit. It doesn't mean that the logits are the same. E.g. if you execute the following code, you see that the logit outputs `a` and `b` differ.\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport torch \r\n\r\ntext = \"hi there: \"\r\nmodelname = \"EleutherAI/gpt-neo-125M\"\r\nmodel = AutoModelForCausalLM.from_pretrained(modelname)\r\ntokenizer = AutoTokenizer.from_pretrained(modelname)\r\n\r\ninputs = tokenizer(text, return_tensors =\"pt\")\r\ninputs[\"position_ids\"] = inputs[\"attention_mask\"].cumsum(-1)\r\nprint(inputs)\r\na = model(**inputs).logits\r\n\r\n\r\ninputs[\"position_ids\"] = inputs[\"position_ids\"] + 10\r\nprint(inputs)\r\nb = model(**inputs).logits\r\n\r\nprint(\"a\", a.abs().sum())\r\nprint(\"b\", b.abs().sum())\r\n```", "Yes, I agree that that the logits are different in your example because the model `forward` indeed takes in the position ids. My point is that when the model `forward` is called through the `generate` function, the position IDs are not passed. Please [see here](https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L693). If both an attn mask and position ids are passed, the position ids are set to `None`. The resulting dict, with `None` position ids, is passed to the model forward [here](https://github.com/huggingface/transformers/blob/4710702837a9262e730b798a30c0609e322d02ed/src/transformers/generation_utils.py#L1675). \r\n\r\nIt's true that one shouldn't expect `generate` to output different results all the time. 
However, for a paper I'm working on atm that involves modifying position ids, and whose code would have been too long to put here, changing the position ids results in the same generation 100% of the time for every sample. It's only when I remove the `else` block on line 699 and unindent the `if past` block that I get different results for different position ids, for every sample.", "@patrickvonplaten Here is a modified snippet with my proposed change to the code, which indeed gives different generation results.\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport torch \r\n\r\nmodelname = \"EleutherAI/gpt-neo-125M\"\r\nmodel = AutoModelForCausalLM.from_pretrained(modelname)\r\ntokenizer = AutoTokenizer.from_pretrained(modelname)\r\n\r\ndef prepare_inputs_for_generation(input_ids, past=None, **kwargs):\r\n token_type_ids = kwargs.get(\"token_type_ids\", None)\r\n # only last token for inputs_ids if past is defined in kwargs\r\n if past:\r\n input_ids = input_ids[:, -1].unsqueeze(-1)\r\n if token_type_ids is not None:\r\n token_type_ids = token_type_ids[:, -1].unsqueeze(-1)\r\n\r\n attention_mask = kwargs.get(\"attention_mask\", None)\r\n position_ids = kwargs.get(\"position_ids\", None)\r\n\r\n if attention_mask is not None and position_ids is None:\r\n # create position_ids on the fly for batch generation\r\n position_ids = attention_mask.long().cumsum(-1) - 1\r\n position_ids.masked_fill_(attention_mask == 0, 1)\r\n if past:\r\n position_ids = position_ids[:, -1].unsqueeze(-1)\r\n # else:\r\n # position_ids = None\r\n return {\r\n \"input_ids\": input_ids,\r\n \"past_key_values\": past,\r\n \"use_cache\": kwargs.get(\"use_cache\"),\r\n \"position_ids\": position_ids,\r\n \"attention_mask\": attention_mask,\r\n \"token_type_ids\": token_type_ids,\r\n }\r\n\r\ntexts = [\r\n \"what is your name?\",\r\n \"2 + 2 = \",\r\n \"what is the capital of france?\",\r\n \"astnhoeruchau2918uh93u\",\r\n \"how many humans are there in the world?\",\r\n 
\"my favourite colour is \",\r\n \"hi there: \"\r\n]\r\nres = []\r\n\r\nfor text in texts:\r\n inputs = tokenizer(text, return_tensors =\"pt\")\r\n inputs[\"position_ids\"] = inputs[\"attention_mask\"].cumsum(-1)\r\n # print(inputs)\r\n a = model.generate(**inputs)\r\n\r\n inputs[\"position_ids\"] = inputs[\"position_ids\"] + 10\r\n # print(inputs)\r\n b = model.generate(**inputs)\r\n\r\n res.append((a == b).all())\r\n\r\n# implement the fix\r\nmodel.prepare_inputs_for_generation = prepare_inputs_for_generation\r\n\r\nfixed_res = []\r\nfor text in texts:\r\n inputs = tokenizer(text, return_tensors =\"pt\")\r\n inputs[\"position_ids\"] = inputs[\"attention_mask\"].cumsum(-1)\r\n # print(inputs)\r\n a = model.generate(**inputs)\r\n\r\n inputs[\"position_ids\"] = inputs[\"position_ids\"] + 10\r\n # print(inputs)\r\n b = model.generate(**inputs)\r\n\r\n fixed_res.append((a == b).all())\r\n\r\n# these are different but should be the same!\r\nprint(res)\r\nprint(fixed_res)\r\n```\r\nResults\r\n```\r\nres = [tensor(True), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True)]\r\nfixed_res = [tensor(False), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True)]\r\n```\r\n\r\nThe difference is slight, but exists.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Bump.\n\nOn Thu., Jun. 16, 2022, 11:02 a.m. github-actions[bot], <\n***@***.***> wrote:\n\n> This issue has been automatically marked as stale because it has not had\n> recent activity. 
If you think this still needs to be addressed please\n> comment on this thread.\n>\n> Please note that issues that do not follow the contributing guidelines\n> <https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md>\n> are likely to be ignored.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17283#issuecomment-1157763956>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACMZS2BOVYUJBONE7562IM3VPM6YBANCNFSM5WB2QOEQ>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "Hey @SwordShieldMouse,\r\n\r\nSorry for answering so late! Upon having taken a second look, you're right those two lines should not exist - i.e. we should ideally delete them completely - great job spotting the bug and sorry for the misunderstanding before - I now see what is meant! \r\n\r\nWould you mind opening a PR to fix it? Otherwise happy to do so myself :-)", "Great!\n\nI'm busy for the rest of the week so I could do it next week, but I'm happy\nif you want to get it done this week :)\n\nOn Mon, 20 Jun 2022 at 18:19, Patrick von Platen ***@***.***>\nwrote:\n\n> Hey @SwordShieldMouse <https://github.com/SwordShieldMouse>,\n>\n> Sorry for answering so late! Upon having taken a second look, you're right\n> those two lines should not exist - i.e. we should ideally delete them\n> completely - great job spotting the bug and sorry for the misunderstanding\n> before - I now see what is meant!\n>\n> Would you mind opening a PR to fix it? 
Otherwise happy to do so myself :-)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17283#issuecomment-1160685261>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACMZS2EZU7EKBTBCEEYD2T3VQCRZBANCNFSM5WB2QOEQ>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n\n\n-- \nAlan C.\n", "Awesome - happy to wait a week! More than happy though to take over the issue if you find that you won't find time the next week(s) :-)", "Hi Patrick,\n\nSorry for getting back to you late. It turns out I won't have time after\nall :( Would you be able to do the fix?\n\nOn Wed., Jun. 22, 2022, 12:12 a.m. Patrick von Platen, <\n***@***.***> wrote:\n\n> Awesome - happy to wait a week! More than happy though to take over the\n> issue if you find that you won't find time the next week(s) :-)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17283#issuecomment-1162450396>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACMZS2AH7OOHB6KX67EPFILVQJD4DANCNFSM5WB2QOEQ>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "Hey @SwordShieldMouse,\r\n\r\nStarted a PR here: https://github.com/huggingface/transformers/pull/18048 - it's actually much more work then I thought so will take a while maybe (cc @gante)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "(being worked on)" ]
1,652
1,668
null
NONE
null
### System Info ```shell python version: 3.9 transformers version: 4.18 ``` ### Who can help? @patil-suraj @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch text = "hi there: " modelname = "EleutherAI/gpt-neo-125M" model = AutoModelForCausalLM.from_pretrained(modelname) tokenizer = AutoTokenizer.from_pretrained(modelname) inputs = tokenizer(text, return_tensors ="pt") inputs["position_ids"] = inputs["attention_mask"].cumsum(-1) print(inputs) a = model.generate(**inputs) inputs["position_ids"] = inputs["position_ids"] + 10 print(inputs) b = model.generate(**inputs) # a and b should be different because the position ids are different print(a) print(b) ``` the result ``` a = tensor([[5303, 612, 25, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220]]) b = tensor([[5303, 612, 25, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220, 220]]) ``` ### Expected behavior ```shell The outputs a and b should (almost always) be different because different position ids should be passed to the model's forward function, resulting in different activations. The issue seems to be here: https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L699 Specifically, the else statement is such that if both an attention mask and position ids are passed, the position ids are erased. In such a scenario, the default position ids in the model's forward function (https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L537) are used, rather than the passed-in position ids. 
Proposed fix: remove the `else` block on line 699 and unindent the `if past` block. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17283/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/17282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17282/comments
https://api.github.com/repos/huggingface/transformers/issues/17282/events
https://github.com/huggingface/transformers/pull/17282
1,237,294,058
PR_kwDOCUB6oc434ut5
17,282
[Tests] Fix slow opt tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey, will review that ASAP (~1-2h) " ]
1,652
1,652
1,652
MEMBER
null
Fixes failing circle ci tests: ``` tests/models/opt/test_modeling_opt.py::OPTEmbeddingsTest::test_logits tests/models/opt/test_modeling_opt.py::OPTModelIntegrationTests::test_inference_no_head ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17282", "html_url": "https://github.com/huggingface/transformers/pull/17282", "diff_url": "https://github.com/huggingface/transformers/pull/17282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17282.patch", "merged_at": 1652736260000 }
https://api.github.com/repos/huggingface/transformers/issues/17281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17281/comments
https://api.github.com/repos/huggingface/transformers/issues/17281/events
https://github.com/huggingface/transformers/pull/17281
1,237,224,329
PR_kwDOCUB6oc434fl_
17,281
Add Deformable DETR
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Addressed most comments. I would like to have:\r\n\r\n- [ ] @Narsil reviewing the initialization of the model using the custom CUDA kernel\r\n- [ ] @LysandreJik (and possibly @Narsil) help me out regarding making the CI green for a model that only runs on GPU. Should we define a custom CI job for this particular model?\r\n- [x] @NouamaneTazi will take care of the remaining comments regarding clearer variable names/docstrings, as he has a detailed understanding of this model.", "> @LysandreJik (and possibly @Narsil) help me out regarding making the CI green for a model that only runs on GPU. Should we define a custom CI job for this particular model?\r\n\r\nWe have a `require_torch_gpu` decorator. Would it help in that case? We could add it to the model tester as a whole, if the model needs GPU to run.", "@Narsil there's an issue with the pipeline tests, I added `DeformableDetrForObjectDetection` to the object detection mapping, but this model requires the custom CUDA kernel to be run.\r\n\r\nAlso, CircleCI reports the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"utils/check_repo.py\", line 764, in <module>\r\n check_repo_quality()\r\n File \"utils/check_repo.py\", line 753, in check_repo_quality\r\n check_models_are_in_init()\r\n File \"utils/check_repo.py\", line 305, in check_models_are_in_init\r\n for module in get_model_modules():\r\n File \"utils/check_repo.py\", line 267, in get_model_modules\r\n modeling_module = getattr(model_module, submodule)\r\n File \"/home/circleci/.local/lib/python3.7/site-packages/transformers/utils/import_utils.py\", line 866, in __getattr__\r\n value = self._get_module(name)\r\n File \"/home/circleci/.local/lib/python3.7/site-packages/transformers/utils/import_utils.py\", line 883, in _get_module\r\n ) from e\r\nRuntimeError: Failed to import transformers.models.deformable_detr.modeling_deformable_detr because of the following error (look up to see its traceback):\r\n[Errno 2] No such file or directory: 
'/home/circleci/.local/lib/python3.7/site-packages/transformers/models/deformable_detr/custom_kernel/vision.cpp'\r\n```\r\nI might need some help with this.", "> @Narsil there's an issue with the pipeline tests, I added DeformableDetrForObjectDetection to the object detection mapping, but this model requires the custom CUDA kernel to be run.\r\n\r\nThe generic tests will always run the model on CPU, so the best way is to discard this model from the test.\r\n\r\nDoing `if isinstance(pipeline.models, Deformable...): self.skipTest(\"This model requires a custom CUDA kernel and is NOT implemented for CPU\")` should be enough IMO (we know how to update later when needed).\r\n\r\nI would also add a slow GPU test that tries to use the pipeline directly if that's OK for the CI.\r\n\r\n```\r\n@require_gpu\r\n@require_torch\r\n@slow\r\ndef test_slow(self):\r\n pipe = pipeline(model=\"hf-internal-testing/....\", device=0)\r\n out = pipe(....)\r\n self.assertEqual(out, {....})\r\n ```\r\n Does that make sense ? If it's hard to have a GPU test (not sure we ever call those anyway for pipelines, no @LysandreJik then we can do without but even if it's not auto tested there's value in creating the test IMO (it will run on local machines that try to run the test)", "As for the missing file, It's probably because the `setup.py` doesn't properly include the file when installing `transformers`.\r\n\r\nI don't really have good pointers for that since you seem to have added the correct line. The main advice would be to do \r\n`python -m build` and looking at the output to check that the proper `.cpp`, `.h` `.cuh` are properly included in the build folder. 
(Installing from source with `pip install -e .` won't work as it always copy all the files I think so you won't see how the built version fails, maybe it does I am unsure)", "OK, so looking at why the custom kernel fails to build:\r\n```\r\n_ ERROR collecting tests/models/deformable_detr/test_modeling_deformable_detr.py _\r\nsrc/transformers/utils/import_utils.py:893: in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n/usr/local/lib/python3.7/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n<frozen importlib._bootstrap>:1006: in _gcd_import\r\n ???\r\n<frozen importlib._bootstrap>:983: in _find_and_load\r\n ???\r\n<frozen importlib._bootstrap>:967: in _find_and_load_unlocked\r\n ???\r\n<frozen importlib._bootstrap>:677: in _load_unlocked\r\n ???\r\n<frozen importlib._bootstrap_external>:728: in exec_module\r\n ???\r\n<frozen importlib._bootstrap>:219: in _call_with_frames_removed\r\n ???\r\nsrc/transformers/models/deformable_detr/modeling_deformable_detr.py:49: in <module>\r\n MSDA = load_cuda_kernels()\r\nsrc/transformers/models/deformable_detr/load_custom.py:45: in load_cuda_kernels\r\n \"-D__CUDA_NO_HALF2_OPERATORS__\",\r\n../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1156: in load\r\n keep_intermediates=keep_intermediates)\r\n../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1367: in _jit_compile\r\n is_standalone=is_standalone)\r\n../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1438: in _write_ninja_file_and_build_library\r\n verify_ninja_availability()\r\n../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1494: in verify_ninja_availability\r\n raise RuntimeError(\"Ninja is required to load C++ extensions\")\r\nE RuntimeError: Ninja is required to load C++ extensions\r\n```\r\n\r\nThis occurs quite often. 
The build is missing `ninja`.\r\n\r\nTry adding `pip install ninja` to the CircleCI job workflow and see if it solves the problem. Please ping me if it doesn't.", "Additionally, if we start having custom cuda kernels that are enabled by default we must include `ninja` in our main python dependencies in `setup.py`.", "so installing ninja did the trick of overcoming the initial hurdle. as commented above - if we make it work it should go into `setup.py`'s dependencies and not the job file - but for now this is good enough while we figure out how to make it work.\r\n\r\nNow it's failing:\r\n\r\n```\r\nE OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.\r\n```\r\n\r\nbecause CircleCI is cpu-only and doesn't have `cuda` installed by default.\r\n\r\nBasically your custom cuda kernel requires `cuda` installed to build. You don't have to have a gpu to build it, but it needs to be installed.\r\n\r\n@ydshieh, do you by chance know if we are planning to get `cuda` installed on CircleCI? it's easy to do via `apt` directly from nvidia with .deb packages. Except it's not fast if it's reinstalled on every job run.\r\n\r\n@NielsRogge, does this model work on CPU at all? i.e. is there a fallback to a non-custom kernel in the absence of GPUs? If there is, then the code should be modified to verify if there is a CUDA environment available and if it's not available not to load the custom kernel and everything will just work.\r\n", "The model only runs on GPU and requires the custom kernel. The authors do provide a CPU version [here](https://github.com/fundamentalvision/Deformable-DETR/blob/11169a60c33333af00a4849f1808023eba96a931/models/ops/functions/ms_deform_attn_func.py#L41), but it's for \"debugging and testing purposes only\".", "The current CircleCI jobs use the docker image `circleci/python:3.7`. 
If we decide to install `cuda`, I think we can build a custom docker image based on it.", "If it is not too much work to make running on both CPU/GPU work (considering the authors provide some implementation), I would advocate doing it - also mainly for \"debugging and testing purposes only\".", "> If it is not too much work to make running on both CPU/GPU work (considering the authors provide some implementation), I would advocate doing it - also mainly for \"debugging and testing purposes only\".\r\n\r\nHmm I looked into the code, the problem is that their CPU version doesn't accept 2 arguments (`level_start_index` and `im2col_step`) which the CUDA version has, and which are required for correct computation. Hence, I don't think it's possible to have a CPU version of it in the library. The authors also explicitly [indicate](https://github.com/fundamentalvision/Deformable-DETR/blob/11169a60c33333af00a4849f1808023eba96a931/models/ops/src/cpu/ms_deform_attn_cpu.cpp#L26) that the layer isn't implemented on CPU.", "1. OK, so if the CPU version is not the same then we won't be testing the actual modeling code - not a good idea. let's stick to testing the actual GPU modeling code.\r\n\r\n2. You're setting a new precedent with this model, @NielsRogge - so we need to decide how to deal with such models, so let's bring @LysandreJik and @sgugger to this discussion - I wonder if we should perhaps discuss this in a separate RFC Issue since it will probably impact other similar models in the future.\r\n\r\nBut we need:\r\n\r\na. the modeling files not to fail on `import` in an environment that lacks `cuda` installed - so probably either using the earlier suggestion of moving the model loading into `__init__` (less ideal) or using `try/except` and recovering gracefully if the cuda env is not available.\r\n\r\nb. 
the tests for such a model should all be decorated with `@require_torch_gpu` - so it might be tricky with common tests - I wonder if perhaps decorating the test class with `@require_torch_gpu` would do the trick.\r\n\r\nc. the testing will have to happen on our CI that has GPUs. which means no \"real-time\" testing.", "> b. the tests for such model should all be decorated with @require_torch_gpu - so it might be tricky with common tests - I wonder if perhaps decorating the test class with @require_torch_gpu would do the trick.\r\n\r\nI've done this as seen here: https://github.com/NielsRogge/transformers/commit/ec61d727615d9cff93df59adfd3dd40091401658.", "Pinging @Narsil regarding excluding this model from the pipeline tests.", "Hi @NielsRogge ,\r\n\r\nThe best location to do this is in `tests/pipelines/test_pipelines_xxxx.py` and simply add some logic in the `get_test_pipeline` function.\r\n\r\nBut the tests currently seem to be passing, so is this really necessary ?", "_The documentation is not available anymore as the PR was closed or merged._", "PR is ready for review, by adding the model to the mappings this happens:\r\n```\r\nERROR tests/pipelines/test_pipelines_feature_extraction.py - RecursionError: ...\r\nERROR tests/pipelines/test_pipelines_object_detection.py - RecursionError: ma...\r\n!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!\r\n```", "@sgugger that didn't seem to fix the recursion error.", "I never said it would.\r\n\r\nSince you asked so nicely, I investigated and found the fix. I don't seem to have the rights to push on your branch so I made a PR [here](https://github.com/NielsRogge/transformers/pull/42).", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge Shouldn't we re-open ? This closing was slightly aggressive wasn't it ?", "Yes, PR should be close to merge. Hoping to merge this week.\r\n\r\nPS: CPU implementation is added, model doesn't require GPU anymore :D ", "Hi @NielsRogge . I am following the finetuning notebook for [DETR object detection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb). \r\n\r\nYou have mentioned that DeformableDETR follows mostly the same API. But I noticed that a model based on `DeformableDetrForObjectDetection` doesn't automatically add +1 to the number of classes. \r\n\r\nAlso, for the Feature-Extractor, I am confused whether, as per the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/deformable_detr#transformers.DeformableDetrForObjectDetection.forward.example), we should use `AutoImageProcessor` or `DeformableDetrFeatureExtractor` instead.\r\n\r\nTo add further, I was wondering if we could add the augmentation that the original paper follows, from the official repo. I managed to add augmentation based on functions available in the official repo for Deformable-DETR. But I'm not sure of the correctness. " ]
1,652
1,673
1,663
CONTRIBUTOR
null
# What does this PR do? This PR implements [Deformable DETR](https://github.com/fundamentalvision/Deformable-DETR), which improves the original [DETR](https://huggingface.co/docs/transformers/model_doc/detr) using a new "deformable attention" module. This model requires a custom CUDA kernel (hence it can only be run on GPU). Other than that, the API is entirely the same as DETR. Models are on the [hub](https://huggingface.co/models?other=deformable_detr).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17281/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17281", "html_url": "https://github.com/huggingface/transformers/pull/17281", "diff_url": "https://github.com/huggingface/transformers/pull/17281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17281.patch", "merged_at": 1663148722000 }
https://api.github.com/repos/huggingface/transformers/issues/17280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17280/comments
https://api.github.com/repos/huggingface/transformers/issues/17280/events
https://github.com/huggingface/transformers/pull/17280
1,237,108,121
PR_kwDOCUB6oc434Gjk
17,280
[ConvNeXT] Fix drop_path_rate
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger ok for you?" ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? As pointed out by #16699, the drop path rate attribute of `ConvNextConfig` wasn't implemented correctly. This PR fixes that, for both the PyTorch and Tensorflow implementations.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17280", "html_url": "https://github.com/huggingface/transformers/pull/17280", "diff_url": "https://github.com/huggingface/transformers/pull/17280.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17280.patch", "merged_at": 1652787468000 }
https://api.github.com/repos/huggingface/transformers/issues/17279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17279/comments
https://api.github.com/repos/huggingface/transformers/issues/17279/events
https://github.com/huggingface/transformers/pull/17279
1,236,939,889
PR_kwDOCUB6oc433ic4
17,279
Change `config.encoder_ffn_dim` to `config.decoder_ffn_dim` for decoder impl
{ "login": "cloudhan", "id": 1279292, "node_id": "MDQ6VXNlcjEyNzkyOTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1279292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cloudhan", "html_url": "https://github.com/cloudhan", "followers_url": "https://api.github.com/users/cloudhan/followers", "following_url": "https://api.github.com/users/cloudhan/following{/other_user}", "gists_url": "https://api.github.com/users/cloudhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cloudhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cloudhan/subscriptions", "organizations_url": "https://api.github.com/users/cloudhan/orgs", "repos_url": "https://api.github.com/users/cloudhan/repos", "events_url": "https://api.github.com/users/cloudhan/events{/privacy}", "received_events_url": "https://api.github.com/users/cloudhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
Change `config.encoder_ffn_dim` to `config.decoder_ffn_dim` for the decoder. These typos were detected while loading a Flax model from a PyTorch checkpoint where `encoder_ffn_dim` != `decoder_ffn_dim`. # What does this PR do? This PR fixes typos, and these typos are critical!!! @patrickvonplaten @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17279", "html_url": "https://github.com/huggingface/transformers/pull/17279", "diff_url": "https://github.com/huggingface/transformers/pull/17279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17279.patch", "merged_at": 1652699284000 }
https://api.github.com/repos/huggingface/transformers/issues/17278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17278/comments
https://api.github.com/repos/huggingface/transformers/issues/17278/events
https://github.com/huggingface/transformers/issues/17278
1,236,908,253
I_kwDOCUB6oc5Jubjd
17,278
`LayoutXLMProcessor` returns unexpected `offset_mapping`
{ "login": "fredo838", "id": 11276933, "node_id": "MDQ6VXNlcjExMjc2OTMz", "avatar_url": "https://avatars.githubusercontent.com/u/11276933?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fredo838", "html_url": "https://github.com/fredo838", "followers_url": "https://api.github.com/users/fredo838/followers", "following_url": "https://api.github.com/users/fredo838/following{/other_user}", "gists_url": "https://api.github.com/users/fredo838/gists{/gist_id}", "starred_url": "https://api.github.com/users/fredo838/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fredo838/subscriptions", "organizations_url": "https://api.github.com/users/fredo838/orgs", "repos_url": "https://api.github.com/users/fredo838/repos", "events_url": "https://api.github.com/users/fredo838/events{/privacy}", "received_events_url": "https://api.github.com/users/fredo838/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@NielsRogge Pick me pick me pick me!", "Pinging @SaulLu here as she might have a better clue regarding the tokenization. For context, LayoutXLM uses the same tokenization as XLMRoBERTa. ", "Hi @fredo838 ,\r\n\r\nThank you very much for the detailed issue! Quite a few things seem to come into play here.\r\n\r\n@NielsRogge , do you know if it is expected that LayoutXLM adds prefix space automatically? I think it is but it's best to be sure.\r\n\r\nIf so, I think the tokenization of `\"7.0\" -> ['▁', '7.0']` with `LayoutXLM` is correct: the original tokenizer chosen by `LayoutXLM` is a trained model with sentencepiece with the `split_by_number: true` setting.\r\n\r\nOn the other hand, I agree that the offsets seem odd. To simplify things a bit (to help fix things in the future), here is a snippet that shows the behavior:\r\n```python\r\ntokenizer = XLMRobertaTokenizerFast.from_pretrained(\"microsoft/layoutxlm-base\")\r\n\r\ntexts = [\" 7.0\", \"7.0\", \" hello\", \"hello\"]\r\nencoding = tokenizer(texts, return_offsets_mapping=True, add_special_tokens=False)\r\n\r\nfor text, input_ids, offsets in zip(texts, encoding.input_ids, encoding.offset_mapping):\r\n print(\r\n repr(text), \r\n tokenizer.convert_ids_to_tokens(input_ids), \r\n offsets, \r\n [text[start:end] for start,end in offsets]\r\n )\r\n# ' 7.0' ['▁', '7.0'] [(1, 2), (1, 4)] ['7', '7.0'] <- looks weird\r\n# '7.0' ['▁', '7.0'] [(0, 1), (0, 3)] ['7', '7.0'] <- looks weird\r\n# ' hello' ['▁hell', 'o'] [(1, 5), (5, 6)] ['hell', 'o'] <- looks good\r\n# 'hello' ['▁hell', 'o'] [(0, 4), (4, 5)] ['hell', 'o'] <- looks good\r\n```\r\nTo advance on the resolution of the problem, I think nevertheless that we should discuss this on the side of the [tokenizers](https://github.com/huggingface/tokenizers) repo because the offsets are calculated by the backend tokenizer which is an instance of this library. 
I still have in mind 2 issues ( https://github.com/huggingface/tokenizers/issues/852 and https://github.com/huggingface/tokenizers/issues/843) that were related to offsets but it seems to me that this is a new case, would you like to open an issue on the tokenizer repo too?", "https://github.com/huggingface/tokenizers/issues/1006 I posted the issue in the tokenizers repo" ]
1,652
1,654
1,654
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.19.1 - Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31 - Python version: 3.10.0 - Huggingface_hub version: 0.6.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` 1. docker run -it --rm --entrypoint bash python 2. python3 -m pip install pip --upgrade 3. python3 -m pip install torch tensorflow pillow transformers 4. from transformers import LayoutXLMProcessor from PIL import Image import numpy as np processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base", apply_ocr=False) out = processor( text=["Hello", "there", "7.0", "Koningshof", "General", "Kenobi"], images=[Image.new(mode='RGB',size=(200, 200))], boxes=[[1,2,3,4] for _ in range(6)], return_offsets_mapping=True ) reverted = processor.tokenizer.convert_ids_to_tokens(out.input_ids) print(reverted) is_start_of_word = np.asarray(out.offset_mapping)[:, 0] == 0 print(list(zip(reverted, is_start_of_word))) ``` prints ```[('<s>', True), ('▁Hello', True), ('▁there', True), ('▁', True), ('7.0', True), ('▁Koning', True), ('s', False), ('hof', False), ('▁General', True), ('▁Ken', True), ('obi', False), ('</s>', True)]``` ### Expected behavior ```shell The token `"7.0"` gets converted to `('▁', True), ('7.0', True)`. I would expect that the token `"7.0"` stays one token `('▁7.0', True)` or that it gets converted to `('▁', True), ('7.0', False)` Questions: - It could very well be that this _is_ expected behavior. Is it? 
- If not, what conversion is LayoutXLM expecting? What is the correct way to convert? ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17278/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17277/comments
https://api.github.com/repos/huggingface/transformers/issues/17277/events
https://github.com/huggingface/transformers/pull/17277
1,236,884,239
PR_kwDOCUB6oc433Wto
17,277
Issue 17128
{ "login": "mygithubid1", "id": 19863166, "node_id": "MDQ6VXNlcjE5ODYzMTY2", "avatar_url": "https://avatars.githubusercontent.com/u/19863166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mygithubid1", "html_url": "https://github.com/mygithubid1", "followers_url": "https://api.github.com/users/mygithubid1/followers", "following_url": "https://api.github.com/users/mygithubid1/following{/other_user}", "gists_url": "https://api.github.com/users/mygithubid1/gists{/gist_id}", "starred_url": "https://api.github.com/users/mygithubid1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mygithubid1/subscriptions", "organizations_url": "https://api.github.com/users/mygithubid1/orgs", "repos_url": "https://api.github.com/users/mygithubid1/repos", "events_url": "https://api.github.com/users/mygithubid1/events{/privacy}", "received_events_url": "https://api.github.com/users/mygithubid1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17277). All of your documentation changes will be reflected on that endpoint.", "Thank you, Narsil. I don't have write access. So, please merge to `main`.", "I will ping and wait for a core maintainer's second eye.\r\n\r\n@LysandreJik can you take a look ?", "Looks good @mygithubid1! I see there are some blank line changes in `tokenization_utils_base.py`. Could you revert those?", "Please ignore this. My mistake." ]
1,652
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17128 ## Before submitting - [N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Here's the [link](https://github.com/huggingface/transformers/issues/17128) - [N/A] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? I didn't write a custom test. Ran the following commands run to ensure local tests pass 1. `RUN_PIPELINE_TESTS=yes python -m unittest discover -s tests/pipelines -p "test_pipelines_question_answering.py" -t . -v -f ` 2. `python -m unittest discover -s . -p "test_tokenization_wav2vec2.py" -t . -v -f` ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17277", "html_url": "https://github.com/huggingface/transformers/pull/17277", "diff_url": "https://github.com/huggingface/transformers/pull/17277.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17277.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17276/comments
https://api.github.com/repos/huggingface/transformers/issues/17276/events
https://github.com/huggingface/transformers/pull/17276
1,236,882,855
PR_kwDOCUB6oc433WbH
17,276
Remove next sentence prediction from supported ONNX tasks
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
MEMBER
null
# What does this PR do? This PR removes the `next-sentence-prediction` feature that was added in https://github.com/huggingface/transformers/pull/17029 as part of the MobileBERT ONNX export. It turns out that the `forward()` method of MobileBERT and BERT includes `kwargs`, which is not supported with PyTorch's ONNX exporter. Since this feature is unlikely to be used for inference, the simplest solution is to remove it. With this change, the ONNX slow tests all pass.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17276", "html_url": "https://github.com/huggingface/transformers/pull/17276", "diff_url": "https://github.com/huggingface/transformers/pull/17276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17276.patch", "merged_at": 1652708045000 }
https://api.github.com/repos/huggingface/transformers/issues/17275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17275/comments
https://api.github.com/repos/huggingface/transformers/issues/17275/events
https://github.com/huggingface/transformers/pull/17275
1,236,839,590
PR_kwDOCUB6oc433NR5
17,275
Fixes #17128 .
{ "login": "mygithubid1", "id": 19863166, "node_id": "MDQ6VXNlcjE5ODYzMTY2", "avatar_url": "https://avatars.githubusercontent.com/u/19863166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mygithubid1", "html_url": "https://github.com/mygithubid1", "followers_url": "https://api.github.com/users/mygithubid1/followers", "following_url": "https://api.github.com/users/mygithubid1/following{/other_user}", "gists_url": "https://api.github.com/users/mygithubid1/gists{/gist_id}", "starred_url": "https://api.github.com/users/mygithubid1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mygithubid1/subscriptions", "organizations_url": "https://api.github.com/users/mygithubid1/orgs", "repos_url": "https://api.github.com/users/mygithubid1/repos", "events_url": "https://api.github.com/users/mygithubid1/events{/privacy}", "received_events_url": "https://api.github.com/users/mygithubid1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
CONTRIBUTOR
null
VisibleDeprecationWarning is addressed by specifying dtype=object when creating numpy array. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17275", "html_url": "https://github.com/huggingface/transformers/pull/17275", "diff_url": "https://github.com/huggingface/transformers/pull/17275.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17275.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17274/comments
https://api.github.com/repos/huggingface/transformers/issues/17274/events
https://github.com/huggingface/transformers/issues/17274
1,236,776,777
I_kwDOCUB6oc5Jt7dJ
17,274
Add pipeline for cross-modal / uni-modal ranking
{ "login": "ggoggam", "id": 47265378, "node_id": "MDQ6VXNlcjQ3MjY1Mzc4", "avatar_url": "https://avatars.githubusercontent.com/u/47265378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ggoggam", "html_url": "https://github.com/ggoggam", "followers_url": "https://api.github.com/users/ggoggam/followers", "following_url": "https://api.github.com/users/ggoggam/following{/other_user}", "gists_url": "https://api.github.com/users/ggoggam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ggoggam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggoggam/subscriptions", "organizations_url": "https://api.github.com/users/ggoggam/orgs", "repos_url": "https://api.github.com/users/ggoggam/repos", "events_url": "https://api.github.com/users/ggoggam/events{/privacy}", "received_events_url": "https://api.github.com/users/ggoggam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think there are some image-text retrieval capabilities already, such as [ViltForImageAndTextRetrieval](https://huggingface.co/docs/transformers/model_doc/vilt#transformers.ViltForImageAndTextRetrieval). But these can only work for a toy example set of queries and keys due to its interaction-based nature. A true retrieval (cross-model or not) would probably need `datasets` and `faiss`. I think that may be too complicated for for a `pipeline`?", "- I believe there is no unified pipeline for cross-modal search that can be applied to different models.\r\n- Thank you for clarifying. What I had in mind was more of a ranking than a retrieval since I do not expect to have index search baked into the pipeline.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,652
1,655
1,655
NONE
null
### Feature request Given queries and keys, the proposed pipeline returns a ranked list of keys that are most similar to each respective query. This pipeline should support uni-modal and cross-modal retrieval, i.e. - Text-to-Text - Text-to-Image - Image-to-Text - Image-to-Image Prominent use cases would be: - Using BERT family of models to perform text-to-text retrieval - Using multi-modal models such as CLIP to perform any of the retrieval methods above. There can be multiple ranking methods for different multi-modal models, for instance - For VILT, we can use [CLS] pooled image-text matching score for ranking - For CLIP, we can use logits_per_modality for cross-modal similarity score for ranking - For ALBEF (https://github.com/huggingface/transformers/issues/17224), we have a two-stage (coarse-to-fine) ranking (image-text similarity -> [CLS] pooled image-text matching score) ### Motivation I was looking for a use-case for CLIP for cross-modal retrieval, but the current pipeline for CLIP does not seem to support cross-modal retrieval. I believe there is a demand for this pipeline. ### Your contribution - I can help with the implementation once we polish the parameter definitions and outputs!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17274/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17273/comments
https://api.github.com/repos/huggingface/transformers/issues/17273/events
https://github.com/huggingface/transformers/issues/17273
1,236,766,428
I_kwDOCUB6oc5Jt47c
17,273
How to input word2vec embeddings to gpt2 model?
{ "login": "tejaravi675", "id": 65293989, "node_id": "MDQ6VXNlcjY1MjkzOTg5", "avatar_url": "https://avatars.githubusercontent.com/u/65293989?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tejaravi675", "html_url": "https://github.com/tejaravi675", "followers_url": "https://api.github.com/users/tejaravi675/followers", "following_url": "https://api.github.com/users/tejaravi675/following{/other_user}", "gists_url": "https://api.github.com/users/tejaravi675/gists{/gist_id}", "starred_url": "https://api.github.com/users/tejaravi675/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tejaravi675/subscriptions", "organizations_url": "https://api.github.com/users/tejaravi675/orgs", "repos_url": "https://api.github.com/users/tejaravi675/repos", "events_url": "https://api.github.com/users/tejaravi675/events{/privacy}", "received_events_url": "https://api.github.com/users/tejaravi675/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @tejaravi675 👋 Yes, you can pass embeddings to the model as you described (through `inputs_embeds`), but the embeddings will be unknown to the model, as it was not trained with them. In essence, you will have to build a script to finetune the model using your embeddings (for GPT-2, with the causal language modeling task). You can see a few examples [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) (pytorch) and [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling) (tensorflow).\r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 I'm closing this issue, but feel free to reopen with queries that fit the criteria I described.", "Hi, I understand that I have to finetune the model with my embeddings. My problem is I am unable to understand how exactly to code (Not very handy with complex coding). So I wanted to know if there were any examples which I can refer to to understand the finetuning part and understand how to add my word embeddings to gpt2 model.", "The examples I linked above are the closest we have to the task you are describing, but they require some modification to run your use case :) Sadly, we don't have the capacity to further help you with your task -- try in the forums, maybe some other user tried to do a similar thing." ]
1,652
1,652
1,652
NONE
null
Hi, I am working on the huggingface gpt2 model. I have a word2vec model trained on a dataset (dimensions similar to gpt2, 768). Now I want to input these embeddings to gpt2. I understand I must use inputs_embeds to input the embeddings but I am a little unclear about how exactly to do it. Any source or help would be appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17272/comments
https://api.github.com/repos/huggingface/transformers/issues/17272/events
https://github.com/huggingface/transformers/pull/17272
1,236,679,733
PR_kwDOCUB6oc432r6U
17,272
Fix wrong PT/TF categories in CI report
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652
1,652
1,652
COLLABORATOR
null
# What does this PR do? Current `notification_service.py` has ``` if re.search("_tf_", line): model_results[model]["failed"]["TensorFlow"][artifact_path["gpu"]] += 1 ``` which will put all `test_pt_tf_model_equivalence` under `TensorFlow` even if it is from the PT (cross) tests, like ``` tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_pt_tf_model_equivalence ``` This PR fixes this issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17272/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17272", "html_url": "https://github.com/huggingface/transformers/pull/17272", "diff_url": "https://github.com/huggingface/transformers/pull/17272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17272.patch", "merged_at": 1652772767000 }