| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/17071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17071/events
|
https://github.com/huggingface/transformers/pull/17071
| 1,224,604,602
|
PR_kwDOCUB6oc43QLWB
| 17,071
|
Add model UniTE to huggingface repository.
|
{
"login": "wanyu2018umac",
"id": 42405907,
"node_id": "MDQ6VXNlcjQyNDA1OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/42405907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wanyu2018umac",
"html_url": "https://github.com/wanyu2018umac",
"followers_url": "https://api.github.com/users/wanyu2018umac/followers",
"following_url": "https://api.github.com/users/wanyu2018umac/following{/other_user}",
"gists_url": "https://api.github.com/users/wanyu2018umac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wanyu2018umac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanyu2018umac/subscriptions",
"organizations_url": "https://api.github.com/users/wanyu2018umac/orgs",
"repos_url": "https://api.github.com/users/wanyu2018umac/repos",
"events_url": "https://api.github.com/users/wanyu2018umac/events{/privacy}",
"received_events_url": "https://api.github.com/users/wanyu2018umac/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17071). All of your documentation changes will be reflected on that endpoint.",
"Thank you for your PR. If I understand correctly, this adds a new type of sequence classification on XLMRoberta, but this won't work at all inside Transformers, since you are not following the API of other models:\r\n- the model should return a `ModelOuput` \r\n- it should accept the same arguments as other sequence classification models and return the same kind of outputs otherwise it just won't work with the `Trainer` or the `pipeline` function.\r\n\r\nAlso please follow the general guidelines for a [new model addition](https://huggingface.co/docs/transformers/add_new_model).\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,655
| 1,655
|
NONE
| null |
# What does this PR do?
This PR adds the UniTE model to the Hugging Face repository.
Paper: [UniTE: Unified Translation Evaluation](https://arxiv.org/abs/2204.13346).
Former discussion is [here](https://github.com/huggingface/transformers/issues/16366).
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17071/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17071",
"html_url": "https://github.com/huggingface/transformers/pull/17071",
"diff_url": "https://github.com/huggingface/transformers/pull/17071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17071.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17070/events
|
https://github.com/huggingface/transformers/pull/17070
| 1,224,603,813
|
PR_kwDOCUB6oc43QLQn
| 17,070
|
fix: :bug: changing context of multiprocessing while decoding for Windows
|
{
"login": "elsheikh21",
"id": 26064109,
"node_id": "MDQ6VXNlcjI2MDY0MTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/26064109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsheikh21",
"html_url": "https://github.com/elsheikh21",
"followers_url": "https://api.github.com/users/elsheikh21/followers",
"following_url": "https://api.github.com/users/elsheikh21/following{/other_user}",
"gists_url": "https://api.github.com/users/elsheikh21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsheikh21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsheikh21/subscriptions",
"organizations_url": "https://api.github.com/users/elsheikh21/orgs",
"repos_url": "https://api.github.com/users/elsheikh21/repos",
"events_url": "https://api.github.com/users/elsheikh21/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsheikh21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17070). All of your documentation changes will be reflected on that endpoint.",
"Interesting! @elsheikh21 - it seems like `pyctcdecode` is not a big fan of `\"spawn\"` :-/ . Could you maybe open an issue there: https://github.com/kensho-technologies/pyctcdecode ? \r\n\r\nSorry I thought this would work, but apparently Kensho's `pyctcdecode` doesn't like it. Maybe let's just ask in their repo how to run the code on Windows :-) ",
"@patrickvonplaten \r\nYes, It caught me by surprise as well. Of course, I can and I did open [an issue](https://github.com/kensho-technologies/pyctcdecode/issues/65), I will be waiting for their response\r\n\r\nIn the meanwhile, I thought of the following\r\n- get context based on the user's OS, using `import sys; sys.platform`, refer to [documentation](https://docs.python.org/3/library/sys.html#sys.platform) -if needed.\r\n\r\n```\r\nimport sys\r\n\r\n# specify spawn for WindowsOS\r\ncontexts = { \r\n \"win32\": \"spawn\", \r\n \"cygwin\": \"spawn\"\r\n} \r\n# if the platform is not windows then use 'fork'\r\n_context = contexts.get(sys.platform, \"fork\")\r\npool = get_context(_context ).Pool(num_processes)\r\n```\r\n or wdyt?",
"Yeah this would be fine by me as well I think! But did you check that pyctcdecode then works correctly? \r\nTo me it really looks like a bug within pyctcdecode",
"Yes I opened the issue in pyctcdecode but did not start investigating what might be causing the error, yet I will do so and if ai found sth. I will update them",
"Hi all, I've merged PR in pyctcdecode (https://github.com/kensho-technologies/pyctcdecode/pull/68) to bypass this issue by either passing None instead of a pool or automatically detecting a pool made with a spawn context and running without multiprocessing. The issue is that the language model is saved as a class variable, which allows fork to work without reloading the model, but this doesn't get automatically populated for spawn, so the model is missing in the new processes. All of the things I've tried to actually use the pool with spawn require pickling or reloading the model which ends up hurting performance a lot. If anyone wants to try to figure out if there's a way to use spawn without also making everything really slow, please go ahead and discuss or make a PR in pyctcdecode",
"Thanks a lot for the fix @lopez86 ! @elsheikh21 I think we should then probably go for the same solution in Transformers no?",
"@patrickvonplaten \r\nsorry for not getting back to you earlier, I had some issues in my personal life.\r\n\r\nOkay, that seems like a good idea, but just to clarify how do u plan for me to change that in `transformers` package as well?",
"Could you try to add such a function to it to see if it works? https://github.com/kensho-technologies/pyctcdecode/pull/68#discussion_r894696237",
"@patrickvonplaten, as I understand, https://github.com/kensho-technologies/pyctcdecode/pull/68 completely ignores spawn contexts. This means that at least for now (until https://github.com/kensho-technologies/pyctcdecode/issues/65 is closed), we should not even get in the trouble of creating a spawn pool in Windows. There's probably an overhead when creating a pool that won't be used.",
"I agree @falcaopetri,\r\n\r\nShould we maybe just allow the user to pass `pool` ? and only when it's `None` we create it ourselves?",
"I guess so, @patrickvonplaten. Moreover, I think that users should be warned if only a `spawn `pool is available or if one was passed by the user, since `pyctcdecode` currently can't use such pool. I've also proposed adding a warning message within `pyctcdecode` (https://github.com/kensho-technologies/pyctcdecode/pull/78).\r\n\r\nAssuming that users can pass an active `pool` (#17879), we might warn them if a `spawn` one is passed (we can either count on the proposed `pyctcdecode`'s warning or add one within `Wav2Vec2ProcessorWithLM`).\r\nIf `None`, we could warn users when only `spawn` is available and call `pyctcdecode` with `None` (saving the creation of a pool that would otherwise be ignored by current `pyctcdecode` implementation). If `fork` is available we would keep current behavior.\r\n\r\nIf in the future `pyctcdecode` supports both `fork` and `spawn`, we might just roll back https://github.com/huggingface/transformers/pull/15247.",
"Sorry to have dropped the ball here a bit - @falcaopetri do you feel like opening a PR to allow passing `pool` or should I do it? :-)",
"Thanks for pinging me, @patrickvonplaten, and sorry for the delay.\r\nI've added some tests to my initial proposal and a `Tip` under `batch_decode`'s `pool` arg.\r\nI'll submit the PR till tomorrow.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,661
| 1,661
|
NONE
| null |
# What does this PR do?
- This PR aims to fix the multiprocessing context used in `wav2vec_with_LM`'s `batch_decode()`, changing it from `fork` to `spawn` so it can also run on non-Linux systems.
Fixes # (issue)
- multiprocessing context when `batch_decode`
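The platform-dependent context selection discussed in this PR can be sketched as follows (a hypothetical sketch, not the exact PR diff; the `pick_start_method` and `make_pool` names are illustrative):

```python
import multiprocessing
import sys


def pick_start_method():
    # "fork" is unavailable on Windows, so fall back to "spawn" there;
    # everywhere else keep "fork", which the decoder's pool expects.
    if sys.platform in ("win32", "cygwin"):
        return "spawn"
    return "fork"


def make_pool(num_processes):
    # Build a pool from a context using the chosen start method
    ctx = multiprocessing.get_context(pick_start_method())
    return ctx.Pool(num_processes)
```

Note that, as discussed in the comments, `pyctcdecode` at the time could not actually use a `spawn` pool, so this selection alone does not make decoding work on Windows.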
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
This is the [link for the GitHub issue](https://github.com/huggingface/transformers/issues/16898)
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
- Wav2Vec2 with LM
Library:
- benchmarks: @patrickvonplaten
Documentation: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17070/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17070",
"html_url": "https://github.com/huggingface/transformers/pull/17070",
"diff_url": "https://github.com/huggingface/transformers/pull/17070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17070.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17069/events
|
https://github.com/huggingface/transformers/pull/17069
| 1,224,585,850
|
PR_kwDOCUB6oc43QIZ7
| 17,069
|
Added spanish translation of autoclass_tutorial.
|
{
"login": "duedme",
"id": 38573606,
"node_id": "MDQ6VXNlcjM4NTczNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/38573606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duedme",
"html_url": "https://github.com/duedme",
"followers_url": "https://api.github.com/users/duedme/followers",
"following_url": "https://api.github.com/users/duedme/following{/other_user}",
"gists_url": "https://api.github.com/users/duedme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duedme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duedme/subscriptions",
"organizations_url": "https://api.github.com/users/duedme/orgs",
"repos_url": "https://api.github.com/users/duedme/repos",
"events_url": "https://api.github.com/users/duedme/events{/privacy}",
"received_events_url": "https://api.github.com/users/duedme/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM! Just that little change in the `_toctree.yml`, @Duedme. Thanks!\r\n\r\nIs it ok if I merge when the change is made @sgugger?"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# Translation of autoclass_tutorial.mdx into Spanish
I made the translation of autoclass_tutorial.mdx into Spanish (fixes #15947). The document is located in the docs/source/es folder.
This PR also includes the translation of _toctree.yml to include autoclass_tutorial.
FYI @omarespejel
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17069/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17069",
"html_url": "https://github.com/huggingface/transformers/pull/17069",
"diff_url": "https://github.com/huggingface/transformers/pull/17069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17069.patch",
"merged_at": 1651691905000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17068/events
|
https://github.com/huggingface/transformers/pull/17068
| 1,224,525,510
|
PR_kwDOCUB6oc43P9LX
| 17,068
|
Add the auto_find_batch_size capability from Accelerate into Trainer
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@stas00 I'm getting a test failure on the metrics:\r\n\r\n```python\r\n tests/trainer/test_trainer.py:1426: in check_mem_metrics\r\n metrics = trainer.train().metrics\r\n src/transformers/trainer.py:1215: in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n src/transformers/trainer.py:1571: in _inner_training_loop\r\n self._memory_tracker.stop_and_update_metrics(metrics)\r\n src/transformers/trainer_utils.py:536: in stop_and_update_metrics\r\n stage = self.derive_stage()\r\n _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n \r\n self = <transformers.trainer_utils.TrainerMemoryTracker object at 0x7f929f787e10>\r\n \r\n def derive_stage(self):\r\n \"\"\"derives the stage/caller name automatically\"\"\"\r\n caller = inspect.currentframe().f_back.f_back.f_code.co_name\r\n if caller in self.stages:\r\n return self.stages[caller]\r\n else:\r\n raise ValueError(\r\n > f\"was called from {caller}, but only expect to be called from one of {self.stages.keys()}\"\r\n )\r\n E ValueError: was called from _inner_training_loop, but only expect to be called from one of dict_keys(['__init__', 'train', 'evaluate', 'predict'])\r\n```\r\n\r\nAny advice on how to approach a solution?",
"_The documentation is not available anymore as the PR was closed or merged._",
"This will overcome the problem:\r\n\r\n```\r\ndiff --git a/src/transformers/trainer_utils.py b/src/transformers/trainer_utils.py\r\nindex 22b44a2f0..d4c523249 100644\r\n--- a/src/transformers/trainer_utils.py\r\n+++ b/src/transformers/trainer_utils.py\r\n@@ -356,6 +356,7 @@ class TrainerMemoryTracker:\r\n stages = {\r\n \"__init__\": \"init\",\r\n \"train\": \"train\",\r\n+ \"_inner_training_loop\": \"train\",\r\n \"evaluate\": \"eval\",\r\n \"predict\": \"test\",\r\n }\r\n```\r\n",
"Please make sure all tests pass after resolving conflicts and before merging!",
"Any chance similar functionality could be supported for inference? 🙏 "
] | 1,651
| 1,657
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR introduces the `find_executable_batch_size` decorator into `Trainer`, so that when a CUDA out-of-memory error occurs, the training loop is retried with a lower batch size.
The API looks as so:
```python
trainer = Trainer()
trainer.train(auto_find_batch_size=True)
```
By default it is `False`, and using it requires `Accelerate` to be installed.
Fixes # (issue)
Partially solves https://github.com/huggingface/transformers/issues/16987
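The retry-on-OOM pattern behind the decorator can be sketched roughly like this (a simplified, hypothetical version — the real implementation lives in Accelerate, and `train` below is only a stand-in for a training step):

```python
import functools


def find_executable_batch_size(function, starting_batch_size=128):
    """Retry `function`, halving the batch size after every CUDA OOM."""
    @functools.wraps(function)
    def wrapper():
        batch_size = starting_batch_size
        while batch_size > 0:
            try:
                return function(batch_size)
            except RuntimeError as e:
                if "out of memory" in str(e).lower():
                    batch_size //= 2  # shrink and retry
                else:
                    raise
        raise RuntimeError("No executable batch size found, reached zero.")
    return wrapper


def train(batch_size):
    # Stand-in for a training step: pretend anything above 32 OOMs
    if batch_size > 32:
        raise RuntimeError("CUDA out of memory")
    return batch_size


print(find_executable_batch_size(train)())  # falls back to 32
```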
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17068/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17068/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17068",
"html_url": "https://github.com/huggingface/transformers/pull/17068",
"diff_url": "https://github.com/huggingface/transformers/pull/17068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17068.patch",
"merged_at": 1652113758000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17067/events
|
https://github.com/huggingface/transformers/pull/17067
| 1,224,464,764
|
PR_kwDOCUB6oc43Pwem
| 17,067
|
MLflowCallback set experiment name
|
{
"login": "orieg",
"id": 55721,
"node_id": "MDQ6VXNlcjU1NzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/55721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orieg",
"html_url": "https://github.com/orieg",
"followers_url": "https://api.github.com/users/orieg/followers",
"following_url": "https://api.github.com/users/orieg/following{/other_user}",
"gists_url": "https://api.github.com/users/orieg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orieg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orieg/subscriptions",
"organizations_url": "https://api.github.com/users/orieg/orgs",
"repos_url": "https://api.github.com/users/orieg/repos",
"events_url": "https://api.github.com/users/orieg/events{/privacy}",
"received_events_url": "https://api.github.com/users/orieg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Oh actually, multiple checks in our CI are not running for some reason. Can you try sending empty commits to see if it triggers it?",
"@sgugger I tried an empty commit, a comment change on the py file, and a rebase. They all trigger only two checks. They seem to pass tho.",
"Yes, but for some reason, the whole battery of tests run by cicleCI is not launching (check any other PR to see there are actually 18 to 20 checks). I have no idea why they don't, and can't merge without being sure nothing is broken by the PR.",
"@sgugger no clue what's going on. I even tried a new PR in #17091 which also trigger only two CI jobs.",
"I have no idea what the problem is. Wrote to circleCI support to try to get some help.",
"Closing this PR in favor of #17091, which is running all the CI tests."
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR includes the following:
- Resolves #12841: uses the `MLFLOW_EXPERIMENT_NAME` environment variable and the `mlflow.set_experiment()` method to ensure the experiment is created if it does not already exist.
- Fixes #17066: properly checks for an active run using `mlflow.active_run()` (bug introduced in #16131).
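Both changes can be sketched together (a hypothetical sketch, not the actual callback code; `setup_mlflow` is an illustrative helper taking the `mlflow` module as an argument):

```python
import os


def setup_mlflow(ml_flow, run_name=None):
    # Create/select the experiment named by MLFLOW_EXPERIMENT_NAME;
    # set_experiment() creates the experiment if it does not exist yet.
    experiment = os.environ.get("MLFLOW_EXPERIMENT_NAME")
    if experiment is not None:
        ml_flow.set_experiment(experiment)
    # The #17066 fix: call active_run() instead of testing the
    # (always truthy) bound-method reference.
    if ml_flow.active_run() is None:
        ml_flow.start_run(run_name=run_name)
```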
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17067/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17067",
"html_url": "https://github.com/huggingface/transformers/pull/17067",
"diff_url": "https://github.com/huggingface/transformers/pull/17067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17067.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17066/events
|
https://github.com/huggingface/transformers/issues/17066
| 1,224,450,332
|
I_kwDOCUB6oc5I-6Ec
| 17,066
|
Incorrect check for MLFlow active run in MLflowCallback
|
{
"login": "orieg",
"id": 55721,
"node_id": "MDQ6VXNlcjU1NzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/55721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orieg",
"html_url": "https://github.com/orieg",
"followers_url": "https://api.github.com/users/orieg/followers",
"following_url": "https://api.github.com/users/orieg/following{/other_user}",
"gists_url": "https://api.github.com/users/orieg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orieg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orieg/subscriptions",
"organizations_url": "https://api.github.com/users/orieg/orgs",
"repos_url": "https://api.github.com/users/orieg/repos",
"events_url": "https://api.github.com/users/orieg/events{/privacy}",
"received_events_url": "https://api.github.com/users/orieg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
### System Info
```shell
- mlflow==1.25.1
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.10.76-linuxkit-aarch64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.10.2 (False)
```
### Who can help?
Should be fixed by #17067
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Follow Training tutorial as per https://huggingface.co/docs/transformers/training
2. Change the training arguments to use `TrainingArguments(output_dir="test_trainer", report_to=['mlflow'], run_name="run0")`
3. On `trainer.train()`, the MLflow UI should report a run with the Run Name `run0`, which is not currently the case.
Cause of the Issue:
```python
>>> import mlflow
>>> print(mlflow.active_run is None, mlflow.active_run() is None)
False True
```
In `src/transformers/integrations.py`, the line `if self._ml_flow.active_run is None:` needs to be replaced with `if self._ml_flow.active_run() is None:`
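The difference can be reproduced without MLflow at all — any bound method behaves the same way (an illustrative sketch with a stand-in class):

```python
class FakeMlflow:
    def active_run(self):
        return None  # pretend there is no active run


ml = FakeMlflow()

# Buggy check: a bound-method object is never None, so this is always False
print(ml.active_run is None)    # False

# Correct check: call the method and inspect its return value
print(ml.active_run() is None)  # True
```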
### Expected behavior
PR #14894 introduced support for `run_name` in the `MLflowCallback`. However, this does not work as expected, since the active run is checked via a method reference, which is never `None`. Bug introduced by #16131.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17066/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17065/events
|
https://github.com/huggingface/transformers/issues/17065
| 1,224,408,231
|
I_kwDOCUB6oc5I-vyn
| 17,065
|
symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
|
{
"login": "ryanrudes",
"id": 18452581,
"node_id": "MDQ6VXNlcjE4NDUyNTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18452581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryanrudes",
"html_url": "https://github.com/ryanrudes",
"followers_url": "https://api.github.com/users/ryanrudes/followers",
"following_url": "https://api.github.com/users/ryanrudes/following{/other_user}",
"gists_url": "https://api.github.com/users/ryanrudes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryanrudes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryanrudes/subscriptions",
"organizations_url": "https://api.github.com/users/ryanrudes/orgs",
"repos_url": "https://api.github.com/users/ryanrudes/repos",
"events_url": "https://api.github.com/users/ryanrudes/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryanrudes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Uninstalling a couple dependencies including the transformers package itself, and then installing everything within conda solved my problem. Specifically for anyone with the same problem:\r\n\r\n```shell\r\npip uninstall torch tokenizers transformers\r\nconda install pytorch\r\nconda install -c huggingface transformers\r\n```"
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### System Info
**OS**: macOS Monterey Version 12.4 Beta
**Model**: MacBook Air (M1, 2020)
**Chip**: Apple M1
**Memory**: 8GB
```shell
Python 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:24:38)
[Clang 12.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers, flax, tensorflow, torch
/Users/ryanrudes/miniforge3/lib/python3.9/site-packages/jax/_src/lib/__init__.py:33: UserWarning: JAX on Mac ARM machines is experimental and minimally tested. Please see https://github.com/google/jax/issues/5501 in the event of problems.
warnings.warn("JAX on Mac ARM machines is experimental and minimally tested. "
>>> transformers.__version__
'4.18.0'
>>> flax.__version__
'0.4.1'
>>> tensorflow.__version__
'2.8.0'
>>> torch.__version__
'1.10.1'
```
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Importing anything from the library results in a "symbol not found" error. I am sure this issue has something to do with the Apple Silicon architecture.
Here's the stack trace:
```shell
>>> from transformers import *
Traceback (most recent call last):
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 857, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/Users/ryanrudes/miniforge3/envs/.../lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 72, in <module>
from tokenizers import AddedToken
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: dlopen(/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 857, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/Users/ryanrudes/miniforge3/envs/.../lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py", line 23, in <module>
from transformers.pipelines import Pipeline, pipeline
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 28, in <module>
from ..models.auto.configuration_auto import AutoConfig
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/models/layoutlm/__init__.py", line 22, in <module>
from .configuration_layoutlm import LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMConfig
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/models/layoutlm/configuration_layoutlm.py", line 19, in <module>
from transformers import PretrainedConfig, PreTrainedTokenizer, TensorType
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 847, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 859, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.tokenization_utils because of the following error (look up to see its traceback):
dlopen(/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1037, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 845, in __getattr__
value = self._get_module(name)
File "/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 859, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.convert_graph_to_onnx because of the following error (look up to see its traceback):
Failed to import transformers.tokenization_utils because of the following error (look up to see its traceback):
dlopen(/Users/ryanrudes/Downloads/.../venv/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
```
Same issue when importing tokenizers:
```shell
Python 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:24:38)
[Clang 12.0.1 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from tokenizers import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ryanrudes/miniforge3/lib/python3.9/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: dlopen(/Users/ryanrudes/miniforge3/lib/python3.9/site-packages/tokenizers/tokenizers.cpython-39-darwin.so, 0x0002): symbol not found in flat namespace '__ZNSt8ios_base4InitC1Ev'
```
### Expected behavior
Obviously, the library is supposed to import without any errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17065/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17064/events
|
https://github.com/huggingface/transformers/pull/17064
| 1,224,353,944
|
PR_kwDOCUB6oc43PZsx
| 17,064
|
type hints for pytorch models
|
{
"login": "robotjellyzone",
"id": 36916536,
"node_id": "MDQ6VXNlcjM2OTE2NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/36916536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotjellyzone",
"html_url": "https://github.com/robotjellyzone",
"followers_url": "https://api.github.com/users/robotjellyzone/followers",
"following_url": "https://api.github.com/users/robotjellyzone/following{/other_user}",
"gists_url": "https://api.github.com/users/robotjellyzone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robotjellyzone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robotjellyzone/subscriptions",
"organizations_url": "https://api.github.com/users/robotjellyzone/orgs",
"repos_url": "https://api.github.com/users/robotjellyzone/repos",
"events_url": "https://api.github.com/users/robotjellyzone/events{/privacy}",
"received_events_url": "https://api.github.com/users/robotjellyzone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 I have added the changes suggested by you & all checks are passing!",
"Looks great, thanks for the PR!"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #16059 :
Added type hints for pytorch models - `canine`, `convbert`, `convnext`, `encoder_decoder`, `gpt2`, `gptj`, `megatron_bert`, `mobilebert`, `perceiver`, `retribert`, `swin`, `transfo_xl` & `van`.
For the code quality, I ran **`make fixup`** and reformatted the code & also resolved consistency problems across other models [which were - `decision_transformer`, `glpn`, `maskformer`, & `segformer`]
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17064/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17064",
"html_url": "https://github.com/huggingface/transformers/pull/17064",
"diff_url": "https://github.com/huggingface/transformers/pull/17064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17064.patch",
"merged_at": 1651749677000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17063/events
|
https://github.com/huggingface/transformers/pull/17063
| 1,224,273,567
|
PR_kwDOCUB6oc43PJEU
| 17,063
|
Make sure telemetry arguments are not returned as unused kwargs
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
As pointed out in #17056, the telemetry arguments are sometimes returned as unused kwargs. This is because `AutoConfig.from_pretrained` ends up using the `from_dict` method and not the `from_pretrained` method in most cases, and the `from_dict` method does not handle those kwargs.
Fixes #17056
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17063/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17063",
"html_url": "https://github.com/huggingface/transformers/pull/17063",
"diff_url": "https://github.com/huggingface/transformers/pull/17063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17063.patch",
"merged_at": 1651664877000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17062/events
|
https://github.com/huggingface/transformers/pull/17062
| 1,224,253,858
|
PR_kwDOCUB6oc43PE5O
| 17,062
|
Deprecate model templates
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot!"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
This PR officially deprecates the model templates and moves their test to a daily scheduled job.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17062/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17062",
"html_url": "https://github.com/huggingface/transformers/pull/17062",
"diff_url": "https://github.com/huggingface/transformers/pull/17062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17062.patch",
"merged_at": 1651671398000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17061/events
|
https://github.com/huggingface/transformers/issues/17061
| 1,224,198,777
|
I_kwDOCUB6oc5I98p5
| 17,061
|
End2End RAG training hangs
|
{
"login": "YovaKem",
"id": 29899597,
"node_id": "MDQ6VXNlcjI5ODk5NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/29899597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YovaKem",
"html_url": "https://github.com/YovaKem",
"followers_url": "https://api.github.com/users/YovaKem/followers",
"following_url": "https://api.github.com/users/YovaKem/following{/other_user}",
"gists_url": "https://api.github.com/users/YovaKem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YovaKem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YovaKem/subscriptions",
"organizations_url": "https://api.github.com/users/YovaKem/orgs",
"repos_url": "https://api.github.com/users/YovaKem/repos",
"events_url": "https://api.github.com/users/YovaKem/events{/privacy}",
"received_events_url": "https://api.github.com/users/YovaKem/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi this bug is caused by the laters RAY version. Actually, I fixed it in the original RAG repository :). \r\n\r\nPlease change this [line](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L692) similar to [this](line). \r\n\r\nIn other words, just add a name pace into the RAY init. :) ",
"Thanks a lot!",
"Hi @shamanez , Change which line similar to which line? the link ([this](https://github.com/huggingface/transformers/issues/line)) is broken.",
"I've updated the entire codebase. please check the latest version."
] | 1,651
| 1,657
| 1,651
|
NONE
| null |
### System Info
```shell
Python 3.9.0 (default, Nov 15 2020, 14:28:56)
...
>>> transformers.__version__
'4.17.0'
>>> torch.__version__
'1.7.1'
>>> ray.__version__
'1.11.0'
```
### Who can help?
@shamanez, thanks for a great example! I'm having trouble training a model end2end and wonder if you could help out. I have two GPUs available so I've set the arguments relevant for the end2end training to:
```
--gpus 1 \
--end2end \
--distributed_retriever ray \
--num_retrieval_workers 4 \
--index_gpus 1 \
--gpu_order [3,6]
```
This configuration doesn't seem to work since it results in `len(self.retrieval_workers)=0` which means that the call to `re_load()` in `RagRayDistributedRetriever` just hangs eternally.
I also tried setting `--gpu 2` above, but that breaks with error
```*** ValueError: Failed to look up actor with name 'retrieval_worker_0'. This could because 1. You are trying to look up a named actor you didn't create. 2. The named actor died. 3. You did not use a namespace matching the namespace of the actor.```
I don't quite know how ray works so I don't know if changing `re_load()` to this would break it:
```
if len(self.retrieval_workers) > 0:
ray.get([worker.clear_object.remote() for worker in self.retrieval_workers])
# build the index object again
index = self._build_index(self.config)
ray.get(
[
worker.create_rag_retriever.remote(
self.config, self.question_encoder_tokenizer, self.generator_tokenizer, index
)
for worker in self.retrieval_workers
]
)
else:
self.index = self._build_index(self.config)
```
What are your thoughts? Thanks!
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Set end2end arguments as shown above.
### Expected behavior
```shell
Continuous training without hanging at function re_load()
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17061/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17060/events
|
https://github.com/huggingface/transformers/pull/17060
| 1,224,164,597
|
PR_kwDOCUB6oc43OyOB
| 17,060
|
Add LayoutLMv3
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just for the purpose to keep track of the current status.\r\n\r\nAs discussed offline I think the next step to \"solve\" the tokenization tests is to figure out how `[\"hello\", \"world\"]` is tokenized in the original code: is it [`0, 42891, 8331, 2]` (`['<s>', 'Ġhello', 'Ġworld', '</s>']`) or `[0, 20760, 232, 2]` (`['<s>', 'hello', \"world\", '</s>']`) or something else ? :blush: ",
"As seen [here](https://github.com/microsoft/unilm/blob/925de7a9ea500e992ec5de02ea193a5eb9d5aa26/layoutlmv3/examples/run_funsd_cord.py#L313), text is tokenized using RobertaTokenizer, where one provides `is_split_into_words=True`. Hence, [\"hello\", \"world\"] is tokenized as follows:\r\n\r\n```\r\nfrom transformers import RobertaTokenizer\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained(\"microsoft/layoutlmv3-base\")\r\n\r\ntext = [\"hello\", \"world\"]\r\n\r\nencoding = tokenizer(text, is_split_into_words=True)\r\n```\r\nSo this results in [0, 20760, 232, 2].",
"Thanks for the clarification! I've opened a PR on your branch (https://github.com/NielsRogge/transformers/pull/38) which proposes several changes including 1) changing the default behaviour so that by default a space prefix is added and including all the changes needed to make it work and 2) some small changes to resolve several of the tests that were failing.\r\n\r\nI wonder if we shouldn't just remove the option to set `add_prefix_space` to False because the result will not be satisfactory for decoding and I'm not sure we want to do any fancy tricks to make it \"work\". (Or at least we should log a message to warn the user that the option is risky).",
"Hi @NielsRogge \r\n\r\nAs the issue #13554 and PR #17092, when `input_ids` is longer than model's `max_length`, it would be split into multiple inputs, but `pixel_values` still has 1 image. Are you going to fix this right now, or next PR?\r\n\r\nHow to reproduce\r\n```python\r\nfrom transformers import AutoProcessor, AutoModelForTokenClassification\r\nfrom datasets import load_dataset\r\nfrom PIL import Image\r\nprocessor = AutoProcessor.from_pretrained(\"microsoft/layoutlmv3-large\")\r\nprocessor.feature_extractor.apply_ocr = False\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"microsoft/layoutlmv3-large\")\r\n\r\nwords = ['hello' for i in range(1000)]\r\nboxes = [[0, 1, 2, 3] for i in range(1000)]\r\nencoding = processor(\r\n image, \r\n text=words, \r\n boxes=boxes,\r\n truncation=True,\r\n padding='max_length',\r\n return_overflowing_tokens=True, \r\n return_tensors=\"pt\"\r\n)\r\n\r\nprint(encoding['input_ids'].shape) # torch.Size([2, 512])\r\nprint(encoding['pixel_values'].shape) #torch.Size([1, 3, 224, 224])\r\noverflow_to_sample_mapping = encoding.pop('overflow_to_sample_mapping')\r\nmodel(**encoding) \r\n# ---> RuntimeError: Sizes of tensors must match except in dimension 1.\r\n# Expected size 4 but got size 1 for tensor number 1 in the list.\r\n```",
"Thank you so much for your fantastic work. I was wondering if you plan to include the object detection task in LayoutLMv3 as well. I noticed that the [PubLayNet fine-tuned model weights](https://huggingface.co/HYPJUDY/layoutlmv3-base-finetuned-publaynet) have already been uploaded to HuggingFace, but I couldn't find any documentation on this capability in this repository. ",
"> EDIT: Just realized these are the visual tokens... controlled via `add_visual_labels`\r\n\r\n@NielsRogge Thanks for this contribution! \r\nWhile testing the processor, I'm seeing extra padding on the resultant labels that I did not expect and have not experienced with older versions of layoutlmv2processor. \r\n\r\n```\r\nimport numpy as np\r\nfrom transformers.models.auto.processing_auto import AutoProcessor\r\n\r\nprocessor = AutoProcessor.from_pretrained(\r\n pretrained_model_name_or_path=\"microsoft/layoutlmv3-base\",\r\n use_fast=True,\r\n add_prefix_space=True,\r\n apply_ocr=False,\r\n)\r\n\r\n# not batched\r\nwords = [\"hello\", \"world\"]\r\nboxes = [[1, 2, 3, 4], [5, 6, 7, 8]]\r\nword_labels = [1, 2]\r\nimage = np.zeros((224, 224, 3), dtype=np.uint8)\r\nresults = processor(\r\n image, words, boxes=boxes, word_labels=word_labels, return_tensors=\"pt\"\r\n)\r\nfor k, v in results.items():\r\n print(k, v.size())\r\n\r\nlabels = results.labels.squeeze().tolist()\r\nprint(labels)\r\n```\r\noutput:\r\n```\r\ninput_ids torch.Size([1, 8])\r\nattention_mask torch.Size([1, 8])\r\nbbox torch.Size([1, 8, 4])\r\nlabels torch.Size([1, 205])\r\npixel_values torch.Size([1, 3, 224, 224])\r\n[-100, 1, -100, -100, -100, -100, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]\r\n```\r\n\r\nThis happens beyond maximum seq length as well... where the labels will have a dimension seq_length + ~197\r\nIs this expected? ",
"Hi @dcyoung,\r\n\r\nthanks for taking a look. Actually you make a great point; I implemented it as the original implementation (where the authors label all visual tokens with -100 and just add a classifier on top of the entire `sequence_output`), however it makes a lot of sense to just simplify the code in `LayoutLMv3ForTokenClassification` and not make the processor do this. \r\n\r\nThanks a lot!",
"And hi @sina-ehsani, \r\n\r\nunfortunately I'm (for now) not planning to add the object detection part, because the framework being used (Mask R-CNN) is a ridiculous amount of code and it's not straightforward - for now - to add this to the Transformers library (as there's a \"one model, one file\" philosophy). So I'd advise to use the original repository for that. \r\n\r\nIt may be that in the future we add this framework, but I'm actually much more a fan of simpler frameworks like DETR and YOLOS. It would be great if someone fine-tuned a [YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos) model initialized with the weights of the [Document Image Transformer (DiT)](https://huggingface.co/docs/transformers/model_doc/dit). I feel like you would get the same performance. ",
"Thank you so much for adding the model, I had a question on segment position embeddings. How do you create segment position embeddings during inference when the labels are unknown and are just bounding boxes from an ocr. In this [notebook](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb) the test set also contains segment level bounding box. I have trained a model on segment level embeddings on my use case and it doesn't perform well on token level 2D embeddings during inference.",
"> YOLOS\r\n\r\nThanks for the idea. I will have a go at this. \r\n\r\nMy understanding unilm repo uses Detectron2 (Mask-RCNN) for the backbone of Object Detection in LayoutLMv3 for benchmarking compatibility. Would it be possible to swap out the image backbone for a vision transformer in the LayoutLMv3 training. I saw in the paper:\r\n\r\n`LayoutLMv3 is the first multimodal model in Document AI\r\nthat does not rely on a pre-trained CNN or Faster R-CNN\r\nbackbone to extract visual features, which significantly saves\r\nparameters and eliminates region annotations.`\r\n\r\nMy understanding is that LayoutLMv3 is able to generalise better with the unsupervised pre-training over the MIM+MLM+WPA objectives. It also learns correlations between the text / visual inputs that it benefits with on downstream tasks. YOLOS wouldn't include this key text information in document layout anlaysis.\r\n\r\nPlease correct me if I am wrong... I am learning here.",
"@NielsRogge \r\n\r\n> \r\n\r\nThis thread has lead me to hacking a model that combines the YolosLoss and YolosObjectDetection head with the LayoutLMv3Model to build a LayoutLMv3ObjectDetection prediction head.\r\n\r\nChanges to the LayoutLMv3Config and LayoutLMv3FeatureExtractor had to be made to allow for this.\r\n\r\nThis approach avoids the Mask R-CNN discussed.\r\n\r\nIs this something you would be interested in reviewing and integrating if I open a PR? \r\n\r\nOr does it deviate too significantly from the research paper?"
] | 1,651
| 1,662
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR implements LayoutLMv3. LayoutLMv3 doesn't require a Detectron2 backbone anymore (yay!).
The PR also includes an example script that can be used to reproduce results of the paper.
Fixes #16914
To do:
- [x] fix remaining tokenizer tests. These are very black-boxy to me. Pinging @SaulLu here.
- [x] add model to doc tests
- [x] remove `is_detection` logic
- [x] Make sure the slow tests involving `PyTesseract` pass
- [x] Merge `add_layoutlmv3_simplify` branch
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17060/reactions",
"total_count": 24,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17060/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17060",
"html_url": "https://github.com/huggingface/transformers/pull/17060",
"diff_url": "https://github.com/huggingface/transformers/pull/17060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17060.patch",
"merged_at": 1653378825000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17059/events
|
https://github.com/huggingface/transformers/pull/17059
| 1,224,122,025
|
PR_kwDOCUB6oc43OpZ8
| 17,059
|
Remove Python and use v2 action
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
Fix the model templates GitHub job which was broken with the Python 3.6 removal.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17059/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17059",
"html_url": "https://github.com/huggingface/transformers/pull/17059",
"diff_url": "https://github.com/huggingface/transformers/pull/17059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17059.patch",
"merged_at": 1651587137000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17058/events
|
https://github.com/huggingface/transformers/issues/17058
| 1,223,989,644
|
I_kwDOCUB6oc5I9JmM
| 17,058
|
Chinese parentheses can't be handled by fast tokenizer
|
{
"login": "realjanpaulus",
"id": 22560883,
"node_id": "MDQ6VXNlcjIyNTYwODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22560883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realjanpaulus",
"html_url": "https://github.com/realjanpaulus",
"followers_url": "https://api.github.com/users/realjanpaulus/followers",
"following_url": "https://api.github.com/users/realjanpaulus/following{/other_user}",
"gists_url": "https://api.github.com/users/realjanpaulus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realjanpaulus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realjanpaulus/subscriptions",
"organizations_url": "https://api.github.com/users/realjanpaulus/orgs",
"repos_url": "https://api.github.com/users/realjanpaulus/repos",
"events_url": "https://api.github.com/users/realjanpaulus/events{/privacy}",
"received_events_url": "https://api.github.com/users/realjanpaulus/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still open.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,657
| 1,657
|
NONE
| null |
### Problem description
The Chinese language uses special fullwidth characters for parentheses: "(" and ")" (parentheses with an integrated whitespace). Neither character is part of the XLM-RoBERTa tokenizer vocabulary, neither the fast nor the "slow" one. Interestingly enough, it is possible to add these characters to the tokenizer vocabulary, but only for the "slow"/non-fast tokenizer variant.
### System Info
```shell
Package Version
------------------ ---------
certifi 2021.10.8
charset-normalizer 2.0.12
click 8.1.3
filelock 3.6.0
huggingface-hub 0.5.1
idna 3.3
joblib 1.1.0
numpy 1.22.3
packaging 21.3
pip 20.0.2
pkg-resources 0.0.0
pyparsing 3.0.8
PyYAML 6.0
regex 2022.4.24
requests 2.27.1
sacremoses 0.0.53
sentencepiece 0.1.96
setuptools 44.0.0
six 1.16.0
tokenizers 0.12.1
torch 1.11.0
tqdm 4.64.0
transformers 4.18.0
typing-extensions 4.2.0
urllib3 1.26.9
```
### Who can help?
@LysandreJik, @Sau
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Code to reproduce
```python
import sentencepiece
import transformers
from transformers import AutoTokenizer
print("Transformers version:", transformers.__version__)
print("----------------")
special_chinese_parantheses = "("
print(special_chinese_parantheses)
# "slow"
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=False)
print(tokenizer.tokenize(special_chinese_parantheses))
tokenizer.add_tokens(special_chinese_parantheses)
print(tokenizer.tokenize(special_chinese_parantheses))
print("----------------")
# fast
tokenizer_fast = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=True)
print(tokenizer_fast.tokenize(special_chinese_parantheses))
tokenizer_fast.add_tokens(special_chinese_parantheses)
print(tokenizer_fast.tokenize(special_chinese_parantheses))
```
### Output
```sh
Transformers version: 4.18.0
----------------
(
['▁(']
['(']
----------------
['▁(']
['(']
```
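A possible user-side workaround (an assumption on my part, not an official fix) is to NFKC-normalize the text before it reaches the tokenizer, since Unicode compatibility normalization maps fullwidth punctuation such as "(" (U+FF08) to its ASCII counterpart:

```python
import unicodedata

def normalize_fullwidth(text: str) -> str:
    # NFKC compatibility normalization maps fullwidth forms
    # (e.g. U+FF08 FULLWIDTH LEFT PARENTHESIS) to their ASCII equivalents
    return unicodedata.normalize("NFKC", text)

print(normalize_fullwidth("(abc)"))  # -> (abc)
```

Note this changes the surface form of the text, so it is only appropriate if downstream tasks do not need to distinguish fullwidth from ASCII punctuation.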
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17058/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
}
|
https://api.github.com/repos/huggingface/transformers/issues/17058/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17057/events
|
https://github.com/huggingface/transformers/pull/17057
| 1,223,949,412
|
PR_kwDOCUB6oc43OFDc
| 17,057
|
Rewrite TensorFlow train_step and test_step
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(Requesting reviews now that @gante is back)"
] | 1,651
| 1,652
| 1,652
|
MEMBER
| null |
Draft PR for a full rewrite of the TF train/test steps. I swear this will fix like 50% of our TF issues in one PR.
Current status:
- Correctly handles output mapping across most model classes for losses + metrics
- Keras metrics are back, even with the dummy loss. (!!!!)
- Keras metrics work correctly even for multi-output models (like QA)
- In most cases, users can pass tensors in either the input dict or the labels and the model will handle them correctly.
- No more errors when calling `fit()` when the model has nested output structure (e.g. the model outputting a `past` tuple)
What's left to do:
- [X] Models with multiple unusual outputs that do not match label names may still have issues with metrics. This is relatively uncommon. We support adding a property to those classes to tell Keras what to do with the labels, but we haven't added it to any models yet. (None are failing in tests, so hopefully we won't need to worry too much about this!)
- [x] Testing testing testing! I want to rerun all notebooks/examples and make sure the user experience is good.
- [X] CI testing - We need to make sure we don't regress on any of this
- [ ] Discoverability: After this is merged we should update notebooks/examples to show off the cool new features, and document our TF workflow/philosophy somewhere that new users will find.
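The output-to-label matching described above can be sketched in plain Python. This is an illustrative reconstruction, not the actual Keras internals; the `label_to_output` mapping and its entries are assumptions for the sketch:

```python
# Sketch: Keras needs to know which model output each label tensor
# should be compared against when computing losses and metrics.
outputs = {"logits": [[0.1, 0.9]], "past_key_values": None}
labels = {"labels": [1]}

# assumed convention for this sketch: label name -> matching output name
label_to_output = {"labels": "logits", "start_positions": "start_logits"}

# keep only the outputs for which a label tensor was actually provided
matched = {out: labels[lab] for lab, out in label_to_output.items() if lab in labels}
print(matched)  # -> {'logits': [1]}
```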
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17057/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17057",
"html_url": "https://github.com/huggingface/transformers/pull/17057",
"diff_url": "https://github.com/huggingface/transformers/pull/17057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17057.patch",
"merged_at": 1652794584000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17056/events
|
https://github.com/huggingface/transformers/issues/17056
| 1,223,938,690
|
I_kwDOCUB6oc5I89KC
| 17,056
|
AutoConfig.from_pretrained("model", return_unused_kwargs=True) returns `"_from_auto": True` field against specification
|
{
"login": "GabrielKP",
"id": 40501279,
"node_id": "MDQ6VXNlcjQwNTAxMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/40501279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GabrielKP",
"html_url": "https://github.com/GabrielKP",
"followers_url": "https://api.github.com/users/GabrielKP/followers",
"following_url": "https://api.github.com/users/GabrielKP/following{/other_user}",
"gists_url": "https://api.github.com/users/GabrielKP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GabrielKP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GabrielKP/subscriptions",
"organizations_url": "https://api.github.com/users/GabrielKP/orgs",
"repos_url": "https://api.github.com/users/GabrielKP/repos",
"events_url": "https://api.github.com/users/GabrielKP/events{/privacy}",
"received_events_url": "https://api.github.com/users/GabrielKP/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I can reproduce, thanks for flagging and for taking the time to give us a clear example of code that fails!\r\nI will try to dive into this and have a fix ready later today."
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.17.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
@sgugger
Small bug in which for some cases `{"_from_auto": True}` is returned [against specification](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/auto/configuration_auto.py#L646).
Seems to originate [here](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/auto/configuration_auto.py#L671) and/or [here](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/configuration_utils.py#L659)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Replicable with
```py
>>> from transformers import AutoConfig
>>> config, kwargs = AutoConfig.from_pretrained("bert-base-uncased", return_unused_kwargs=True)
>>> kwargs
{'_from_auto': True}
```
### Expected behavior
There should be no `"_from_auto": True` field in returned dict.
```py
>>> from transformers import AutoConfig
>>> config, kwargs = AutoConfig.from_pretrained("bert-base-uncased", return_unused_kwargs=True)
>>> kwargs
{}
```
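Until this is fixed, a possible user-side workaround (my assumption, not an official API) is to drop keys prefixed with an underscore, which the library uses for internal bookkeeping:

```python
# Simulated return value illustrating the bug: unused kwargs leaking an internal flag
unused_kwargs = {"_from_auto": True, "my_custom_option": 42}

# drop internal keys (prefixed with "_") before using the unused kwargs
cleaned = {k: v for k, v in unused_kwargs.items() if not k.startswith("_")}
print(cleaned)  # -> {'my_custom_option': 42}
```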
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17056/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17055/events
|
https://github.com/huggingface/transformers/pull/17055
| 1,223,319,797
|
PR_kwDOCUB6oc43MJ8y
| 17,055
|
Fix RNG reload in resume training from epoch checkpoint
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes the reproducibility in training when checkpoints are saved every epoch. The main reason it was failing (as pointed out in #17032) is that the RNG states were never reloaded. They need to be reloaded exactly before iterating through the new epoch, as that iteration itself changes the global PyTorch RNG (even if the dataloader uses its own generator...). The new test added makes sure this reproducibility is fully tested.
While debugging this, two issues occurred, which this PR also fixes.
1. There are multiple warnings for the computation of flos when the model is not an NLP model. This PR reduces it to one.
2. The test of this reproducibility is flaky on multiple GPUs because it relies on some randomness inside the model, but the PyTorch RNG will be called in random order between the two "copies" of the model executed by `DataParallel` (an issue that wouldn't be the case with `DistributedDataParallel` but we would need to execute the test via a launcher in that case). So in the test, we only do PyTorch randomness on one or zero GPU to fix this flakiness.
Fixes #17032
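The idea of checkpointing RNG state can be illustrated with Python's built-in `random` module (a simplified analogue; the Trainer also handles the NumPy and PyTorch generators):

```python
import random

random.seed(0)
state = random.getstate()          # saved at checkpoint time
first_run = [random.random() for _ in range(3)]

random.setstate(state)             # restored just before resuming the epoch
resumed_run = [random.random() for _ in range(3)]

print(first_run == resumed_run)    # -> True: the resumed run replays the same draws
```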
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17055/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17055",
"html_url": "https://github.com/huggingface/transformers/pull/17055",
"diff_url": "https://github.com/huggingface/transformers/pull/17055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17055.patch",
"merged_at": 1651588285000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17054/events
|
https://github.com/huggingface/transformers/pull/17054
| 1,223,261,712
|
PR_kwDOCUB6oc43L9Rp
| 17,054
|
[CodeParrot] Near-deduplication with jaccard similarity
|
{
"login": "liyongsea",
"id": 6381544,
"node_id": "MDQ6VXNlcjYzODE1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6381544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyongsea",
"html_url": "https://github.com/liyongsea",
"followers_url": "https://api.github.com/users/liyongsea/followers",
"following_url": "https://api.github.com/users/liyongsea/following{/other_user}",
"gists_url": "https://api.github.com/users/liyongsea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyongsea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyongsea/subscriptions",
"organizations_url": "https://api.github.com/users/liyongsea/orgs",
"repos_url": "https://api.github.com/users/liyongsea/repos",
"events_url": "https://api.github.com/users/liyongsea/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyongsea/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @lvwerra I agree with you. I will do that. The overall code is running now, here are the next steps:\r\n\r\n- [x] refactor the code to be used in preprocess.py and clean up\r\n- [x] document statistics and performance data in the PR\r\n- [x] use dataset.map to compute minhash\r\n\r\nI will do the deduplication of the validation set in another PR probably.\r\n~question, does dataset.map put the whole dataset in RAM? I imagine it is not a problem because preprocess.py is already doing so~",
    "Hi @lvwerra there is one decision we need to make, then the PR will be ready to review.\r\nAs I mentioned before, we could use dataset.map to compute the minhash. However, there are two steps in the deduplication:\r\n- compute minhash for each code file\r\n- add into MinHashLSH (cannot be parallelized)\r\n\r\nIn the previous function, a queue is used while adding into minhash. It would be difficult to do the same using dataset.map. So the dataset.map [implementation](https://github.com/huggingface/transformers/pull/17054/files#diff-c6c3b9ed9e98c7b3b8603011208c2db7c0cf08facbf2a75ec8f56dfea0242040R119) will be almost twice as slow (to be confirmed ...) \r\n~I might prefer the dataset.map solution, which makes the code easier to read~\r\nFinally I chose the initial implementation, which reduces the computation time by half ",
    "Here are some statistics and time performance data:\r\n\r\non the dataset lvwerra/codeparrot-clean\r\n~Execution time 13h: Execution time: 2:30:00 for make_duplicate_clusters, 11:00:00 for find_cluster_extremes~\r\n\r\nOrginal dataset size: 5361373 \r\nDuplicate cluster: 757938 \r\nFiles in duplicate cluster: 2677039 \r\nUnique files in duplicate cluster: 940857\r\nFiltered dataset size: 3625191 \r\n\r\n~I think the code is ready for review. If you need to generate a dataset, you can go ahead. I might still need more days to figure out how to do find_cluster_extremes better~\r\n\r\nPlease see the next message for update",
    "multipro_find_extremes is now done with multiprocessing! This PR is ready for review.\r\nExecution time ~3h: 2:30:00 for make_duplicate_clusters, 1:00:00 for multipro_find_extremes\r\n\r\nOrginal dataset size: 5361373\r\nDuplicate cluster: 757938\r\nFiles in duplicate cluster: 2677039\r\nUnique files in duplicate cluster: 940857\r\nFiltered dataset size: 3625191\r\n@lvwerra when reviewing, pay more attention to\r\n- [Here](https://github.com/huggingface/transformers/pull/17054/files#diff-c6c3b9ed9e98c7b3b8603011208c2db7c0cf08facbf2a75ec8f56dfea0242040R140) I use a global parameter to be able to do multiprocessing in an efficient way\r\n",
    "We are good to go, I welcome your thoughts @lvwerra \r\nI will try to run some last tests"
] | 1,651
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR addresses the code duplication issue described in this thread:
https://twitter.com/miltos1/status/1497126435261083649?s=20&t=v5-vwaEtXLrgZ_GuZHrPKQ
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
# run the code
```
from datasets import load_dataset
from minhash_deduplication import deduplicate_dataset
ds = load_dataset("lvwerra/codeparrot-clean", split="train")
ds_dedup, duplicate_clusters = deduplicate_dataset(ds)
```
The function runs in 2:30 (make_duplicate_clusters) + 1:30 (find_extremes) on an 8-core VM
```
Orginal dataset size: 5361373
Duplicate cluster: 757944
Files in duplicate cluster: 2677040
Unique files in duplicate cluster: 911947
Filtered dataset size: 3596280
```
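The Jaccard similarity at the core of the near-deduplication can be sketched as follows (whitespace tokenization is a simplification of what the MinHash pipeline actually hashes):

```python
def jaccard_similarity(a: set, b: set) -> float:
    # |A ∩ B| / |A ∪ B|: 1.0 for identical token sets, 0.0 for disjoint ones
    return len(a & b) / len(a | b)

# two near-duplicate code snippets differing only in variable names
doc1 = set("def add ( x , y ) : return x + y".split())
doc2 = set("def add ( a , b ) : return a + b".split())
print(jaccard_similarity(doc1, doc2))
```

Pairs whose similarity exceeds a chosen threshold end up in the same duplicate cluster; MinHashLSH makes finding such pairs feasible without comparing every pair of files.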
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17054/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17054/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17054",
"html_url": "https://github.com/huggingface/transformers/pull/17054",
"diff_url": "https://github.com/huggingface/transformers/pull/17054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17054.patch",
"merged_at": 1655814216000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17053/events
|
https://github.com/huggingface/transformers/pull/17053
| 1,223,188,138
|
PR_kwDOCUB6oc43Ltjm
| 17,053
|
Make Trainer compatible with sharded checkpoints
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
The `Trainer` is currently incompatible with the new sharded checkpoint feature in two places:
- resuming from a checkpoint
- loading the best model at the end of training
In both cases, the model state dict is loaded back inside the model **but** there is no model save file if the model was above the default size for sharding, resulting in errors (as was pointed out by #16976 ).
This PR addresses this by:
1. Creating a new function `load_sharded_checkpoint` that does the same thing as `model.load_state_dict` for regular model files, but loads a sharded checkpoint (and errors in case of missing/unexpected keys when `strict=True`).
2. Use that function inside the Trainer in the two places mentioned above.
A test is added to make sure resuming works from a sharded checkpoint.
Fixes #16976
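The shard-by-shard loading can be sketched using the index file that sharded checkpoints ship with (`pytorch_model.bin.index.json` maps each parameter name to a shard file). This is an illustrative reconstruction, not the actual implementation:

```python
import json

# minimal index mirroring the sharded-checkpoint format (contents are made up)
index_json = json.dumps({
    "weight_map": {
        "embeddings.weight": "pytorch_model-00001-of-00002.bin",
        "encoder.layer.0.weight": "pytorch_model-00001-of-00002.bin",
        "lm_head.weight": "pytorch_model-00002-of-00002.bin",
    }
})
weight_map = json.loads(index_json)["weight_map"]

# group parameter names by shard so each shard file is read exactly once,
# keeping at most one shard's worth of weights in memory at a time
shards = {}
for param_name, shard_file in weight_map.items():
    shards.setdefault(shard_file, []).append(param_name)

print(sorted(shards))
```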
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17053/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17053",
"html_url": "https://github.com/huggingface/transformers/pull/17053",
"diff_url": "https://github.com/huggingface/transformers/pull/17053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17053.patch",
"merged_at": 1651586110000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17052/events
|
https://github.com/huggingface/transformers/issues/17052
| 1,223,149,438
|
I_kwDOCUB6oc5I58d-
| 17,052
|
ValueError: too many values to unpack (expected 2) using BERT to training
|
{
"login": "ksdihgfmata",
"id": 80060744,
"node_id": "MDQ6VXNlcjgwMDYwNzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/80060744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksdihgfmata",
"html_url": "https://github.com/ksdihgfmata",
"followers_url": "https://api.github.com/users/ksdihgfmata/followers",
"following_url": "https://api.github.com/users/ksdihgfmata/following{/other_user}",
"gists_url": "https://api.github.com/users/ksdihgfmata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksdihgfmata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksdihgfmata/subscriptions",
"organizations_url": "https://api.github.com/users/ksdihgfmata/orgs",
"repos_url": "https://api.github.com/users/ksdihgfmata/repos",
"events_url": "https://api.github.com/users/ksdihgfmata/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksdihgfmata/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
While preparing the dataset and dataloader and defining the model, I get this error: ValueError: too many values to unpack (expected 2).
```
### Who can help?
@LysandreJik, @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

class dataset(Dataset):
    def __init__(self, dataframe, tokenizer, max_len):
        self.len = len(dataframe)
        self.data = dataframe
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __getitem__(self, index):
        # step 1: get the sentence and word labels
        sentence = self.data.sentence[index].strip().split()
        word_labels = self.data.word_labels[index].split(",")

        # step 2: use tokenizer to encode sentence (includes padding/truncation up to max length)
        # BertTokenizerFast provides a handy "return_offsets_mapping" functionality for individual tokens
        encoding = self.tokenizer(sentence,
                                  # is_pretokenized=True,
                                  return_offsets_mapping=True,
                                  padding='max_length',
                                  truncation=True,
                                  max_length=self.max_len)

        # step 3: create token labels only for first word pieces of each tokenized word
        labels = [labels_to_ids[label] for label in word_labels]
        # code based on https://huggingface.co/transformers/custom_datasets.html#tok-ner
        # create an empty array of -100 of length max_length
        encoded_labels = np.ones(len(encoding["offset_mapping"]), dtype=int) * -100

        # set only labels whose first offset position is 0 and the second is not 0
        i = 0
        for idx, mapping in enumerate(encoding["offset_mapping"]):
            if mapping[0] == 0 and mapping[1] != 0:
                # overwrite label
                encoded_labels[idx] = labels[i]
                i += 1

        # step 4: turn everything into PyTorch tensors
        item = {key: torch.as_tensor(val) for key, val in encoding.items()}
        item['labels'] = torch.as_tensor(encoded_labels)
        return item

    def __len__(self):
        return self.len

train_size = 0.8
train_dataset = data.sample(frac=train_size, random_state=200)
test_dataset = data.drop(train_dataset.index).reset_index(drop=True)
train_dataset = train_dataset.reset_index(drop=True)

print("FULL Dataset: {}".format(data.shape))
print("TRAIN Dataset: {}".format(train_dataset.shape))
print("TEST Dataset: {}".format(test_dataset.shape))

training_set = dataset(train_dataset, tokenizer, MAX_LEN)
testing_set = dataset(test_dataset, tokenizer, MAX_LEN)

train_params = {'batch_size': TRAIN_BATCH_SIZE,
                'shuffle': True,
                'num_workers': 0
                }
test_params = {'batch_size': VALID_BATCH_SIZE,
               'shuffle': True,
               'num_workers': 0
               }

training_loader = DataLoader(training_set, **train_params)
testing_loader = DataLoader(testing_set, **test_params)

model = BertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=len(labels_to_ids))
model.to(device)

inputs = training_set[2]
input_ids = inputs["input_ids"].unsqueeze(0)
attention_mask = inputs["attention_mask"].unsqueeze(0)
labels = inputs["labels"].unsqueeze(0)
input_ids = input_ids.to(device)
attention_mask = attention_mask.to(device)
labels = labels.to(device)

outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
initial_loss = outputs[0]
initial_loss
```
And here is the full traceback:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-31-c8d1cd345a9a> in <module>()
      8 labels = labels.to(device)
      9
---> 10 outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
     11 initial_loss = outputs[0]
     12 initial_loss

3 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
    948             raise ValueError("You have to specify either input_ids or inputs_embeds")
    949
--> 950         batch_size, seq_length = input_shape
    951         device = input_ids.device if input_ids is not None else inputs_embeds.device
    952

ValueError: too many values to unpack (expected 2)
```
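The traceback suggests `input_ids` is 3-D rather than the expected `(batch_size, seq_length)`. A likely cause (not confirmed in this issue) is that the pre-split word list is tokenized without `is_split_into_words=True` (the commented-out `is_pretokenized=True` is the older name for this flag), so the tokenizer treats each word as a separate example; `unsqueeze(0)` then adds a third dimension. A dependency-free sketch of the failure mode, with hypothetical shapes:

```python
# Mirrors the unpack in modeling_bert.py: forward() expects exactly
# a 2-tuple (batch_size, seq_length) as the input shape.
def forward(input_shape):
    batch_size, seq_length = input_shape
    return batch_size, seq_length

# Correct call: one padded sentence -> shape (1, max_len)
assert forward((1, 128)) == (1, 128)

# Buggy call: per-word "batch" of shape (num_words, max_len), then
# unsqueeze(0) -> 3-D shape, which cannot unpack into two values.
try:
    forward((1, 7, 128))
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```

If that is the cause, passing `is_split_into_words=True` in the tokenizer call should restore the expected two-dimensional shape.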
### Expected behavior
```shell
I expect to be able to train my NER BERT model.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17052/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17052/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17051/events
|
https://github.com/huggingface/transformers/issues/17051
| 1,223,112,039
|
I_kwDOCUB6oc5I5zVn
| 17,051
|
Collection of Tokenizer issues
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] |
[
"cc @LysandreJik @sgugger @SaulLu @Narsil @patil-suraj ",
"Internal thread: https://huggingface.slack.com/archives/C01N44FJDHT/p1647966224411599",
"Another one: https://github.com/huggingface/transformers/issues/16787#issuecomment-1100009727",
"It would be nice to add those to a project so that we may track the resolution of these issues.",
"Another one: https://github.com/huggingface/transformers/issues/16225\r\n",
"Another one: https://github.com/huggingface/transformers/issues/17595",
"https://github.com/huggingface/tokenizers/issues/1011"
] | 1,651
| 1,659
| null |
MEMBER
| null |
### System Info
```shell
Transformers + Tokenizers
```
### Who can help?
This Issue is a summary of multiple problems that we are currently encountering with Tokenizers. To solve them we'll need a more profound discussion of:
- To what extent fast and slow tokenizers should be aligned
- Whether all slow tokenizers should be kept
- How to treat special tokens
- Whether all internal methods of tokenizer should be exposed
Relevant issues/PRs:
https://github.com/huggingface/transformers/issues/15420
https://github.com/huggingface/transformers/issues/16336
https://github.com/huggingface/transformers/issues/16334
https://github.com/huggingface/transformers/issues/16337
https://github.com/huggingface/transformers/issues/15138
https://github.com/huggingface/transformers/issues/16339
https://github.com/huggingface/transformers/pull/15775
To community:
At the moment we sadly don't find the time to dive deeper here, but we're trying hard to allocate time to discuss the strategy here soon.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See issues above
### Expected behavior
```shell
Don't know yet
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17051/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17051/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/17050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17050/events
|
https://github.com/huggingface/transformers/pull/17050
| 1,223,096,503
|
PR_kwDOCUB6oc43LaTX
| 17,050
|
Allow all imports from transformers
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
This PR enables doing `from transformers import *` when just the base `transformers` install is done. This should always have been possible, but due to some errors in the imports it failed with a `sentencepiece` import error. This PR fixes that for both the FNet and CPM tokenizers.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17050/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17050",
"html_url": "https://github.com/huggingface/transformers/pull/17050",
"diff_url": "https://github.com/huggingface/transformers/pull/17050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17050.patch",
"merged_at": 1651510060000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17049/events
|
https://github.com/huggingface/transformers/pull/17049
| 1,223,096,427
|
PR_kwDOCUB6oc43LaSm
| 17,049
|
Make the sacremoses dependency optional
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
Sacremoses is currently installed by default when installing `transformers`, but it should not be needed. This is an artifact of the past, and we have since introduced optional dependencies, which applies perfectly to this situation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17049/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17049",
"html_url": "https://github.com/huggingface/transformers/pull/17049",
"diff_url": "https://github.com/huggingface/transformers/pull/17049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17049.patch",
"merged_at": 1651510067000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17048/events
|
https://github.com/huggingface/transformers/pull/17048
| 1,223,046,016
|
PR_kwDOCUB6oc43LPs9
| 17,048
|
Fix hashing for deduplication in CodeParrot
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix the hashing mechanism to be process-independent. Python's built-in `hash` typically does not generate the same hash in different processes, so the maximum number of occurrences of a text after deduplication was `num_proc` instead of `1`.
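For context: Python salts `str` hashes per interpreter process unless `PYTHONHASHSEED` is fixed, so two workers can disagree on `hash(text)`. A minimal sketch of a process-stable alternative (illustrative only; the actual fix in this PR may use a different digest):

```python
import hashlib

def stable_hash(text: str) -> str:
    # Deterministic across processes and runs, unlike the built-in
    # hash(text), which is salted per interpreter process for strings.
    return hashlib.md5(text.encode("utf-8")).hexdigest()

# The same text always maps to the same key, so deduplication
# across num_proc workers collapses duplicates to a single copy.
assert stable_hash("def f(): pass") == stable_hash("def f(): pass")
print(stable_hash("hello"))  # 5d41402abc4b2a76b9719d911017c592
```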
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@lvwerra @loubnabnl
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17048/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17048/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17048",
"html_url": "https://github.com/huggingface/transformers/pull/17048",
"diff_url": "https://github.com/huggingface/transformers/pull/17048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17048.patch",
"merged_at": 1651646425000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17047/events
|
https://github.com/huggingface/transformers/pull/17047
| 1,223,016,938
|
PR_kwDOCUB6oc43LJiN
| 17,047
|
Add type hints for BERTGeneration
|
{
"login": "robsmith155",
"id": 44686932,
"node_id": "MDQ6VXNlcjQ0Njg2OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/44686932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robsmith155",
"html_url": "https://github.com/robsmith155",
"followers_url": "https://api.github.com/users/robsmith155/followers",
"following_url": "https://api.github.com/users/robsmith155/following{/other_user}",
"gists_url": "https://api.github.com/users/robsmith155/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robsmith155/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robsmith155/subscriptions",
"organizations_url": "https://api.github.com/users/robsmith155/orgs",
"repos_url": "https://api.github.com/users/robsmith155/repos",
"events_url": "https://api.github.com/users/robsmith155/events{/privacy}",
"received_events_url": "https://api.github.com/users/robsmith155/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I wasn't completely sure on what to use for the `past_key_values` argument. I set it to `Optional[Tuple[Tuple[torch.FloatTensor]]]`, but let me know if this is wrong. Also, not sure if I should also add type hints for the `BertGenerationConfig` class?\r\n\r\nWhat you put here is fine, and type hints for `BertGenerationConfig` are nice but optional - if you want to do them you can, but the main thing we're interested in is the core model classes. Let me know either way - if you don't want to do it, this is ready to merge now!",
"Okay great, you can go ahead and merge it then. I'll run your notebook to see what else needs to be done and work on some of those instead. Cheers",
"Got it. Thanks for the PR!"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
I added type hints for the `BERTGenerationEncoder` and `BERTGenerationDecoder` classes as requested in [#16059](https://github.com/huggingface/transformers/issues/16059) and demonstrated in [#16074](https://github.com/huggingface/transformers/pull/16074).
I wasn't completely sure on what to use for the `past_key_values` argument. I set it to `Optional[Tuple[Tuple[torch.FloatTensor]]]`, but let me know if this is wrong. Also, not sure if I should also add type hints for the `BertGenerationConfig` class?
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17047/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17047",
"html_url": "https://github.com/huggingface/transformers/pull/17047",
"diff_url": "https://github.com/huggingface/transformers/pull/17047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17047.patch",
"merged_at": 1651749766000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17046/events
|
https://github.com/huggingface/transformers/pull/17046
| 1,223,013,332
|
PR_kwDOCUB6oc43LIxM
| 17,046
|
Fix no_trainer examples to properly calculate the number of samples
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @muellerzr, @sgugger, in case I specify the argument `max_train_steps` instead of `num_train_epochs` while launching the training script, I need to recalculate the `num_train_epochs` after `accelerate.prepare` instead of `max_train_steps` right? Am I missing something?",
"@kowndinya-renduchintala we already do this for you 😄 \r\n\r\nhttps://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py#L425"
] | 1,651
| 1,654
| 1,651
|
CONTRIBUTOR
| null |
# Fix number of samples for `no_trainer` scripts
## What does this add?
This PR fixes all of the `no_trainer` scripts to use the right number of training steps after the length of the dataloader is changed by `accelerator.prepare`.
## Why is it needed?
Currently, in a multi-process setup, the break condition and the progress bar still use the old number of steps, even though the length of the dataloaders has changed.
Simplified example:
If the dataloader starts with 128 batches and 2 GPUs are used, each dataloader ends up with 64 batches. The progress bar and the break condition should therefore use `64`; both currently still use 128.
## What parts of the API does this impact?
### User-facing:
All scripts have a recalculation of the max_train_steps after `accelerate.prepare`
## Basic Usage Example(s):
```python
# Prepare everything with our `accelerator`.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# We need to recalculate our total training steps
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
```
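Plugging the simplified example above into that recalculation, with hypothetical hyperparameters (illustration only):

```python
import math

# Hypothetical setup from the PR description: 128 batches split across 2 GPUs,
# so each prepared dataloader sees 64 batches per epoch.
batches_per_process = 128 // 2
gradient_accumulation_steps = 4
num_train_epochs = 3

# Same formula as in the snippet above, applied after accelerator.prepare.
num_update_steps_per_epoch = math.ceil(batches_per_process / gradient_accumulation_steps)
max_train_steps = num_train_epochs * num_update_steps_per_epoch

print(num_update_steps_per_epoch)  # 16
print(max_train_steps)             # 48
```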
## When would I use it, and when wouldn't I?
While this is always used, technically it is only needed when the number of nodes > 1.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17046/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17046",
"html_url": "https://github.com/huggingface/transformers/pull/17046",
"diff_url": "https://github.com/huggingface/transformers/pull/17046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17046.patch",
"merged_at": 1651506985000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17045/events
|
https://github.com/huggingface/transformers/pull/17045
| 1,222,968,604
|
PR_kwDOCUB6oc43K_TQ
| 17,045
|
Clean up setup.py
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
This PR cleans up `setup.py` a bit in two ways:
- Remove support for Python 3.6 as we said the current release was the last one with Python 3.6
- Clean up a bit the authors field, description and keywords to emphasize the multimodal support.
Since the Hugging Face team is growing, I propose replacing the authors field with something more generic than a list of names; let me know if you have a better idea.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17045/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17045",
"html_url": "https://github.com/huggingface/transformers/pull/17045",
"diff_url": "https://github.com/huggingface/transformers/pull/17045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17045.patch",
"merged_at": 1651510698000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17044/events
|
https://github.com/huggingface/transformers/pull/17044
| 1,222,943,550
|
PR_kwDOCUB6oc43K581
| 17,044
|
Update no_trainer examples to use new logger
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"rereview for propagate sanity check then all good 🤗 "
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# Update `no_trainer` examples to use the new Accelerate logger
## What does this add?
- Accelerate recently added a [new logger](https://github.com/huggingface/accelerate/pull/337/) to help deal with repeated logs across all processes. If a message should be logged on all processes, the new kwarg `main_process_only=False` should be passed in.
This also helps solve an annoyance users were pointing out: repeated logs led to misunderstandings of how the internal API was behaving.
## What parts of the API does this impact?
### User-facing:
The examples now show using the new `get_logger()` function from Accelerate
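The idea behind the new logger can be sketched without Accelerate using stdlib `logging`; the class and `process_index` attribute below are illustrative stand-ins, not Accelerate's actual API:

```python
import logging

class ProcessAwareLogger:
    """Suppress duplicate log lines on non-main processes unless
    explicitly requested with main_process_only=False."""

    def __init__(self, name: str, process_index: int):
        self._logger = logging.getLogger(name)
        self.process_index = process_index

    def info(self, msg: str, main_process_only: bool = True) -> bool:
        # Returns True if the message was emitted (for demonstration).
        if main_process_only and self.process_index != 0:
            return False
        self._logger.info(msg)
        return True

main = ProcessAwareLogger("train", process_index=0)
worker = ProcessAwareLogger("train", process_index=1)

assert main.info("starting training") is True
assert worker.info("starting training") is False          # deduplicated
assert worker.info("per-rank stats", main_process_only=False) is True
```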
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17044/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17044",
"html_url": "https://github.com/huggingface/transformers/pull/17044",
"diff_url": "https://github.com/huggingface/transformers/pull/17044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17044.patch",
"merged_at": 1651506975000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17043/events
|
https://github.com/huggingface/transformers/pull/17043
| 1,222,866,350
|
PR_kwDOCUB6oc43Kpsl
| 17,043
|
[Trainer] Move logic for checkpoint loading into separate methods for easy overriding
|
{
"login": "calpt",
"id": 36051308,
"node_id": "MDQ6VXNlcjM2MDUxMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calpt",
"html_url": "https://github.com/calpt",
"followers_url": "https://api.github.com/users/calpt/followers",
"following_url": "https://api.github.com/users/calpt/following{/other_user}",
"gists_url": "https://api.github.com/users/calpt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calpt/subscriptions",
"organizations_url": "https://api.github.com/users/calpt/orgs",
"repos_url": "https://api.github.com/users/calpt/repos",
"events_url": "https://api.github.com/users/calpt/events{/privacy}",
"received_events_url": "https://api.github.com/users/calpt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR does a small refactoring of the `Trainer` class; specifically, it moves the logic for the following two steps out of the training loop into separate helper methods:
- loading a pre-existing checkpoint into the Trainer before the training starts is moved into the `_load_from_checkpoint()` method.
- loading the best evaluated model checkpoint after training has completed is moved into the `_load_best_model()` method.
The PR does not change any existing logic in any way.
## Motivation
In [our library](https://github.com/Adapter-Hub/adapter-transformers), we implement a custom Trainer class that subclasses your great built-in Trainer class. However, as we don't save full model checkpoints during training, the mentioned steps for checkpoint loading are not applicable to our use case. Moving this logic to separate methods would be super helpful to us (and potentially others), since we could easily override these helper methods without modifying the training loop itself.
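The override pattern this enables can be sketched in plain Python — the method names `_load_from_checkpoint` and `_load_best_model` mirror the PR, but the class bodies below are illustrative stand-ins, not the real `Trainer` logic:

```python
class Trainer:
    """Sketch of the template-method pattern: the training loop calls
    overridable hooks instead of inlining the checkpoint logic."""

    def __init__(self):
        self.events = []  # records which hooks ran, for illustration

    def _load_from_checkpoint(self, resume_from_checkpoint):
        # default: restore a full model checkpoint before training
        self.events.append(f"full checkpoint <- {resume_from_checkpoint}")

    def _load_best_model(self):
        # default: reload the best evaluated full-model checkpoint
        self.events.append("best full model")

    def train(self, resume_from_checkpoint=None):
        if resume_from_checkpoint is not None:
            self._load_from_checkpoint(resume_from_checkpoint)
        # ... the training loop itself stays untouched ...
        self._load_best_model()


class AdapterTrainer(Trainer):
    """Overrides only the hooks; the training loop is inherited as-is."""

    def _load_from_checkpoint(self, resume_from_checkpoint):
        self.events.append(f"adapter weights <- {resume_from_checkpoint}")

    def _load_best_model(self):
        self.events.append("best adapter weights")
```

A subclass that saves only adapter weights can now change checkpoint handling without copying the whole training loop.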
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
(cc @hSterz)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17043/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17043",
"html_url": "https://github.com/huggingface/transformers/pull/17043",
"diff_url": "https://github.com/huggingface/transformers/pull/17043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17043.patch",
"merged_at": 1651502438000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17042/events
|
https://github.com/huggingface/transformers/pull/17042
| 1,222,853,994
|
PR_kwDOCUB6oc43KnJs
| 17,042
|
Disable Flax GPU tests on push
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
The Flax GPU tests have been failing for more than a month on every commit on which they run on master (the error changed 20 days ago to an install error). This makes CI checks hard to read, so those tests are disabled until someone really fixes them.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17042/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17042",
"html_url": "https://github.com/huggingface/transformers/pull/17042",
"diff_url": "https://github.com/huggingface/transformers/pull/17042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17042.patch",
"merged_at": 1651501553000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17041/events
|
https://github.com/huggingface/transformers/issues/17041
| 1,222,831,565
|
I_kwDOCUB6oc5I4u3N
| 17,041
|
No separation between Torch and TF examples in create_a_model.md
|
{
"login": "ignacioct",
"id": 56955040,
"node_id": "MDQ6VXNlcjU2OTU1MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/56955040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ignacioct",
"html_url": "https://github.com/ignacioct",
"followers_url": "https://api.github.com/users/ignacioct/followers",
"following_url": "https://api.github.com/users/ignacioct/following{/other_user}",
"gists_url": "https://api.github.com/users/ignacioct/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ignacioct/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ignacioct/subscriptions",
"organizations_url": "https://api.github.com/users/ignacioct/orgs",
"repos_url": "https://api.github.com/users/ignacioct/repos",
"events_url": "https://api.github.com/users/ignacioct/events{/privacy}",
"received_events_url": "https://api.github.com/users/ignacioct/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, thanks for your help in translating the docs! :)\r\n\r\nFrom my end on the `main` version of the docs, there are two separate blocks for PyTorch and TensorFlow content:\r\n\r\n",
"Oh, I was directly looking using https://github.com/huggingface/transformers/blob/main/docs/source/en/create_a_model.mdx, and there is no separation there I think (that should also be the main version). If that is handled in the docs, I can just go on and replace the text in the same fashion. \r\n\r\nThank you!"
] | 1,651
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
Hi!
While writing the Spanish translation of the guide to create a custom model (#15947), I've come across something that I think is not intentional. In the Model section, we have this first paragraph explaining how to start initializing and customizing a custom model.

And then, immediately after that, the text repeats itself, but this time using TF models.

I think there is a missing subsection title separating the two ways of doing the procedure, or the information is redundant and one of them should be removed. Either way, it is strange to read this from top to bottom and see the text repeat itself without any cue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17041/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17040/events
|
https://github.com/huggingface/transformers/issues/17040
| 1,222,715,630
|
I_kwDOCUB6oc5I4Sju
| 17,040
|
error with Vision Transformer (ViT)
|
{
"login": "sunhaozhepy",
"id": 73462159,
"node_id": "MDQ6VXNlcjczNDYyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunhaozhepy",
"html_url": "https://github.com/sunhaozhepy",
"followers_url": "https://api.github.com/users/sunhaozhepy/followers",
"following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}",
"gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions",
"organizations_url": "https://api.github.com/users/sunhaozhepy/orgs",
"repos_url": "https://api.github.com/users/sunhaozhepy/repos",
"events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunhaozhepy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
There seems to be a problem with the Hugging Face Vision Transformer: it takes up all the GPU memory and makes it impossible to train the model.
Inference with a single image, just like in `https://huggingface.co/docs/transformers/model_doc/vit`, works well. But when I tried to do some fine-tuning with the MNIST dataset integrated in PyTorch, thus with batches, it suddenly stopped working. Some details concerning my problem:
1. PyTorch MNIST (torchvision.datasets.MNIST) is not stored in common image file formats (e.g. .jpg, .jpeg, .png), which forbids me from using ImageFolder;
2. and as the common dataloader can't handle batches of images, I had to pass my custom transform (which uses the feature_extractor) as a parameter when loading the dataset. So it appears to me that the feature extractor is handling one image at a time.
Here's my code:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets
from transformers import ViTFeatureExtractor, ViTForImageClassification
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
class ImageTransform:
def __init__(self):
pass
def __call__(self, image):
output = torch.from_numpy(feature_extractor(images=image.convert("RGB")).pixel_values[0])
return output
mnist_train_dataset = datasets.MNIST("/mnist_root", train=True, download=True, transform=ImageTransform())
mnist_test_dataset = datasets.MNIST("/mnist_root", train=False, download=True, transform=ImageTransform())
mnist_train_dataloader = DataLoader(mnist_train_dataset, batch_size=64, shuffle=True)
mnist_test_dataloader = DataLoader(mnist_test_dataset, batch_size=64, shuffle=False)
device = torch.cuda.current_device() if torch.cuda.is_available() else 'cpu'
print(f"using {device}.")
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k').to(device)
optimizer = optim.AdamW(model.parameters(), lr=1e-4)
loss_function = nn.CrossEntropyLoss()
num_epochs = 3
for epoch in range(num_epochs):
model.train()
epoch_loss = 0
for images, labels in mnist_train_dataloader:
optimizer.zero_grad()
images, labels = images.to(device), labels.to(device)
outputs = model(pixel_values=images)
loss = loss_function(outputs.logits, labels)
epoch_loss += loss * len(images)
loss.backward()
optimizer.step()
print(f"Epoch {epoch + 1}: Cross Entropy loss = {epoch_loss / len(mnist_train_dataset)}")
```
The error message:
```
Traceback (most recent call last):
File "main.py", line 56, in <module>
outputs = model(pixel_values=images)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 603, in forward
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 507, in forward
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 346, in forward
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 278, in forward
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 221, in forward
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\transformers-4.6.1-py3.8.egg\transformers\models\vit\modeling_vit.py", line 165, in forward
RuntimeError: CUDA out of memory. Tried to allocate 38.00 MiB (GPU 0; 2.00 GiB total capacity; 1.63 GiB already allocated; 0 bytes free; 1.70 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I thought at first that the batch size was too big. I reduced it to `batch_size=2` (of course just to highlight the effect), and the error message became:
```
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:247: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:247: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "main.py", line 59, in <module>
loss.backward()
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "C:\Users\Sun\anaconda3\envs\squad\lib\site-packages\torch\autograd\__init__.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
```
And when I searched on Google, people are telling me that it is also due to a limit of GPU memory allocation...
I really do hope that this can be solved as I can't quite figure out how to properly use the official model release of Google... Thank you very much.
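For reference, the second failure (`Assertion t >= 0 && t < n_classes`) usually means the target labels fall outside the classifier head's configured number of classes rather than being a memory problem; a dependency-free sketch of that sanity check (the `num_labels` name mirrors the `transformers` config field, everything else is illustrative):

```python
def check_labels(labels, num_labels):
    """Raise early if any target label is outside [0, num_labels);
    on GPU this otherwise surfaces as a cryptic device-side assert."""
    bad = [t for t in labels if not (0 <= t < num_labels)]
    if bad:
        raise ValueError(
            f"labels {bad} out of range for a {num_labels}-class head; "
            f"pass num_labels={max(labels) + 1} when loading the model"
        )
    return True
```

MNIST has 10 classes, so a head configured with fewer labels would trip this check.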
BTW my GPU configuration:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 512.15 Driver Version: 512.15 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 Off | N/A |
| N/A 36C P0 N/A / N/A | 0MiB / 2048MiB | 3% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17040/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17039/events
|
https://github.com/huggingface/transformers/issues/17039
| 1,222,679,382
|
I_kwDOCUB6oc5I4JtW
| 17,039
|
Unable to reproduce gigawords results from google/pegasus-gigaword
|
{
"login": "xu1998hz",
"id": 30398952,
"node_id": "MDQ6VXNlcjMwMzk4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/30398952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xu1998hz",
"html_url": "https://github.com/xu1998hz",
"followers_url": "https://api.github.com/users/xu1998hz/followers",
"following_url": "https://api.github.com/users/xu1998hz/following{/other_user}",
"gists_url": "https://api.github.com/users/xu1998hz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xu1998hz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xu1998hz/subscriptions",
"organizations_url": "https://api.github.com/users/xu1998hz/orgs",
"repos_url": "https://api.github.com/users/xu1998hz/repos",
"events_url": "https://api.github.com/users/xu1998hz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xu1998hz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @xu1998hz,\r\n\r\nWe're trying to keep Transformers issues for bugs in the core library. Could you try to use the forum instead: https://discuss.huggingface.co/ ? :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
@patrickvonplaten
Hi Patrick,
I am trying to reproduce the PEGASUS results for gigaword. I used the gigaword dataset from the datasets library and directly used its test split without further preprocessing. I used `PegasusForConditionalGeneration` and `PegasusTokenizer` (with the Google checkpoint `google/pegasus-gigaword`) to decode summaries using the default settings. However, my ROUGE scores deviate a bit from what the original paper reported (my ROUGE-1/2/L: 28/12/25 vs. 39.65/20.47/36.76). I wonder if my setup is incorrect.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17039/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17038/events
|
https://github.com/huggingface/transformers/pull/17038
| 1,222,647,480
|
PR_kwDOCUB6oc43J7cB
| 17,038
|
Fix `LayoutXLM` docstrings
|
{
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"ok got closed without a reason.",
"Hi @qqaatw,\r\n\r\nApologies this didn't get merged yet. I would like to re-open the PR, but seems like that's not possible anymore as the branch is deleted. Could you open a new PR?\r\n\r\nApologies again for how this was treated.",
"@NielsRogge I restored the branch, I don't know why it was deleted on my behalf.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @qqaatw,\r\n\r\nfeel free to re-open this PR and apply the suggestions",
"@NielsRogge I'm not able to re-open it. There is no button for re-opening."
] | 1,651
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
A follow-up PR for #16187.
Also fixes a legacy issue where the `ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING` of both `layoutlmv2` and `layoutxlm` is incorrect: it should look like [this](https://github.com/huggingface/transformers/blob/ff846e9b28358e5741dea5058433f7bcf8e7de76/src/transformers/tokenization_utils_base.py#L1311-L1363) instead of being a copy of `ENCODE_KWARGS_DOCSTRING`.
@NielsRogge @LysandreJik
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17038/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17038",
"html_url": "https://github.com/huggingface/transformers/pull/17038",
"diff_url": "https://github.com/huggingface/transformers/pull/17038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17038.patch",
"merged_at": 1658303397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17037/events
|
https://github.com/huggingface/transformers/issues/17037
| 1,222,631,983
|
I_kwDOCUB6oc5I3-Iv
| 17,037
|
Make DETR `pixel_values` input optional
|
{
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
### Feature request
Currently the `pixel_values` input of `DetrModel` and `DetrForObjectDetection` is a required argument, which is nearly useless when `encoder_outputs` is specified.
Therefore, I propose to make `pixel_values` optional and infer batch size for subsequent uses from the `encoder_outputs` when it is specified. We may also need to add a new optional `position_embeddings` argument for the decoder since the backbone is no longer used and no longer produces the embeddings in this case.
The same approach can be seen in many models, e.g. BERT, which also has its `input_ids` optional:
https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/models/bert/modeling_bert.py#L912-L914
The only issue is that in `DetrForSegmentation`, `pixel_values` is required for producing feature maps and reconstructing the predicted mask, so the proposal is not applicable to this model.
### Motivation
Described above.
### Your contribution
Can make a PR.
@NielsRogge What do you think :) ?
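The proposed signature change can be sketched in plain Python — argument names mirror the issue, but the function body is a placeholder, not the actual DETR forward pass:

```python
def detr_forward(pixel_values=None, encoder_outputs=None):
    """Sketch: accept either raw inputs or precomputed encoder outputs,
    mirroring how BERT makes input_ids optional."""
    if pixel_values is None and encoder_outputs is None:
        raise ValueError(
            "You have to specify either pixel_values or encoder_outputs"
        )
    if encoder_outputs is not None:
        # backbone is skipped; infer batch size from the encoder outputs
        batch_size = len(encoder_outputs[0])
    else:
        batch_size = len(pixel_values)
    return batch_size
```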
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17037/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17036/events
|
https://github.com/huggingface/transformers/pull/17036
| 1,222,618,951
|
PR_kwDOCUB6oc43J1fG
| 17,036
|
[Flax(Speech)EncoderDecoder] Fix bug in `decoder_module`
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
The current use of `decoder_module` assumes that `encoder_hidden_states` is the fourth positional argument of the decoder's call method. We see that this is indeed true of the two current Flax decoder models: [`FlaxGPT2LMHeadModel`](https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/models/gpt2/modeling_flax_gpt2.py#L691) and [`FlaxBartForCausalLM`](https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/models/bart/modeling_flax_bart.py#L1911). However, for other possible decoder models, such as the work-in-progress [`FlaxBertForCausalLM`](https://github.com/huggingface/transformers/blob/9c9e49bd3aeb3f84c0d61b7f0fdca8ea853ac5a1/src/transformers/models/bert/modeling_flax_bert.py#L1545), there may be additional positional arguments (such as `token_type_ids` or `head_mask`) **prior** to `encoder_hidden_states`. To handle this more general case, we should not assume `encoder_hidden_states` is necessarily the fourth positional argument, and should instead pass it as a _keyword argument_.
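The failure mode is easy to reproduce with plain Python functions standing in for the decoder call signatures (these are illustrative, not the actual Flax modules):

```python
def gpt2_style_decoder(input_ids, attention_mask, position_ids,
                       encoder_hidden_states=None):
    # encoder_hidden_states happens to be the 4th positional argument
    return encoder_hidden_states


def bert_style_decoder(input_ids, attention_mask, token_type_ids,
                       position_ids, encoder_hidden_states=None):
    # an extra argument (token_type_ids) pushes it to the 5th slot,
    # so a positional call at slot 4 silently binds the wrong parameter
    return encoder_hidden_states
```

Calling `bert_style_decoder("ids", "mask", "pos", enc)` binds `enc` to `position_ids` with no error raised, which is exactly why binding by keyword is the safer choice.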
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17036/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17036",
"html_url": "https://github.com/huggingface/transformers/pull/17036",
"diff_url": "https://github.com/huggingface/transformers/pull/17036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17036.patch",
"merged_at": 1651489605000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17035/events
|
https://github.com/huggingface/transformers/pull/17035
| 1,222,601,645
|
PR_kwDOCUB6oc43Jxx2
| 17,035
|
[FlaxGenerate] Fix bug in `decoder_start_token_id`
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
In Python, `bool` is a subclass of `int`, and `False` has the value `0`. We observe this by calling the `__bool__` method of `0`:
```python
print((0).__bool__())
print((1).__bool__())
```
```
False
True
```
https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/generation_flax_utils.py#L266-L268
In the preceding lines of code, if `decoder_start_token_id` has the value `0` (valid):
- `if decoder_start_token_id` will be `False`
- `decoder_start_token_id` will be set to `self.config.decoder_start_token_id`
The correct behaviour is that if `decoder_start_token_id` has the value `0`, it should remain `0` rather than being overridden by `self.config.decoder_start_token_id`.
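A minimal sketch of the buggy pattern and the fix (the `resolve` helper is hypothetical, with `99` standing in for `self.config.decoder_start_token_id`):

```python
def resolve(decoder_start_token_id, config_default=99):
    # Buggy: a truthiness check treats a valid id of 0 as "not provided"
    buggy = decoder_start_token_id if decoder_start_token_id else config_default
    # Fixed: fall back to the config value only when the argument is None
    fixed = decoder_start_token_id if decoder_start_token_id is not None else config_default
    return buggy, fixed


print(resolve(0))     # (99, 0): the buggy branch silently overrides a valid 0
print(resolve(None))  # (99, 99): both branches agree when nothing was passed
```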
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17035/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17035",
"html_url": "https://github.com/huggingface/transformers/pull/17035",
"diff_url": "https://github.com/huggingface/transformers/pull/17035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17035.patch",
"merged_at": 1651482327000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17034/events
|
https://github.com/huggingface/transformers/pull/17034
| 1,222,597,320
|
PR_kwDOCUB6oc43Jw3E
| 17,034
|
Move test model folders
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, @stas00 Thank you for the feedbacks.\r\n\r\n- Regarding `difficult to read` (the long command): totally agreed! I had the same feeling, and thought might be a good idea to create a tiny python script and just call it. Otherwise, we can use what you proposed above (after some tests).\r\n- About `parents[4]`: thank you for the information!\r\n- I can check `TestCasePlus` later.\r\n\r\nI would prefer to merge as it is now, and work on these points in another PR. The main reason is that I ran the full suite of tests, the results look all good, and would like to merge with a version that has been fully tested :-)",
"Merged now (after rebase on main for the merged `flax_bert` and `yolos` PRs)."
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
As discussed offline, this PR moves model specific test folders (e.g. `tests/bert`) to `tests/models` (e.g. `tests/models/bert`)
In addition to the necessary changes on `import`, the following changes are made:
- In some test files regarding processors (tokenizer/feature extractor, etc.), change
```
SAMPLE_ROBERTA_CONFIG = os.path.join(os.path.dirname(os.path.abspath(__file__)), ".../fixtures/dummy-config.json")
```
to
```
SAMPLE_ROBERTA_CONFIG = get_tests_dir("fixtures/dummy-config.json")
```
(see [the commit](https://github.com/huggingface/transformers/pull/17034/commits/ee9956cf4181932c821d4c3c28677ac33660496a))
- The changes (**to be reviewed particularly**)
- `.circleci/config.yml`
- `.github/workflows/self-scheduled.yml`
- `src/transformers/commands/add_new_model.py`
- `src/transformers/commands/add_new_model_like.py`
- `utils/check_repo.py`
- `utils/notification_service.py`
- `utils/test_fetcher.py`
### Remarks:
- The `self-push` result is [here](https://github.com/huggingface/transformers/actions/runs/2256959215)
  - The slack report job has `Artifact was not found, job was probably canceled.`, but this issue has existed for some time. My plan is to continue the task of changing the self-push report format (and fix this issue)
- The `run_tests_flax_gpu` failure is just the same as in other runs. This is not in the scope of this PR.
- The scheduled CI (partial) result is [here](https://github.com/huggingface/transformers/actions/runs/2254833118). The report is available on Slack.
- On the GitHub Actions page, the jobs have name like `Model tests (models/albert, single-gpu-docker)`. It becomes a bit long (with `models/`).
- Same for the Slack report
```
0 | 0 | 3 | 0 | 0 | models_auto
```
- So far I have only run a subset of the model tests. From the results, I think the PR is ready. We can run the full suite of tests before merging.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17034/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17034",
"html_url": "https://github.com/huggingface/transformers/pull/17034",
"diff_url": "https://github.com/huggingface/transformers/pull/17034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17034.patch",
"merged_at": 1651581722000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17033/events
|
https://github.com/huggingface/transformers/issues/17033
| 1,222,392,661
|
I_kwDOCUB6oc5I3DtV
| 17,033
|
Multi GPU training crashes when running run_mlm_wwm.py
|
{
"login": "conan1024hao",
"id": 50416856,
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conan1024hao",
"html_url": "https://github.com/conan1024hao",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@conan1024hao \r\nSorry I don't know more details about multi gpu training, but you should make sure your code works well in single GPU.\r\nAnd then you could try code like this:\r\n```python\r\nexport CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7\r\n\r\npython -m torch.distributed.launch --nproc_per_node 8 run_mlm_wwm.py \\\r\n --model_type bert \\\r\n --tokenizer_name tokenizer.json \\\r\n --train_file mrph_train.txt \\\r\n --validation_file mrph_test.txt \\\r\n --train_ref_file ref_train.txt \\\r\n --validation_ref_file ref_test.txt \\\r\n --config_overrides=\"pad_token_id=2,hidden_size=512,num_attention_heads=8,num_hidden_layers=4\" \\\r\n --max_seq_length 128 \\\r\n --fp16 \\\r\n --per_device_train_batch_size 256 \\\r\n --per_device_eval_batch_size 256 \\\r\n --gradient_accumulation_steps 2 \\\r\n --max_steps 500000 \\\r\n --save_steps 1000 \\\r\n --save_total_limit 5 \\\r\n --do_train \\\r\n --do_eval \\\r\n```",
"@wlhgtc Thank you for your advice. There does exist some bug info which will not be printed when in multi GPU mode. However, after I making sure it can run in single GPU, this error still exist. I will keep this issue open for a solution in the future.",
"@wlhgtc An update. I found that multi GPU crash when running `add_chinese_references()`. I ran the whole script successfully after I made the dataset much more smaller. A temprory solution will be preprocessing and saving the tokenized dataset locally by CPU and then start training by multi GPU.",
"> @wlhgtc An update. I found that multi GPU crash when running `add_chinese_references()`. I ran the whole script successfully after I made the dataset much more smaller. A temprory solution will be preprocessing and saving the tokenized dataset locally by CPU and then start training by multi GPU.\r\n\r\nyeah and I met the same problem. This operation of \"add_column\" needs huge memory, related to some issue in `datasets` [this](https://github.com/huggingface/datasets/issues/1825).\r\nThere are two ways:\r\n1. preprocess ref files and merge all info(\"input_ids\",...,\"chinese_ref\") to a json file, avoid tokenized dataset all the time.\r\n2. `datasets.set_transform(tokenize_function)` to lazy load your dataset.\r\n\r\nHope it could help."
] | 1,651
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
### System Info
```shell
I am running this script on a cluster with 8 A100 cards.
gcc/11.2.0
python/3.8/3.8.13
cuda/11.3/11.3.1
cudnn/8.2/8.2.4
nccl/2.9/2.9.9-1
accelerate 0.7.1
datasets 2.1.0
huggingface-hub 0.5.1
protobuf 3.20.1
sentencepiece 0.1.96
tokenizers 0.12.1
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
```
### Who can help?
@wlhgtc Sorry to bother you again, please check this issue if you have time🙏.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Dataset example
My dataset is Chinese, Japanese, and Korean Wikipedia, and I generate ref files not only for Chinese but for all whole words.
```
mrph_train.txt
統一 獄中 者 組合
統一 獄中 者 組合 ( とういつ ごくちゅう しゃく みあい ) は 、 日本 の 刑務所 に 在監 して いる 受刑 者 に よって 結成 さ れた 組織 。 現在 、 日本 で 唯一 の 「 囚人 組合 」 組織 である 。
沿革 .
明治 時代 以降 、 日本 の 刑務所 で は 受刑 者 自身 が 行 刑 の 運営 に あたる 「 囚人 自治 」 を 認めて い ない 。 これ は 江戸 時代 の 伝馬 町 牢 屋敷 の ように 受刑 者 の 代表 である 牢 名主 が 牢獄 を 仕切る こと で 、 結果 と して 受刑 者 の 処遇 が 劣悪 化 した こと に 対する 反省 から 来て いる 。
ref_train.txt
[2, 4, 7]
[2, 4, 7, 10, 11, 12, 14, 15, 16, 17, 19, 20, 22, 23, 28, 31, 32, 35, 37, 39, 41, 45, 46, 48, 51, 53, 56, 59, 62, 66, 68, 71, 73, 74]
[2]
[2, 4, 6, 9, 12, 13, 17, 20, 26, 29, 30, 33, 35, 39, 40, 43, 46, 49, 51, 54, 58, 61, 62, 64, 68, 70, 71, 74, 77, 80, 81, 83, 87, 90, 92, 96, 99, 102, 104, 107, 108, 110, 112, 114, 116]
```
### Command
```shell
torchrun --nproc_per_node 8 run_mlm_wwm.py \
--model_type bert \
--tokenizer_name tokenizer.json \
--train_file mrph_train.txt \
--validation_file mrph_test.txt \
--train_ref_file ref_train.txt \
--validation_ref_file ref_test.txt \
--config_overrides="pad_token_id=2,hidden_size=512,num_attention_heads=8,num_hidden_layers=4" \
--max_seq_length 128 \
--fp16 \
--per_device_train_batch_size 256 \
--per_device_eval_batch_size 256 \
--gradient_accumulation_steps 2 \
--max_steps 500000 \
--save_steps 1000 \
--save_total_limit 5 \
--do_train \
--do_eval \
```
### Change in `run_mlm_wwm.py`
- To use my own tokenizer, I changed
```python3
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name, **tokenizer_kwargs
)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path, **tokenizer_kwargs
)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
```
to
```python3
tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
```
### Expected behavior
```shell
### Bug info
After loading the dataset, training should begin, but PyTorch crashes at this point.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380593 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380595 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380596 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380597 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380598 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380599 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2380600 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 1 (pid: 2380594) of binary: /local/9884269.1.gpua/work/bin/python3
Traceback (most recent call last):
File "/local/9884269.1.gpua/work/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/local/9884269.1.gpua/work/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
### Have tried
- use `gloo` for torch's backend instead of `nccl` ❌
- use torch1.10.0 instead of 1.11.0 ❌
- use V100 cluster instead of A100 ❌
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17033/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17032/events
|
https://github.com/huggingface/transformers/issues/17032
| 1,222,230,297
|
I_kwDOCUB6oc5I2cEZ
| 17,032
|
[Trainer]: Resume training with `save_strategy="epoch"` does not load RNG state
|
{
"login": "atreyasha",
"id": 35427332,
"node_id": "MDQ6VXNlcjM1NDI3MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/35427332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atreyasha",
"html_url": "https://github.com/atreyasha",
"followers_url": "https://api.github.com/users/atreyasha/followers",
"following_url": "https://api.github.com/users/atreyasha/following{/other_user}",
"gists_url": "https://api.github.com/users/atreyasha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atreyasha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atreyasha/subscriptions",
"organizations_url": "https://api.github.com/users/atreyasha/orgs",
"repos_url": "https://api.github.com/users/atreyasha/repos",
"events_url": "https://api.github.com/users/atreyasha/events{/privacy}",
"received_events_url": "https://api.github.com/users/atreyasha/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks for the fully reproducible example, which will become a new test in our CI :-) This was a bit painful to debug, but the PR above should solve the issue.",
"Thanks @sgugger for the quick response"
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.15.36-1-lts-x86_64-with-glibc2.33
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I provide a MWE for this issue by forking `transformers` and writing a failing test case. This can be reproduced via the steps below:
1. `git clone https://github.com/atreyasha/transformers`
2. Create a virtual environment and install the `[dev-torch]` extras
3. `pytest tests/trainer/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_randomness_from_epoch`
**Edit**: I removed the forked repository as the diff has been incorporated in the PR mentioned below.
Here is the relevant test snippet where I added `save_strategy="epoch"` and adjusted the checkpoint number to reflect the steps in one epoch:
```python
@require_torch_non_multi_gpu
def test_resume_training_with_randomness_from_epoch(self):
# This test will fail flakily for more than 1 GPUs since the result will be slightly more different
# TODO: investigate why it fails for 2 GPUs?
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
train_dataset = RegressionDataset(length=128)
eval_dataset = RegressionDataset()
config = RegressionModelConfig(a=0, b=2)
model = RegressionRandomPreTrainedModel(config)
tmp_dir = self.get_auto_remove_tmp_dir()
args = RegressionTrainingArguments(tmp_dir, save_strategy="epoch", learning_rate=0.1)
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()
(a, b) = trainer.model.a.item(), trainer.model.b.item()
model = RegressionRandomPreTrainedModel(config)
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train(resume_from_checkpoint=os.path.join(tmp_dir, "checkpoint-16"))
(a1, b1) = trainer.model.a.item(), trainer.model.b.item()
self.assertAlmostEqual(a, a1, delta=1e-8)
self.assertAlmostEqual(b, b1, delta=1e-8)
```
This should produce an error because the regression variables are not the same or similar:
```console
> self.assertAlmostEqual(a, a1, delta=1e-8)
E AssertionError: 2.0825276374816895 != 2.081479072570801 within 1e-08 delta (0.0010485649108886719 difference)
```
### Cause
The RNG state is only loaded when resuming a checkpoint that completed non-zero steps in the current epoch. If the checkpoint was saved at the end of the epoch, `steps_trained_in_current_epoch` would be `0` for the new epoch and the saved RNG state would not be loaded.
https://github.com/huggingface/transformers/blob/da47c264f9a881f5db5f6fbb59a30c95e428571f/src/transformers/trainer.py#L1423-L1435
### Possible fix
Check if the checkpoint to resume is a whole-number multiple of steps per epoch. If this is true, then load the RNG state once before entering the `epoch_iterator` loop above.
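A minimal sketch of that boundary check (a hypothetical helper, not the actual `Trainer` code):

```python
def resumed_at_epoch_boundary(resumed_global_step, steps_per_epoch):
    # At an exact epoch boundary, steps_trained_in_current_epoch would be 0,
    # so the usual in-epoch RNG restore is skipped; in that case the saved
    # RNG state should be loaded once before entering the epoch loop.
    return resumed_global_step > 0 and resumed_global_step % steps_per_epoch == 0


print(resumed_at_epoch_boundary(16, 16))  # True: checkpoint-16 with 16 steps/epoch
print(resumed_at_epoch_boundary(10, 16))  # False: mid-epoch resume restores RNG itself
```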
### Expected behavior
The test case above should pass, meaning that the regression variables should be the same or similar (within the delta).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17032/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17031/events
|
https://github.com/huggingface/transformers/issues/17031
| 1,222,190,494
|
I_kwDOCUB6oc5I2SWe
| 17,031
|
Training a tokenizer - add argument for preprocessing the input
|
{
"login": "pepi99",
"id": 45050191,
"node_id": "MDQ6VXNlcjQ1MDUwMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/45050191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepi99",
"html_url": "https://github.com/pepi99",
"followers_url": "https://api.github.com/users/pepi99/followers",
"following_url": "https://api.github.com/users/pepi99/following{/other_user}",
"gists_url": "https://api.github.com/users/pepi99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pepi99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pepi99/subscriptions",
"organizations_url": "https://api.github.com/users/pepi99/orgs",
"repos_url": "https://api.github.com/users/pepi99/repos",
"events_url": "https://api.github.com/users/pepi99/events{/privacy}",
"received_events_url": "https://api.github.com/users/pepi99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have also posted my question in the huggingface forum: https://discuss.huggingface.co/t/save-tokenizer-with-argument/17389",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
### Feature request
I am training my huggingface tokenizer on my own corpora, and I want to save it with a preprocessing step. That is, if I pass some text to it, I want it to apply the preprocessing and then tokenize the text, instead of explicitly preprocessing it before that. A good example is BERTweet: https://github.com/VinAIResearch/BERTweet and their `tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)` (here normalization=True indicates that the input will be preprocessed according to some function). I want the same to apply when I train a tokenizer with a custom preprocessing function. My code is:
```python
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer


def preprocess(text):
    return text


paths = [str(x) for x in Path('data').glob('*.txt')]
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=paths, vocab_size=50_000, min_frequency=2,
                special_tokens=['<s>', '<pad>', '</s>', '<unk>', '<mask>'])
tokenizer.save_model('CustomBertTokenizer')
```
Now, when I load the tokenizer:
```python
from transformers import RobertaTokenizerFast

sentence = 'Hey'
tokenizer = RobertaTokenizerFast.from_pretrained('CustomBertTokenizer')
tokenizer(sentence)
```
I want `sentence` to be preprocessed with the `preprocess` function and then tokenized. So I want to pass an argument like `preprocessing=True`. How can I do it?
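In the meantime, one workaround is to wrap the tokenizer in a callable that applies the preprocessing first (a sketch with dummy stand-ins, not a `transformers` API):

```python
def with_preprocessing(tokenize, preprocess):
    """Return a callable that preprocesses text before tokenizing it."""
    def wrapped(text, **kwargs):
        return tokenize(preprocess(text), **kwargs)
    return wrapped


# Dummy stand-ins for a real tokenizer and preprocessing function
preprocess = str.lower
tokenize = lambda s: s.split()

tok = with_preprocessing(tokenize, preprocess)
print(tok("Hey There"))  # ['hey', 'there']
```

The same wrapper works with a real `RobertaTokenizerFast` instance in place of the dummy `tokenize`, though the preprocessing step is then not saved with the tokenizer files.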
### Motivation
.
### Your contribution
.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17031/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17030/events
|
https://github.com/huggingface/transformers/pull/17030
| 1,222,139,266
|
PR_kwDOCUB6oc43IQnP
| 17,030
|
Added XLM onnx config
|
{
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello thanks for the PR it looks really clean!! 🤗 \r\n\r\nIf you have time, it could be nice to upload a converted XLM model to `ONNXConfig for all` organisation on Hugging Face's hub.",
"> Thanks for this clean PR @nandwalritik fire ! Apart from a small comment about the formatting changes, this LGTM :)\r\n> \r\n> Could you please confirm that the slow tests pass with:\r\n> \r\n> ```\r\n> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k \"xlm\"\r\n> ```\r\n@lewtun The test cases are successfully passing on running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k \"xlm\"`.",
"Thanks for checking the tests pass @nandwalritik ! Could you please rebase on `main` to account for a recent refactoring that was done to order the model names in `features.py` alphabetically?",
"Hey @nandwalritik are you struggling with the commits or rebasing a branch ?",
"> Hey @nandwalritik are you struggling with the commits or rebasing a branch ?\r\n\r\nI just saw that there were merge conflicts, since `main` was updated, so I rebased again. Did I rebased incorrectly?\r\nSteps which I followed to rebase:- \r\n* Fetch and merge upstream\r\n* git pull origin main\r\n* git checkout featureBranch\r\n* git rebase main\r\nAnd then I solved the merge conflicts manually wherever were required.",
"Yes it seems that there is more than your commits attached to this PR",
"Nice it seems to be better !\r\n",
"Thanks again for your contribution!"
] | 1,651
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added XLM OnnxConfig to make this model available for conversion.
@ChainYo
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- ~~[ ] Did you write any new necessary tests?~~
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17030/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17030",
"html_url": "https://github.com/huggingface/transformers/pull/17030",
"diff_url": "https://github.com/huggingface/transformers/pull/17030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17030.patch",
"merged_at": 1654003566000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17029
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17029/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17029/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17029/events
|
https://github.com/huggingface/transformers/pull/17029
| 1,222,138,692
|
PR_kwDOCUB6oc43IQgS
| 17,029
|
add `mobilebert` onnx configs
|
{
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @manandey thanks for the PR, it looks really clean. Did you try to convert one MobileBERT model with this config?\r\n\r\nIt could be nice to upload a converted MobileBERT model of your choice to the `ONNXConfig for all` if you have time.",
"Hi @lewtun, I tried to address the fixes you had suggested, and the tests are passing after running `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k \"mobilebert\" `. :) "
] | 1,651
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds MobileBert OnnxConfig to make this model available for conversion. #16308
## Who can review?
@lewtun @LysandreJik
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17029/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17029",
"html_url": "https://github.com/huggingface/transformers/pull/17029",
"diff_url": "https://github.com/huggingface/transformers/pull/17029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17029.patch",
"merged_at": 1652107013000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17028/events
|
https://github.com/huggingface/transformers/issues/17028
| 1,221,941,227
|
I_kwDOCUB6oc5I1Vfr
| 17,028
|
Adding a ISSUE_TEMPLATE for the translation of docs
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I closed the issue for the moment.",
"This issue was mentioned in a previous community issue #17404 for translating into Italian 🇮🇹.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
### Feature request
Users can create issues to translate the docs to several languages. This technique worked for the translation of [the course](https://github.com/huggingface/course/issues) (cc @lewtun).
Since we have [several docs to translate](https://github.com/huggingface/transformers/issues/15947) a template would be adequate.
Currently, the [ISSUE TEMPLATES](https://github.com/huggingface/transformers/tree/main/.github/ISSUE_TEMPLATE) of the Transformers library are .yml. However, I would prefer to write a PR with a MD (similar to the [one in the Course](https://github.com/huggingface/course/blob/main/.github/ISSUE_TEMPLATE/translations.md)). We do not need to ask info from the issue writer, maybe a field to ask if they are willing to take leadership of the translation they are proposing.
### Motivation
Allowing the users to create their own issues (and possibly take ownership/leadership) would allow for a faster translation.
### Your contribution
If this is accepted I can create a PR with the issue template.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17028/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17027/events
|
https://github.com/huggingface/transformers/pull/17027
| 1,221,849,766
|
PR_kwDOCUB6oc43HWEp
| 17,027
|
Add XLNet OnnxConfig
|
{
"login": "sijunhe",
"id": 11987277,
"node_id": "MDQ6VXNlcjExOTg3Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11987277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sijunhe",
"html_url": "https://github.com/sijunhe",
"followers_url": "https://api.github.com/users/sijunhe/followers",
"following_url": "https://api.github.com/users/sijunhe/following{/other_user}",
"gists_url": "https://api.github.com/users/sijunhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sijunhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sijunhe/subscriptions",
"organizations_url": "https://api.github.com/users/sijunhe/orgs",
"repos_url": "https://api.github.com/users/sijunhe/repos",
"events_url": "https://api.github.com/users/sijunhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/sijunhe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @sijunhe Nice PR, but could you rebase the branch to avoid getting all the recent commits on this PR ?",
"Hi @sijunhe thanks for this PR! Indeed as @ChainYo suggests, could you please rebase on `main` so that it is a bit easier to review the changes from your PR?",
"Oops! Sorry about that. Merged! @lewtun @ChainYo ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17027). All of your documentation changes will be reflected on that endpoint.",
"Any progress here? @lewtun ",
"Thanks for the review folks. \r\n\r\nI tried what @lewtun suggested about stripping the kwargs but I couldn't really make it work. \r\n`model.forward = forward_without_kwargs(model.forward)` means `forward_without_kwargs` would need to change the input signature of `model.forward` and I didn't know if python can do that. If I try to return a new function based on `model.forward`, the call then becomes a infinite recursion.\r\n\r\nInstead I took @patrickvonplaten's suggestion and replace `**kwargs` with a single `use_cache` arg.",
"> Thanks for the review folks.\r\n> \r\n> I tried what @lewtun suggested about stripping the kwargs but I couldn't really make it work. `model.forward = forward_without_kwargs(model.forward)` means `forward_without_kwargs` would need to change the input signature of `model.forward` and I didn't know if python can do that. If I try to return a new function based on `model.forward`, the call then becomes a infinite recursion.\r\n> \r\n> Instead I took @patrickvonplaten's suggestion and replace `**kwargs` with a single `use_cache` arg.\r\n\r\nSince it's an edge case I'm ok with this! Thanks for making the change @sijunhe - what do you think @LysandreJik @sgugger \r\nwe should add to the doc string that the param is deprecated as well I guess",
"No, the param is not documented since it's deprecated, and it should stay that way IMO.",
"If I'm not mistaken, can't we define a wrapper function to strip out `**kwargs` from the function signature? This is roughly what I had in mind to handle the forward pass:\r\n\r\n```python\r\nfrom transformers import AutoModel\r\nimport inspect\r\nimport functools\r\n\r\ndef forward_without_kwargs(forward):\r\n @functools.wraps(forward)\r\n def wrapper(*args, **kwargs):\r\n return forward(*args, **kwargs)\r\n\r\n # Override signature and strip out kwargs\r\n sig = inspect.signature(forward)\r\n sig = sig.replace(parameters=tuple(sig.parameters.values())[:-1])\r\n wrapper.__signature__ = sig\r\n\r\n return wrapper\r\n\r\n# Load an XLNet checkpoint\r\nmodel = AutoModel.from_pretrained(\"xlnet-base-cased\")\r\n# Has kwargs\r\ninspect.signature(model.forward)\r\n# Has no kwargs\r\nmodel.forward = forward_without_kwargs(model.forward)\r\ninspect.signature(model.forward)\r\n```\r\n\r\nThis function could live in `onnx/utils.py` and then be called within the `export_pytorch()` function by checking if `kwargs` is present in the model's forward signature and stripping it out if so.\r\n\r\nOf course, this would also need to be tested properly - just an idea :)",
"> If I'm not mistaken, can't we define a wrapper function to strip out `**kwargs` from the function signature? This is roughly what I had in mind to handle the forward pass:\r\n\r\nAlso fine with me",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
1. Add XLNet OnnxConfig to make this model available for conversion.
2. In order to make the onnx export work, I had to remove the `**kwargs` argument in the `forward` function of the `XLNet` models. The `**kwargs` argument already carried a deprecation warning anyway, and removing it didn't break any tests. Here is the reproduction and the error log of the ONNX export if the `**kwargs` argument doesn't get removed.
```
from typing import Mapping, OrderedDict
from pathlib import Path
from transformers.onnx import OnnxConfig, export
from transformers import AutoTokenizer, AutoModel, AutoConfig
class XLNetOnnxConfig(OnnxConfig):
@property
def inputs(self) -> Mapping[str, Mapping[int, str]]:
if self.task == "multiple-choice":
dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
else:
dynamic_axis = {0: "batch", 1: "sequence"}
return OrderedDict(
[
("input_ids", dynamic_axis),
("attention_mask", dynamic_axis),
("token_type_ids", dynamic_axis)
]
)
config = AutoConfig.from_pretrained("xlnet-base-cased")
onnx_config = XLNetOnnxConfig(config, task="sequence-classification")
onnx_path = Path("model.onnx")
base_model = AutoModel.from_pretrained("xlnet-base-cased")
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [1], in <module>
28 base_model = AutoModel.from_pretrained("xlnet-base-cased")
29 tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
---> 31 onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
File /opt/homebrew/lib/python3.9/site-packages/transformers/onnx/convert.py:116, in export(tokenizer, model, config, opset, output)
113 config.patch_ops()
115 # export can works with named args but the dict containing named args as to be last element of the args tuple
--> 116 export(
117 model,
118 (model_inputs,),
119 f=output.as_posix(),
120 input_names=list(config.inputs.keys()),
121 output_names=onnx_outputs,
122 dynamic_axes={name: axes for name, axes in chain(config.inputs.items(), config.outputs.items())},
123 do_constant_folding=True,
124 use_external_data_format=config.use_external_data_format(model.num_parameters()),
125 enable_onnx_checker=True,
126 opset_version=opset,
127 )
129 config.restore_ops()
131 return matched_inputs, onnx_outputs
File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/__init__.py:316, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
38 r"""
39 Exports a model into ONNX format. If ``model`` is not a
40 :class:`torch.jit.ScriptModule` nor a :class:`torch.jit.ScriptFunction`, this runs
(...)
312 model to the file ``f`` even if this is raised.
313 """
315 from torch.onnx import utils
--> 316 return utils.export(model, args, f, export_params, verbose, training,
317 input_names, output_names, operator_export_type, opset_version,
318 _retain_param_name, do_constant_folding, example_outputs,
319 strip_doc_string, dynamic_axes, keep_initializers_as_inputs,
320 custom_opsets, enable_onnx_checker, use_external_data_format)
File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:107, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
102 if use_external_data_format is not None:
103 warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
104 "PyTorch release. The code will work as it is False if models are not larger than 2GB, "
105 "Otherwise set to False because of size limits imposed by Protocol Buffers.")
--> 107 _export(model, args, f, export_params, verbose, training, input_names, output_names,
108 operator_export_type=operator_export_type, opset_version=opset_version,
109 do_constant_folding=do_constant_folding, example_outputs=example_outputs,
110 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs,
111 custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:724, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, use_external_data_format, onnx_shape_inference)
720 dynamic_axes = {}
721 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
723 graph, params_dict, torch_out = \
--> 724 _model_to_graph(model, args, verbose, input_names,
725 output_names, operator_export_type,
726 example_outputs, val_do_constant_folding,
727 fixed_batch_size=fixed_batch_size,
728 training=training,
729 dynamic_axes=dynamic_axes)
731 # TODO: Don't allocate a in-memory string for the protobuf
732 defer_weight_export = export_type is not ExportTypes.PROTOBUF_FILE
File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:493, in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, example_outputs, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes)
490 if isinstance(args, (torch.Tensor, int, float, bool)):
491 args = (args, )
--> 493 graph, params, torch_out, module = _create_jit_graph(model, args)
495 params_dict = _get_named_param_dict(graph, params)
497 graph = _optimize_graph(graph, operator_export_type,
498 _disable_torch_constant_prop=_disable_torch_constant_prop,
499 fixed_batch_size=fixed_batch_size, params_dict=params_dict,
500 dynamic_axes=dynamic_axes, input_names=input_names,
501 module=module)
File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:437, in _create_jit_graph(model, args)
435 return graph, params, torch_out, None
436 else:
--> 437 graph, torch_out = _trace_and_get_graph_from_model(model, args)
438 state_dict = _unique_state_dict(model)
439 params = list(state_dict.values())
File /opt/homebrew/lib/python3.9/site-packages/torch/onnx/utils.py:388, in _trace_and_get_graph_from_model(model, args)
381 def _trace_and_get_graph_from_model(model, args):
382
383 # A basic sanity check: make sure the state_dict keys are the same
384 # before and after running the model. Fail fast!
385 orig_state_dict_keys = _unique_state_dict(model).keys()
387 trace_graph, torch_out, inputs_states = \
--> 388 torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
389 warn_on_static_input_change(inputs_states)
391 if orig_state_dict_keys != _unique_state_dict(model).keys():
File /opt/homebrew/lib/python3.9/site-packages/torch/jit/_trace.py:1166, in _get_trace_graph(f, args, kwargs, strict, _force_outplace, return_inputs, _return_inputs_states)
1164 if not isinstance(args, tuple):
1165 args = (args,)
-> 1166 outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
1167 return outs
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/jit/_trace.py:127, in ONNXTracedModule.forward(self, *args)
124 else:
125 return tuple(out_vars)
--> 127 graph, out = torch._C._create_graph_by_tracing(
128 wrapper,
129 in_vars + module_state,
130 _create_interpreter_name_lookup_fn(),
131 self.strict,
132 self._force_outplace,
133 )
135 if self._return_inputs:
136 return graph, outs[0], ret_inputs[0]
File /opt/homebrew/lib/python3.9/site-packages/torch/jit/_trace.py:118, in ONNXTracedModule.forward.<locals>.wrapper(*args)
116 if self._return_inputs_states:
117 inputs_states.append(_unflatten(in_args, in_desc))
--> 118 outs.append(self.inner(*trace_inputs))
119 if self._return_inputs_states:
120 inputs_states[0] = (inputs_states[0], trace_inputs)
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1090, in Module._slow_forward(self, *input, **kwargs)
1088 recording_scopes = False
1089 try:
-> 1090 result = self.forward(*input, **kwargs)
1091 finally:
1092 if recording_scopes:
TypeError: forward() takes from 1 to 14 positional arguments but 15 were given
```
Fixes #16308
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #16308
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ChainYo for the OnnxConfig
@patrickvonplaten and @sgugger for the changes in `modeling_xlnet.py`
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17027/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17027",
"html_url": "https://github.com/huggingface/transformers/pull/17027",
"diff_url": "https://github.com/huggingface/transformers/pull/17027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17027.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17026/events
|
https://github.com/huggingface/transformers/issues/17026
| 1,221,830,139
|
I_kwDOCUB6oc5I06X7
| 17,026
|
Bert: relative_key position embedding causes error for long sequences
|
{
"login": "cedricrupb",
"id": 32569892,
"node_id": "MDQ6VXNlcjMyNTY5ODky",
"avatar_url": "https://avatars.githubusercontent.com/u/32569892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cedricrupb",
"html_url": "https://github.com/cedricrupb",
"followers_url": "https://api.github.com/users/cedricrupb/followers",
"following_url": "https://api.github.com/users/cedricrupb/following{/other_user}",
"gists_url": "https://api.github.com/users/cedricrupb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cedricrupb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cedricrupb/subscriptions",
"organizations_url": "https://api.github.com/users/cedricrupb/orgs",
"repos_url": "https://api.github.com/users/cedricrupb/repos",
"events_url": "https://api.github.com/users/cedricrupb/events{/privacy}",
"received_events_url": "https://api.github.com/users/cedricrupb/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.9.2
- Platform: Linux-5.14.15-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Copy paste script from below and run
2. Script:
```python
import torch
from transformers import BertConfig, BertModel
config = {
'hidden_size': 512,
'num_attention_heads': 8,
'position_embedding_type': 'relative_key',
'max_seq_length': 10,
'max_position_embeddings': 10
}
encoder_config = BertConfig(**config)
model = BertModel(encoder_config)
batch_size, src_len = 1, 11
x = torch.zeros(batch_size, src_len).int()
model(input_ids=x)
```
### Expected behavior
```shell
Since relative attention is used (Shaw et al.), the script should run without any errors.
However, the script breaks because at least two implementation details (in the PyTorch implementation) prevent this use case:
1. Token type ids are buffered for a specific max. length:
https://github.com/huggingface/transformers/blob/ede5e041911afed37c8284a980342d4a2625b1d5/src/transformers/models/bert/modeling_bert.py#L223
2. The distance in self-attention is not clipped to the maximum distance (as in Shaw et al.):
https://github.com/huggingface/transformers/blob/ede5e041911afed37c8284a980342d4a2625b1d5/src/transformers/models/bert/modeling_bert.py#L328
There is currently no apparent way to prevent this (especially when the model is trained).
```
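For context, a plain-Python sketch of the distance clipping the report asks for (this illustrates Shaw et al.'s scheme, not the actual `transformers` code; `max_rel` stands in for `config.max_position_embeddings - 1`):

```python
def clipped_relative_distance(seq_len, max_rel):
    # distance[q][k] = q - k, clipped to [-max_rel, max_rel] as in
    # Shaw et al. (2018), so sequences longer than the trained maximum
    # reuse the outermost relative-position embeddings instead of failing.
    return [
        [max(-max_rel, min(max_rel, q - k)) for k in range(seq_len)]
        for q in range(seq_len)
    ]
```

With clipping in place, indexing `distance + max_rel` into the relative-position embedding table stays in range for any sequence length.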
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17026/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17025/events
|
https://github.com/huggingface/transformers/issues/17025
| 1,221,826,334
|
I_kwDOCUB6oc5I05ce
| 17,025
|
force_words_ids not working
|
{
"login": "ZonglinY",
"id": 48231194,
"node_id": "MDQ6VXNlcjQ4MjMxMTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/48231194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZonglinY",
"html_url": "https://github.com/ZonglinY",
"followers_url": "https://api.github.com/users/ZonglinY/followers",
"following_url": "https://api.github.com/users/ZonglinY/following{/other_user}",
"gists_url": "https://api.github.com/users/ZonglinY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZonglinY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZonglinY/subscriptions",
"organizations_url": "https://api.github.com/users/ZonglinY/orgs",
"repos_url": "https://api.github.com/users/ZonglinY/repos",
"events_url": "https://api.github.com/users/ZonglinY/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZonglinY/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I suspect this is a version issue. The [constrained beam search](https://huggingface.co/blog/constrained-beam-search) wasn't introduced until 4.17 so if you are using an older version, that might be why it didn't work. Your code worked for me on 4.18 but not on 4.15. ",
"Thanks @sijunhe! I changed the version to 4.18 and it works. \r\nIn addition to using force_words_ids to make sure the generation contains some specific words, I'd like the forced words to appear in one sentence in a generation, at best in a specified order. Would you be so kind as to give me some advice on whether there's a parameter in the generate() function that can help me do this? Or do I have to modify the generate() function from its source code? Thanks!",
"> I'd like the forced words shown in one sentence in a generation\r\n\r\nI think this is the current behavior. As long as you are not using the Disjunctive Constraints, all the input_ids listed in `forced_word_id` should show up in the generation. \r\n\r\n > at best in a specified order\r\n\r\nI don't think the current `generation()` supports this yet. However, it is mentioned in the blog post that I linked above as future work, something like a `OrderedConstraint` that would inherit from the `Constraint` class. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nThis could be a problem or not based on how `Phrasalconstraints` is implemented. I am using transformers==4.18.\r\nI observe that the forced words do not always appear in my generations. My guess is that the chance of having forced words in the generation is limited by `num_beams`, as I find higher `num_beams` gives me more generations with forced words.\r\n\r\nI also notice that if a forced word is present in the prompt (or starting text), then basically it will not be forced to be generated again? Is that right? \r\n\r\nCan you please provide some insights?"
] | 1,651
| 1,657
| 1,654
|
NONE
| null |
### System Info
```shell
No exception occurs.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

m = GPT2LMHeadModel.from_pretrained('gpt2')
t = GPT2Tokenizer.from_pretrained('gpt2')
prompt = "I drink cocacola."
inputs = t(prompt, return_tensors="pt")
bad_words = t("alcohol", add_prefix_space=True, add_special_tokens=False).input_ids
force_words = t("very sweet", add_prefix_space=True, add_special_tokens=False).input_ids
print("bad_words: ", bad_words)
print("force_words: ", force_words)
gen = m.generate(**inputs, do_sample=True, temperature=0.9, num_beams=10, top_p=1.0,
                 bad_words_ids=[bad_words], force_words_ids=[force_words], max_length=100)
gen = t.batch_decode(gen)
# gen is a list of strings, so check the decoded text itself
if_exist_very = 'very' in gen[0]
if_exist_sweet = 'sweet' in gen[0]
print("gen: ", gen)
print("if_exist_very: ", if_exist_very)
print("if_exist_sweet: ", if_exist_sweet)
```
### Expected behavior
```shell
Hi,
I tried to use generate() with force_words_ids. But it does not work. bad_words_ids seems to work though.
Here are the outputs:
gen: ["I drink cocacola. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't drink coca. I don't"]
if_exist_very: False
if_exist_sweet: False
```
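Per the follow-up comments, constrained beam search (and with it `force_words_ids`) was only introduced in transformers 4.17, so on older releases the argument is not supported. A minimal stdlib version guard, as a sketch (the helper name is ours, not part of the library):

```python
def supports_force_words(transformers_version: str) -> bool:
    """True if the given transformers version string has constrained
    beam search (force_words_ids), which landed in 4.17."""
    major, minor = (int(part) for part in transformers_version.split(".")[:2])
    return (major, minor) >= (4, 17)

# Matches the observation in the comments: works on 4.18, not on 4.15.
print(supports_force_words("4.15.0"))  # False
print(supports_force_words("4.18.0"))  # True
```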
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17025/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17024/events
|
https://github.com/huggingface/transformers/pull/17024
| 1,221,783,741
|
PR_kwDOCUB6oc43HJ7P
| 17,024
|
Clean up vision tests
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
This is a follow-up of #16799. It took me way too long to realize I don't need to overwrite `test_attention_outputs` and `test_hidden_states_outputs` as I can just set the `seq_length` attribute of the ModelTester. 😂
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17024/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17024",
"html_url": "https://github.com/huggingface/transformers/pull/17024",
"diff_url": "https://github.com/huggingface/transformers/pull/17024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17024.patch",
"merged_at": 1651501738000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17023/events
|
https://github.com/huggingface/transformers/issues/17023
| 1,221,759,200
|
I_kwDOCUB6oc5I0pDg
| 17,023
|
wavlm s3prl emotion recognition
|
{
"login": "sciai-ai",
"id": 52277510,
"node_id": "MDQ6VXNlcjUyMjc3NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/52277510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sciai-ai",
"html_url": "https://github.com/sciai-ai",
"followers_url": "https://api.github.com/users/sciai-ai/followers",
"following_url": "https://api.github.com/users/sciai-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/sciai-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sciai-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sciai-ai/subscriptions",
"organizations_url": "https://api.github.com/users/sciai-ai/orgs",
"repos_url": "https://api.github.com/users/sciai-ai/repos",
"events_url": "https://api.github.com/users/sciai-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/sciai-ai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @sciai-ai! You can find the rough model conversion script is here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/wavlm/convert_wavlm_original_s3prl_checkpoint_to_pytorch.py\r\n\r\nThe command is:\r\n```bash\r\npython convert_wavlm_original_s3prl_checkpoint_to_pytorch.py \\\r\n --base_model_name \"microsoft/wavlm-base (depends on your base model)\" \\\r\n --config_path \"hf_model/config.json (should be modified by hand, probably just add id2label and label2id fields to the base WavLM config.json)\" \\\r\n --checkpoint_path \"path/to/s3prl/dev-best.ckpt\" \\\r\n --model_dump_path \"hf_model/output/dir/\"\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
Hi
I have trained a downstream emotion recognition task using s3prl WavLM, and a checkpoint has been saved as `dev-best.ckpt`. The inference setup in s3prl is not ideal: it requires batches of wav files split by session rather than a single wav file, which is what a production endpoint needs.
@anton-l can you please share how you ported the wav2vec2-er s3prl model for inference, as shown below?

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17023/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17022/events
|
https://github.com/huggingface/transformers/pull/17022
| 1,221,747,942
|
PR_kwDOCUB6oc43HDBs
| 17,022
|
update docs of length_penalty
|
{
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,660
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates the docs of `length_penalty`, fixing the issues mentioned in #16930.
cc @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17022/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17022",
"html_url": "https://github.com/huggingface/transformers/pull/17022",
"diff_url": "https://github.com/huggingface/transformers/pull/17022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17022.patch",
"merged_at": 1651482078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17021/events
|
https://github.com/huggingface/transformers/pull/17021
| 1,221,741,773
|
PR_kwDOCUB6oc43HB53
| 17,021
|
Added es version of language_modeling.mdx doc
|
{
"login": "jQuinRivero",
"id": 55513213,
"node_id": "MDQ6VXNlcjU1NTEzMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/55513213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jQuinRivero",
"html_url": "https://github.com/jQuinRivero",
"followers_url": "https://api.github.com/users/jQuinRivero/followers",
"following_url": "https://api.github.com/users/jQuinRivero/following{/other_user}",
"gists_url": "https://api.github.com/users/jQuinRivero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jQuinRivero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jQuinRivero/subscriptions",
"organizations_url": "https://api.github.com/users/jQuinRivero/orgs",
"repos_url": "https://api.github.com/users/jQuinRivero/repos",
"events_url": "https://api.github.com/users/jQuinRivero/events{/privacy}",
"received_events_url": "https://api.github.com/users/jQuinRivero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@omarespejel Could you confirm this is good to merge?"
] | 1,651
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes (#15947)
Added a Spanish version of the language_modeling.mdx documentation file.
### Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17021/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17021",
"html_url": "https://github.com/huggingface/transformers/pull/17021",
"diff_url": "https://github.com/huggingface/transformers/pull/17021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17021.patch",
"merged_at": 1652324696000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17020
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17020/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17020/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17020/events
|
https://github.com/huggingface/transformers/pull/17020
| 1,221,729,921
|
PR_kwDOCUB6oc43G_if
| 17,020
|
add torch.no_grad when in eval mode
|
{
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes (https://github.com/huggingface/transformers/issues/17019)
add `torch.no_grad` in some `run_xxx_no_trainer.py` file when in eval mode
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17020/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17020",
"html_url": "https://github.com/huggingface/transformers/pull/17020",
"diff_url": "https://github.com/huggingface/transformers/pull/17020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17020.patch",
"merged_at": 1651492160000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17019
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17019/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17019/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17019/events
|
https://github.com/huggingface/transformers/issues/17019
| 1,221,729,490
|
I_kwDOCUB6oc5I0hzS
| 17,019
|
Missing torch.no_grad in run_xxx_no_trainer.py
|
{
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
### System Info
```shell
None
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
missing `with torch.no_grad():` in some `run_xxx_no_trainer.py` files.
### Expected behavior
```shell
add `with torch.no_grad():`.
```
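A minimal sketch of the pattern the fix adds to the evaluation loops. The `nn.Linear` model and random batch below are hypothetical stand-ins for the real script's model and dataloader:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)   # stand-in for the real model
model.eval()
batch = torch.randn(3, 4)

# Wrapping the eval forward pass in torch.no_grad() skips building the
# autograd graph, reducing memory use and compute during evaluation.
with torch.no_grad():
    logits = model(batch)

print(logits.requires_grad)  # False: no graph was recorded
```

Note that `model.eval()` alone only changes layer behavior (dropout, batch norm); it does not disable gradient tracking, which is why the explicit `torch.no_grad()` context is still needed.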
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17019/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17018
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17018/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17018/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17018/events
|
https://github.com/huggingface/transformers/pull/17018
| 1,221,696,621
|
PR_kwDOCUB6oc43G4z1
| 17,018
|
Fix typo in RetriBertTokenizer docstring
|
{
"login": "mpoemsl",
"id": 37959974,
"node_id": "MDQ6VXNlcjM3OTU5OTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37959974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpoemsl",
"html_url": "https://github.com/mpoemsl",
"followers_url": "https://api.github.com/users/mpoemsl/followers",
"following_url": "https://api.github.com/users/mpoemsl/following{/other_user}",
"gists_url": "https://api.github.com/users/mpoemsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpoemsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpoemsl/subscriptions",
"organizations_url": "https://api.github.com/users/mpoemsl/orgs",
"repos_url": "https://api.github.com/users/mpoemsl/repos",
"events_url": "https://api.github.com/users/mpoemsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpoemsl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes typo in RetriBertTokenizer docstring.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17018/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17018",
"html_url": "https://github.com/huggingface/transformers/pull/17018",
"diff_url": "https://github.com/huggingface/transformers/pull/17018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17018.patch",
"merged_at": 1651492101000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17017
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17017/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17017/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17017/events
|
https://github.com/huggingface/transformers/pull/17017
| 1,221,695,601
|
PR_kwDOCUB6oc43G4m7
| 17,017
|
Add missing RetriBERT tokenizer tests
|
{
"login": "mpoemsl",
"id": 37959974,
"node_id": "MDQ6VXNlcjM3OTU5OTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37959974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpoemsl",
"html_url": "https://github.com/mpoemsl",
"followers_url": "https://api.github.com/users/mpoemsl/followers",
"following_url": "https://api.github.com/users/mpoemsl/following{/other_user}",
"gists_url": "https://api.github.com/users/mpoemsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpoemsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpoemsl/subscriptions",
"organizations_url": "https://api.github.com/users/mpoemsl/orgs",
"repos_url": "https://api.github.com/users/mpoemsl/repos",
"events_url": "https://api.github.com/users/mpoemsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpoemsl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @SaulLu, can you give me some guidance on how to proceed?\r\n\r\nThe CI is throwing the following error:\r\n```Make sure the names of these test files match the name of the module or utils they are testing, or adapt the constant `SPECIAL_MODULE_TO_TEST_MAP` in `utils/tests_fetcher.py` to add them. If your test file is triggered separately and is not supposed to be run by the regular CI, add it to the `EXPECTED_TEST_FILES_NEVER_TOUCHED` constant instead.```\r\n\r\nHowever, I think the naming is correct (`test_tokenization_retribert.py` tests `tokenization_retribert.py`) and adding it to neither constants makes sense to me. I'm also unable to reproduce this locally with `pytest`. Do you have any idea what triggered this?",
"The CI error seems to me to come from the fact that 3 days ago there was a re-organisation of the test folder (https://github.com/huggingface/transformers/pull/17034). \r\n\r\nTo solves this, I suggest to1) merge the latest changes to main in your branch and 2) move the tests you added to conform to the new organisation (`tests/retribert` -> `tests/models/retribert`).\r\n\r\nKeep me updated! :smile: \r\n\r\n",
"Hi @SaulLu, thank you very much for the reply. I think I've got it now! Can you take another look?",
"My pleasure! I would like to keep contributing and I was wondering if you could help me with a question related to that @SaulLu. \r\n\r\nI noticed that the `RetriBERT` model itself is missing test files as well and I would like to write those. How do I make sure that no one else is writing them concurrently? Do I open an issue or perhaps a WIP pull request? I have already checked that there currently is no open issue or pull request related to this."
] | 1,651
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Addresses issue [#16627](https://github.com/huggingface/transformers/issues/16627).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
@SaulLu
## Notes
1. There was no folder `tests/retribert/` yet, so I created one and put an `__init__.py` in it. Is there anything else I have to do for these tests to get picked up by CI?
2. `RetriBertTokenizer` is identical to `BertTokenizer`, so I mostly just duplicated `BertTokenizationTest`. Is that fine or should I rather a) write new tests from scratch or b) figure out a way to reuse the code in `BertTokenizationTest`?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17017/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17017",
"html_url": "https://github.com/huggingface/transformers/pull/17017",
"diff_url": "https://github.com/huggingface/transformers/pull/17017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17017.patch",
"merged_at": 1652274248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17016
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17016/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17016/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17016/events
|
https://github.com/huggingface/transformers/issues/17016
| 1,221,631,824
|
I_kwDOCUB6oc5I0J9Q
| 17,016
|
Optionally return past key values from generate
|
{
"login": "dblakely",
"id": 20539855,
"node_id": "MDQ6VXNlcjIwNTM5ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20539855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dblakely",
"html_url": "https://github.com/dblakely",
"followers_url": "https://api.github.com/users/dblakely/followers",
"following_url": "https://api.github.com/users/dblakely/following{/other_user}",
"gists_url": "https://api.github.com/users/dblakely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dblakely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dblakely/subscriptions",
"organizations_url": "https://api.github.com/users/dblakely/orgs",
"repos_url": "https://api.github.com/users/dblakely/repos",
"events_url": "https://api.github.com/users/dblakely/events{/privacy}",
"received_events_url": "https://api.github.com/users/dblakely/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hello, I have the exactly same issue! Could you please share an implementation of yours that return past key value when using generate?",
"> Hello, I have the exactly same issue! Could you please share an implementation of yours that return past key value when using generate?\r\n\r\nHi, what we do is something like this:\r\n\r\n```python\r\nclass CustomSampleMixin:\r\n \"\"\"Have an HF model return past_key_values from generate by inheriting this mixin. \r\n For example:\r\n ```\r\n class YourModel(CustomSampleMixin, T5ForConditionalGeneration)\r\n ```\r\n \"\"\"\r\n\r\n def sample(self, all_the_normal_args, output_past_key_values):\r\n \"\"\"Custom sample method that's mostly copied from Huggingface's generation_utils.sample method\r\n \"\"\"\r\n ...\r\n\r\n past_key_values = None\r\n while True:\r\n # forward pass to get next token\r\n outputs = self(**model_inputs, return_dict=True)\r\n past_key_values = outputs.past_key_values\r\n ...\r\n\r\n if return_dict_in_generate:\r\n return CustomGenerationOutput(\r\n sequences=input_ids,\r\n scores=scores,\r\n decoder_attentions=decoder_attentions,\r\n cross_attentions=cross_attentions,\r\n decoder_hidden_states=decoder_hidden_states,\r\n past_key_values=past_key_values if output_past_key_values else None,\r\n encoder_outputs=model_kwargs[\"encoder_outputs\"],\r\n )\r\n\r\n return input_ids\r\n```\r\n\r\nThen when we're using it:\r\n\r\n```python\r\npast_key_values = None\r\nwhile True:\r\n model_kwargs[\"past\"] = past_key_values # Note that HF calls it \"past\" in the `model_kwargs`\r\n outputs = model.generate(output_past_key_values=True, **model_kwargs)\r\n past_key_values = outputs.past_key_values\r\n # post-process outputs, post-process past_key_values\r\n ...\r\n```\r\n\r\nThis approach works for our purposes but does mean that we need to copy and maintain a lot of extra code from Huggingface. Plus, if you want to get `past_key_values` from `beam_sample`, `greedy_search`, etc instead of just `sample`, you have to make custom versions of each of those as well.",
"Thank you for sharing!",
"Hi @patrickvonplaten, what do you think about this idea?",
"I'd actually be fine with adding this to main generate, maybe already by default as soon as `return_dict_in_generate` is set to True, not sure if we necessarily need a new `output_...` input arg. @gante what do you think?",
"@patil-suraj what do you think here?",
"I'm fine with returning `past_key_values` from `generate` since we already allow to return other model outputs like attentions and hidden_states. And a new argument is not necessary IMO, since the model always returns past when `use_cache=True` (the default case), so a new argument to control this won't be necessary.",
"I agree, we can return `past_key_values` when `use_cache=True`. \r\n\r\nIt will, however, be an API change (adds a field to the output, which is an `OrderedDict` subclass), so any user iterating over the output will be impacted. I suspect it is a very uncommon use case, and thus the utility of exposing `past_key_values` exceeds potential pain points. WDYT @patil-suraj @patrickvonplaten? \r\n\r\nIf you agree, I can add this to my to-do list.",
"Think it's fine to extend the len of the tuple / `ModelOutput`, we don't consider this a breaking change. @patil-suraj do you want to give it a try to implement this? @gante you could then fully focus on finishing TF generate :heart_eyes: ",
"Will open a PR for it this week :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Associated PR hasn't really been merged. So this issue cannot be closed.",
"Hey folks 👋 \r\n\r\n#25086 was merged.\r\n\r\nIf you install from `main` and add `return_dict_in_generate=True` to `generate`, `past_key_values` will be part of the output, assuming your model is configured with `use_cache=True` (the default).\r\n\r\nYou can then pass `past_key_values` to `generate` to continue generating!"
] | 1,651
| 1,698
| 1,657
|
CONTRIBUTOR
| null |
### Feature request
The idea would be to optionally return `past_key_values` inside the generation objects (`SampleEncoderDecoderOutput`, etc). This could be controlled by a flag called `output_past_key_values` that's passed to `generate` and then forwarded to `sample`, etc.
### Motivation
Perhaps this is niche, but my team and I often need to obtain the past keys and values when generating in order to manipulate them a bit and then feed them back in for subsequent calls to `generate`. We currently do this with a custom version of `sample`, but this results in us having to copy and paste a lot of code. Would it be possible to allow `past_key_values` to be optionally returned by `generate`?
### Your contribution
If you all approve of the feature idea, I'd be able to implement it and submit a PR.
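The benefit of returning the cache is easiest to see with a toy model. The sketch below is not the Transformers API: `fake_forward` and the plain-list `past` are stand-ins for a real forward pass and the per-layer key/value tensors, purely to illustrate how a returned cache lets a second `generate` call resume where the first left off.

```python
# Toy illustration (hypothetical names): a "model" whose cache is a plain list.
def fake_forward(token, past):
    past = past + [token]          # the "cache" grows by one step per token
    next_token = (token + 1) % 10  # dummy next-token rule
    return next_token, past

def generate(prompt, steps, past=None):
    """Generate `steps` tokens, optionally resuming from an earlier cache."""
    past = past or []
    token = prompt
    for _ in range(steps):
        token, past = fake_forward(token, past)
    return token, past             # returning `past` is what enables continuation

tok, cache = generate(3, steps=2)                   # first call builds the cache
tok2, cache2 = generate(tok, steps=2, past=cache)   # second call resumes from it
```

Without the returned `past`, the second call would have to re-run the forward pass over all previously generated tokens.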
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17016/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17015
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17015/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17015/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17015/events
|
https://github.com/huggingface/transformers/pull/17015
| 1,221,510,515
|
PR_kwDOCUB6oc43GYoc
| 17,015
|
Result of new doc style with fixes
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
This PR shows the changes in Transformers that will be occasioned by the new release of `doc-builder` with some fixes in the style command.
Code quality will fail until the next release of `hf-doc-builder`; this PR will be merged just after it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17015/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17015",
"html_url": "https://github.com/huggingface/transformers/pull/17015",
"diff_url": "https://github.com/huggingface/transformers/pull/17015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17015.patch",
"merged_at": 1651268535000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17014
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17014/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17014/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17014/events
|
https://github.com/huggingface/transformers/pull/17014
| 1,221,506,834
|
PR_kwDOCUB6oc43GX2h
| 17,014
|
Replace dict/BatchEncoding instance checks by Mapping
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
We have several instance checks in the code base for `(dict, BatchEncoding)` (because `BatchEncoding` is a `UserDict` which is not an instance of `dict`). Those all miss the newer `BatchFeatures` (which is another `UserDict`) as was pointed out in #16983
In Accelerate we use the more general `Mapping` from `collections.abc` for those checks (which catches any kind of `dict`); this PR suggests doing the same here too.
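The underlying issue can be shown with a minimal stand-in class (`BatchEncodingLike` below is hypothetical; the real `BatchEncoding` lives in `transformers`): a `UserDict` subclass fails an `isinstance(x, dict)` check but passes an `isinstance(x, Mapping)` check.

```python
from collections.abc import Mapping
from collections import UserDict

class BatchEncodingLike(UserDict):
    """Stand-in for BatchEncoding/BatchFeature, which subclass UserDict."""
    pass

enc = BatchEncodingLike({"input_ids": [1, 2]})
assert not isinstance(enc, dict)   # UserDict is NOT a dict subclass
assert isinstance(enc, Mapping)    # but it IS a Mapping
assert isinstance({"a": 1}, Mapping)  # plain dicts are Mappings too
```

This is why a single `isinstance(x, Mapping)` check covers `dict`, `BatchEncoding`, and any future `UserDict`-based container, whereas `isinstance(x, (dict, BatchEncoding))` has to enumerate each class.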
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17014/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17014",
"html_url": "https://github.com/huggingface/transformers/pull/17014",
"diff_url": "https://github.com/huggingface/transformers/pull/17014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17014.patch",
"merged_at": 1651267253000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17013
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17013/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17013/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17013/events
|
https://github.com/huggingface/transformers/pull/17013
| 1,221,469,107
|
PR_kwDOCUB6oc43GQHp
| 17,013
|
Fix code examples for doctests
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the work.\r\n\r\nOther than the `>>>` things, there are 2 failures when I ran it.\r\n\r\nFor audio pipeline:\r\n```\r\nExpected:\r\n [{'label': 'calm', 'score': 0.1315},\r\n {'label': 'neutral', 'score': 0.1307},\r\n {'label': 'sad', 'score': 0.1274},\r\n {'label': 'fearful', 'score': 0.1261},\r\n {'label': 'happy', 'score': 0.1242}]\r\nGot:\r\n [{'score': 0.1315, 'label': 'calm'}, {'score': 0.1307, 'label': 'neutral'}, {'score': 0.1274, 'label': 'sad'}, {'score': 0.1261, 'label': 'fearful'}, {'score': 0.1242, 'label': 'happy'}]\r\n\r\n```\r\n(this is just a format issue I think)\r\n\r\nFor vision pipeline:\r\n```\r\nExpected:\r\n [{'score': 0.4403, 'label': 'lynx, catamount'}, {'score': 0.0343, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0321, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0235, 'label': 'Egyptian cat'}, {'score': 0.023, 'label': 'tiger cat'}]\r\nGot:\r\n [{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]\r\n```\r\n~~(This might be due to some random ops. I remembered I have similar situations before. I can take a look too.)~~\r\nI get deterministic results, which is on Ubuntu 20.04. It's not very clear why the result is different than the previous one in the doc. I also get the same results on my local Windows machine. Maybe we could just update the values, cc @sgugger?\r\n\r\n",
"I don't know why you ask me @ydshieh this is not my PR ;-) ",
"> I don't know why you ask me @ydshieh this is not my PR ;-)\r\n\r\nI know. Just to make sure you are also fine with my suggestion about `just update the values`. But I guess I should be more confident 😄 "
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
This PR fixes some code examples to pass the doctests for the pipeline and `AutoClass` tutorials.
I was unable to pass the audio code examples on my local machine because soundfile is not supported on M1 yet. I was able to run and reproduce the code snippets in Colab though so I think they should also pass on the CI.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17013/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17013",
"html_url": "https://github.com/huggingface/transformers/pull/17013",
"diff_url": "https://github.com/huggingface/transformers/pull/17013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17013.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17012
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17012/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17012/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17012/events
|
https://github.com/huggingface/transformers/pull/17012
| 1,221,369,942
|
PR_kwDOCUB6oc43F7VM
| 17,012
|
Add a check on config classes docstring checkpoints
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
A follow-up for #16900: add a test to make sure all config classes have at least one valid checkpoint (unless explicitly specified to ignore).
Here `valid` only means the format is valid, i.e. of the form `[XXX](https://huggingface.co/XXX)` with `XXX` being any string.
A stricter verification could be implemented by trying to load the config, but maybe that is a bit too much?
Also fix 2 more config classes without valid checkpoint.
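The format check described above can be sketched with a single regex (function and pattern names here are assumptions, not the actual implementation in `utils/`): the link text and the path after `https://huggingface.co/` must be the same string, which a backreference captures.

```python
import re

# A checkpoint reference is "valid" if it looks like [XXX](https://huggingface.co/XXX),
# with the same XXX in both places (enforced via the \1 backreference).
_CKPT_RE = re.compile(r"\[(.+?)\]\(https://huggingface\.co/\1\)")

def has_valid_checkpoint(docstring: str) -> bool:
    return _CKPT_RE.search(docstring) is not None

has_valid_checkpoint(
    "See [bert-base-uncased](https://huggingface.co/bert-base-uncased)."
)  # valid: text and URL path match
has_valid_checkpoint("See bert-base-uncased on the Hub.")  # invalid: no link
```

A mismatched pair such as `[a](https://huggingface.co/b)` would also fail, which is the point of the backreference.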
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17012/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17012",
"html_url": "https://github.com/huggingface/transformers/pull/17012",
"diff_url": "https://github.com/huggingface/transformers/pull/17012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17012.patch",
"merged_at": 1651308047000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17011
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17011/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17011/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17011/events
|
https://github.com/huggingface/transformers/pull/17011
| 1,221,158,986
|
PR_kwDOCUB6oc43FLNW
| 17,011
|
Revert "Updating variable names. (#16445)"
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
This reverts commit 4f3a14e3c235c8b6b8cd2f5bc448a0cffacddf61.
# What does this PR do?
Broke `main`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17011/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17011",
"html_url": "https://github.com/huggingface/transformers/pull/17011",
"diff_url": "https://github.com/huggingface/transformers/pull/17011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17011.patch",
"merged_at": 1651249605000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17010
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17010/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17010/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17010/events
|
https://github.com/huggingface/transformers/issues/17010
| 1,220,982,789
|
I_kwDOCUB6oc5IxrgF
| 17,010
|
[Data2Vec] Incompatibility with the original implementation
|
{
"login": "arxyzan",
"id": 38841793,
"node_id": "MDQ6VXNlcjM4ODQxNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/38841793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arxyzan",
"html_url": "https://github.com/arxyzan",
"followers_url": "https://api.github.com/users/arxyzan/followers",
"following_url": "https://api.github.com/users/arxyzan/following{/other_user}",
"gists_url": "https://api.github.com/users/arxyzan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arxyzan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arxyzan/subscriptions",
"organizations_url": "https://api.github.com/users/arxyzan/orgs",
"repos_url": "https://api.github.com/users/arxyzan/repos",
"events_url": "https://api.github.com/users/arxyzan/events{/privacy}",
"received_events_url": "https://api.github.com/users/arxyzan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patrickvonplaten @NielsRogge ",
"> Also I noticed that the encoders used for HF Data2Vec are not exactly the same models I mentioned above and there are some minor differences. The reason I'm wondering this, is because I was trying to copy the weights from your models to apply them to my own models in [my own repo](https://github.com/AryanShekarlaban/data2vec-pytorch) and found out that I can't due to those incompatibilities.\r\n\r\nCan you elaborate on this? We converted the weights from the original repo, so they should be equivalent to the original implementation.",
"Hello @NielsRogge, sorry for the delayed response.\r\nSeems like I made a mistake regarding mismatch between architectures! Perhaps I loaded incorrect models using AutoModel. Today I reviewed all three models thoroughly and found no mismatch.\r\nBut how about my first question? What was your intent behind reimplementing 3 models for data2vec while they're exactly the same as RoBERTa, BEiT and Wav2Vec2 which are already present in the transformers package?\r\n\r\nThanks,\r\nAryan\r\n",
"Regarding the fact that some minor differences exist in model architectures, what I attempted to do is that I tried to load weights directly from data2vec checkpoints to existing encoder models as below:\r\n1. Loaded state dict from `facebook/data2vec-text-base` checkpoint into `roberta-base` and all keys matched successfully.\r\n2. Loaded state dict from `facebook/data2vec-vision-base` checkpoint into `microsoft/beit-base-patch16-224` and got IncompatibleKeys warning:\r\n`\r\n_IncompatibleKeys(missing_keys=['encoder.relative_position_bias.relative_position_bias_table', 'encoder.relative_position_bias.relative_position_index', 'layernorm.weight', 'layernorm.bias'], unexpected_keys=['pooler.layernorm.weight', 'pooler.layernorm.bias', 'encoder.layer.0.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.0.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.1.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.1.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.2.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.2.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.3.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.3.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.4.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.4.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.5.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.5.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.6.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.6.attention.attention.relative_position_bias.relative_position_index', 
'encoder.layer.7.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.7.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.8.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.8.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.9.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.9.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.10.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.10.attention.attention.relative_position_bias.relative_position_index', 'encoder.layer.11.attention.attention.relative_position_bias.relative_position_bias_table', 'encoder.layer.11.attention.attention.relative_position_bias.relative_position_index'])\r\n`\r\n\r\n\r\n3. Loaded state dict from `facebook/data2vec-audio-base` checkpoint into `facebook/wav2vec2-base` and got IncompatibleKeys warning:\r\n`\r\n_IncompatibleKeys(missing_keys=['encoder.pos_conv_embed.conv.bias', 'encoder.pos_conv_embed.conv.weight_g', 'encoder.pos_conv_embed.conv.weight_v'], unexpected_keys=['feature_extractor.conv_layers.1.layer_norm.weight', 'feature_extractor.conv_layers.1.layer_norm.bias', 'feature_extractor.conv_layers.2.layer_norm.weight', 'feature_extractor.conv_layers.2.layer_norm.bias', 'feature_extractor.conv_layers.3.layer_norm.weight', 'feature_extractor.conv_layers.3.layer_norm.bias', 'feature_extractor.conv_layers.4.layer_norm.weight', 'feature_extractor.conv_layers.4.layer_norm.bias', 'feature_extractor.conv_layers.5.layer_norm.weight', 'feature_extractor.conv_layers.5.layer_norm.bias', 'feature_extractor.conv_layers.6.layer_norm.weight', 'feature_extractor.conv_layers.6.layer_norm.bias', 'encoder.pos_conv_embed.layers.0.conv.weight', 'encoder.pos_conv_embed.layers.0.conv.bias', 'encoder.pos_conv_embed.layers.1.conv.weight', 
'encoder.pos_conv_embed.layers.1.conv.bias', 'encoder.pos_conv_embed.layers.2.conv.weight', 'encoder.pos_conv_embed.layers.2.conv.bias', 'encoder.pos_conv_embed.layers.3.conv.weight', 'encoder.pos_conv_embed.layers.3.conv.bias', 'encoder.pos_conv_embed.layers.4.conv.weight', 'encoder.pos_conv_embed.layers.4.conv.bias'])\r\n`\r\n@NielsRogge ",
"For BEiT, the problem was that there are some differences in the config; In order to load weights with no errors these values must be set in config:\r\n```python\r\n...\r\nbeit_config = BeitConfig(use_relative_position_bias=False,\r\n use_mean_pooling=False, \r\n use_shared_relative_position_bias=True)\r\n``` \r\nSo in terms of architecutre, `transformers.models.BEiTModel` and `transformers.models.Data2VecVisionModel` are the same, but for `Wav2Vec2Model `vs `Data2VecAudioModel` it's not the same case! they're actually different in terms of design so I'd have to use another technique to transfer weights from `Data2VecAudio` to `Wav2Vec2`.\r\nI know that the reason is that the same case exists in `fairseq` too. There are some design differences between data2vec-audio and wav2vec2, so in order to transfer weights from there you had to make those changes to the `Data2VecAudioModel` codes.",
"> But how about my first question? What was your intent behind reimplementing 3 models for data2vec while they're exactly the same as RoBERTa, BEiT and Wav2Vec2 which are already present in the transformers package?\r\n\r\nWe're planning to add `Data2VecAudioForPretraining` etc, which is why the implementations were duplicated. ",
"Cool! looking forward to that.\r\nThanks for putting your time replying. \r\nI'm closing this issue."
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
Hello dear HuggingFace team!
According to the original paper, data2vec is not an actual model but more of a self-distilling training strategy. It takes an encoder model as backbone (RoBERTa for text, BEiT for vision, wav2vec for audio as mentioned in the paper) and pre-trains the encoder (student) to predict representations extracted from the EMA instance of the encoder (teacher), meaning the encoder can be any Transformer-based encoder model.
After pretraining, in order to finetune or get predictions, the encoder itself is what matters and data2vec is of no use! (as seen [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec#finetuning-data2vec-text-on-glue))
I reviewed the data2vec implementation in HF transformers and noticed that you decided to use static encoders (BERT for text, BEiT for vision and wav2vec2 for audio), so, for example, using Data2VecVisionModel for any task would be the same as using BEiTModel.
I also noticed that the encoders used for HF Data2Vec are not exactly the same models I mentioned above; there are some minor differences. The reason I'm asking is that I was trying to copy the weights from your models to apply them to my own models in [my own repo](https://github.com/AryanShekarlaban/data2vec-pytorch) and found out that I can't due to those incompatibilities.
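A compatibility check like this can be sketched at the level of state-dict keys, without downloading any checkpoints. The helper and the toy dictionaries below are hypothetical (real state dicts would come from `model.state_dict()` or `torch.load`); the key names only imitate the kind of relative-position-bias mismatch between BEiT and data2vec-vision:

```python
# Hypothetical sketch: compare two state dicts key-by-key to spot
# incompatibilities before attempting to copy weights. The toy dicts
# stand in for real model state_dicts.

def compare_state_dicts(source, target):
    """Return (missing, unexpected) keys when loading `source` into `target`."""
    source_keys = set(source)
    target_keys = set(target)
    missing = sorted(target_keys - source_keys)      # target params absent from the checkpoint
    unexpected = sorted(source_keys - target_keys)   # checkpoint params with no place in the target
    return missing, unexpected

# Toy example imitating a BEiT vs. data2vec-vision key mismatch:
checkpoint = {"encoder.layer.0.attention.relative_position_bias_table": 1,
              "pooler.layernorm.weight": 2}
model = {"encoder.relative_position_bias.relative_position_bias_table": 1,
         "layernorm.weight": 2}

missing, unexpected = compare_state_dicts(checkpoint, model)
print(missing)
print(unexpected)
```

Empty `missing` and `unexpected` lists would indicate the two architectures are weight-compatible.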
So my question is: what was the purpose behind all this? And did you train all those models yourselves, or copy the weights from the original checkpoints in fairseq?
Regards,
Aryan
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17010/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17009
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17009/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17009/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17009/events
|
https://github.com/huggingface/transformers/issues/17009
| 1,220,982,647
|
I_kwDOCUB6oc5Ixrd3
| 17,009
|
[Data2Vec] Incompatibility with the original implementation
|
{
"login": "arxyzan",
"id": 38841793,
"node_id": "MDQ6VXNlcjM4ODQxNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/38841793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arxyzan",
"html_url": "https://github.com/arxyzan",
"followers_url": "https://api.github.com/users/arxyzan/followers",
"following_url": "https://api.github.com/users/arxyzan/following{/other_user}",
"gists_url": "https://api.github.com/users/arxyzan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arxyzan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arxyzan/subscriptions",
"organizations_url": "https://api.github.com/users/arxyzan/orgs",
"repos_url": "https://api.github.com/users/arxyzan/repos",
"events_url": "https://api.github.com/users/arxyzan/events{/privacy}",
"received_events_url": "https://api.github.com/users/arxyzan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,651
| 1,651
| 1,651
|
NONE
| null |
Hello dear HuggingFace team!
According to the original paper, data2vec is not an actual model but more of a self-distilling training strategy. It takes an encoder model as backbone (RoBERTa for text, BEiT for vision, wav2vec for audio as mentioned in the paper) and pre-trains the encoder (student) to predict representations extracted from the EMA instance of the encoder (teacher), meaning the encoder can be any Transformer-based encoder model.
After pretraining, in order to finetune or get predictions, the encoder itself is what matters and data2vec is of no use! (as seen [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec#finetuning-data2vec-text-on-glue))
I reviewed the data2vec implementation in HF transformers and noticed that you decided to use static encoders (BERT for text, BEiT for vision and wav2vec2 for audio), so, for example, using Data2VecVisionModel for any task would be the same as using BEiTModel.
I also noticed that the encoders used for HF Data2Vec are not exactly the same models I mentioned above; there are some minor differences. The reason I'm asking is that I was trying to copy the weights from your models to apply them to my own models in [my own repo](https://github.com/AryanShekarlaban/data2vec-pytorch) and found out that I can't due to those incompatibilities.
So my question is: what was the purpose behind all this? And did you train all those models yourselves, or copy the weights from the original checkpoints in fairseq?
Regards,
Aryan
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17009/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17008
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17008/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17008/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17008/events
|
https://github.com/huggingface/transformers/pull/17008
| 1,220,653,957
|
PR_kwDOCUB6oc43DX2k
| 17,008
|
Add Data2Vec for Vision in TF
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I used these steps for styling: https://github.com/huggingface/transformers/pull/16255#discussion_r830432539. \r\n\r\nOn my end, when I am running `make style` I get the following:\r\n\r\n```\r\n...\r\n\r\ndoc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source\r\nOverwriting content of src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py.\r\nOverwriting content of src/transformers/models/luke/modeling_luke.py.\r\nOverwriting content of src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py.\r\nOverwriting content of src/transformers/models/tapas/modeling_tapas.py.\r\nOverwriting content of src/transformers/models/tapas/modeling_tf_tapas.py.\r\nOverwriting content of src/transformers/models/data2vec/modeling_tf_data2vec_vision.py.\r\nOverwriting content of src/transformers/models/t5/modeling_flax_t5.py.\r\nOverwriting content of src/transformers/models/t5/modeling_t5.py.\r\nOverwriting content of src/transformers/models/t5/modeling_tf_t5.py.\r\nOverwriting content of src/transformers/models/rag/modeling_rag.py.\r\nOverwriting content of src/transformers/models/rag/retrieval_rag.py.\r\nOverwriting content of src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py.\r\nOverwriting content of src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py.\r\nOverwriting content of src/transformers/models/encoder_decoder/modeling_encoder_decoder.py.\r\nOverwriting content of src/transformers/models/xlm/modeling_xlm.py.\r\nOverwriting content of src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py.\r\nOverwriting content of src/transformers/models/imagegpt/modeling_imagegpt.py.\r\nOverwriting content of src/transformers/models/longformer/modeling_longformer.py.\r\nOverwriting content of src/transformers/models/xlnet/modeling_xlnet.py.\r\nOverwriting content of src/transformers/models/xlnet/modeling_tf_xlnet.py.\r\nOverwriting content of 
src/transformers/models/gpt2/modeling_tf_gpt2.py.\r\nOverwriting content of src/transformers/models/prophetnet/modeling_prophetnet.py.\r\nOverwriting content of src/transformers/models/realm/modeling_realm.py.\r\nOverwriting content of src/transformers/models/openai/modeling_tf_openai.py.\r\nOverwriting content of src/transformers/models/openai/modeling_openai.py.\r\nOverwriting content of docs/source/en/model_doc/luke.mdx.\r\nOverwriting content of docs/source/en/model_doc/bert-generation.mdx.\r\nCleaned 27 files!\r\n```\r\n\r\nThe [CI console](https://app.circleci.com/pipelines/github/huggingface/transformers/38997/workflows/f017efba-6409-4669-835d-e463043d3ea0/jobs/436193) is also suggestive of this change. **Should I add these cleaned files to the PR?**\r\n\r\n",
"Make sure you update `hf-doc-builder` to its latest version with `pip install hf-doc-builder -U`. We had a new release last week to fix some bugs in the example styling in our docs :-)",
"> Make sure you update `hf-doc-builder` to its latest version with `pip install hf-doc-builder -U`. We had a new release last week to fix some bugs in the example styling in our docs :-)\r\n\r\n@sgugger I see that `hf-doc-builder` is already up to date on my end (`Version: 0.3.0`).",
"You'll probably need to rebase your PR on master to get the changes in the setup for the quality check to pass (otherwise the CI uses the cached installed libraries).",
"> You'll probably need to rebase your PR on master to get the changes in the setup for the quality check to pass (otherwise the CI uses the cached installed libraries).\r\n\r\nThanks, @sgugger! I first rebased my main with the upstream and then merged the main into the PR branch. And then I force-pushed. Let's see. ",
"Hi, @sayakpaul \r\n\r\n- You can ignore `Model templates runner / run_tests_templates (pull_request)`. (You can even cancel that workflow run)\r\n- I have just merged a (big) PR that moved model test folders, like `tests/bert` to `tests/models/bert`. When you have time, could you\r\n - pull the changes (from upstream main) to your main\r\n - **rebase** your working branch on the main (better to avoid using `merge` in this case, I believe)\r\n - move your new test file `tests/data2vec/test_modeling_tf_data2vec_vision.py` from `tests/data2vec` to `tests/models/data2vec`\r\n - You might need to fix a few lines of `import`. \r\n - For example, `from ..test_configuration_common import ConfigTester` --> `from ...test_configuration_common import ConfigTester`\r\n\r\nplease? 🙏 Thank you!\r\n",
"@ydshieh after rebasing, won't I need to merge the main into my PR branch so that it has the full effect?",
"> @ydshieh after rebasing, won't I need to merge the main into my PR branch so that it has the full effect?\r\n\r\nIn order to incorporate the changes in main into your PR branch, you can either use `merge` or `rebase`. I am in favor of using `rebase` as it might be cleaner in some cases (won't introduce a lot of file changes).\r\n\r\nOnce you have latest changes from upstream main in your local main, you can **checkout to your PR branch**, and do something like\r\n\r\n```\r\ngit rebase main\r\n```\r\n(sometimes there might be conflicts to fix, but I think there won't be conflict in this case)\r\n\r\nThen you will have to force push.",
"@ydshieh oops looks like I have made things worse instead of making them work. I am not sure how I can revert to a mergeable state now. Any suggestion?",
"> @ydshieh oops looks like I have made things worse instead of making them work. I am not sure how I can revert to a mergeable state now. Any suggestion?\r\n\r\nLet me give it a try - I am definitely NOT a Git Pro 😢 \r\n(No guarantee though - hope 🤞 )\r\n\r\nCould you let me know what steps you have done, please?",
"I just followed your suggestions:\r\n\r\n* Rebased my main with the upstream main.\r\n* Checked out to my PR branch and ran `git rebase main`. \r\n* Made the necessary changes you suggested regarding moving the test file. \r\n\r\nI think you mistakenly made a push to my PR branch which is what may have caused the unnecessary changes to reflect in this PR. \r\n\r\n\r\n\r\nI am happy to work on the necessary steps per your suggestions too. ",
"> I just followed your suggestions:\r\n> \r\n> * Rebased my main with the upstream main.\r\n> * Checked out to my PR branch and ran `git rebase main`.\r\n> * Made the necessary changes you suggested regarding moving the test file.\r\n> \r\n> I think you mistakenly made a push to my PR branch which is what may have caused the unnecessary changes to reflect in this PR.\r\n> \r\n> \r\n> \r\n> I am happy to work on the necessary steps per your suggestions too.\r\n\r\nHi. That is the merge of my PR into main. I didn't merge that one into your PR. I am not sure why it appears like this and also confused. (Maybe it's somehow related to the merges have done). Let me try to figure out a way. Sorry about this.",
"> Hi. That is the merge of my PR into main. I didn't merge that one into your PR. I am not sure why it appears like this and also confused. (Maybe it's somehow related to the merges have done). Let me try to figure out a way. Sorry about this.\r\n\r\n@ydshieh here's what I am thinking:\r\n\r\n* Revert to https://github.com/huggingface/transformers/pull/17008/commits/247a6c53dc6a64664ff58c862319116aff359d9c. \r\n* Follow [your suggestions](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) again. \r\n* Push the changes. ",
"I am going to force push and see if it works 🙏 ",
"Force push where?",
"To this PR, if you are OK with it. Please let me know, thanks.",
"> > Hi. That is the merge of my PR into main. I didn't merge that one into your PR. I am not sure why it appears like this and also confused. (Maybe it's somehow related to the merges have done). Let me try to figure out a way. Sorry about this.\r\n> \r\n> @ydshieh here's what I am thinking:\r\n> \r\n> * Revert to [247a6c5](https://github.com/huggingface/transformers/commit/247a6c53dc6a64664ff58c862319116aff359d9c).\r\n> * Follow [your suggestions](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) again.\r\n> * Push the changes.\r\n\r\nHi, I think we need to get to\r\n\r\n```\r\n[fix: tests due to removal of to_2tuple().](https://github.com/huggingface/transformers/pull/17008/commits/a0714e210c4f7da0f1321e50259ccf4fb40020ef)\r\n```\r\n\r\nand see what we can do to incorporate the main, that's what I am trying now.",
"Actually the commit I was referring to, it had the bits and pieces (like styling nits of the upstream files). ",
"It may just work, let's see @ydshieh ",
"If we revert to `[247a6c5](https://github.com/huggingface/transformers/commit/247a6c53dc6a64664ff58c862319116aff359d9c).`, we will still get a lot of changed file showing up in your PR.\r\n\r\nI am able to get something cleaner like\r\n\r\n\r\n<img width=\"399\" alt=\"Screenshot 2022-05-03 162316\" src=\"https://user-images.githubusercontent.com/2521628/166472233-75ccdf82-c653-4512-9131-9708a0015962.png\">\r\n\r\nby just revert to `ddd6b1c`, which I think it is the close to your PR with the changes from main in the clean way.\r\nLet me know if you want to try it by yourself, otherwise I can push to this PR.",
"Sounds good. Let me know the steps. ",
"Here is what I would try\r\n(always a good idea to have a backup)\r\n```\r\ngit checkout -b tf-data2vec-backup\r\n```\r\nThen\r\n```\r\ngit checkout tf-data2vec\r\ngit reset --hard ddd6b1c3\r\ngit push --force-with-lease\r\n```\r\nOnce the commit history is clean on PR page, we can see if there is any style issues to fix. By that time, things should be easy.",
"@ydshieh fingers crossed 🤞",
"@sgugger, this is the step I need someone from the 🤗. team to perform. After that, I will remove `from_pt=True` from the code and will test.\r\n\r\n> TF weight uploading to Hub (to be done by someone from the 🤗 team)",
"Will look into this. It's just for the checkpoint `facebook/data2vec-vision-base-ft1k` right? Or is there another one?",
"> Will look into this. It's just for the checkpoint `facebook/data2vec-vision-base-ft1k` right? Or is there another one?\r\n\r\nThere are four in the Facebook organization: [data2vec-vision](https://huggingface.co/models?sort=downloads&search=data2vec-vision)",
"@ydshieh forgot to say: THANK YOU VERY MUCH.",
"Currently, the follow checkpoint crashes (after the two suggestions I have made on the PR):\r\n```\r\nfrom transformers import TFAutoModel\r\n\r\ntf_model = TFAutoModel.from_pretrained(\"facebook/data2vec-vision-base\", from_pt=True)\r\n```\r\nSame for \"facebook/data2vec-vision-large\", therefore I can't convert those checkpoints (and it looks like something needs fixing?)\r\n\r\nHere is the traceback:\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n/tmp/ipykernel_2749758/3199004601.py in <module>\r\n----> 1 tf_model = TFAutoModel.from_pretrained(\"facebook/data2vec-vision-large\", from_pt=True)\r\n\r\n~/git/transformers/src/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 444 elif type(config) in cls._model_mapping.keys():\r\n 445 model_class = _get_model_class(config, cls._model_mapping)\r\n--> 446 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)\r\n 447 raise ValueError(\r\n 448 f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\\n\"\r\n\r\n~/git/transformers/src/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1794 \r\n 1795 # Load from a PyTorch checkpoint\r\n-> 1796 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)\r\n 1797 \r\n 1798 # we might need to extend the variable scope for composite models\r\n\r\n~/git/transformers/src/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)\r\n 122 logger.info(f\"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters\")\r\n 123 \r\n--> 124 return load_pytorch_weights_in_tf2_model(\r\n 125 tf_model, pt_state_dict, 
tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys\r\n 126 )\r\n\r\n~/git/transformers/src/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)\r\n 153 \r\n 154 if tf_inputs is not None:\r\n--> 155 tf_model(tf_inputs, training=False) # Make sure model is built\r\n 156 # Adapt state dict - TODO remove this and update the AWS weights files instead\r\n 157 # Convert old format to new format if needed from a PyTorch state_dict\r\n\r\n~/anaconda3/lib/python3.9/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)\r\n 65 except Exception as e: # pylint: disable=broad-except\r\n 66 filtered_tb = _process_traceback_frames(e.__traceback__)\r\n---> 67 raise e.with_traceback(filtered_tb) from None\r\n 68 finally:\r\n 69 del filtered_tb\r\n\r\n~/git/transformers/src/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)\r\n 381 main_input = fn_args_and_kwargs.pop(main_input_name, None)\r\n 382 unpacked_inputs = input_processing(func, self.config, main_input, **fn_args_and_kwargs)\r\n--> 383 return func(self, **unpacked_inputs)\r\n 384 \r\n 385 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. 
This\r\n\r\n~/git/transformers/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py in call(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict, training)\r\n 893 ) -> Union[tuple, TFData2VecVisionModelOutputWithPooling]:\r\n 894 \r\n--> 895 outputs = self.data2vec_vision(\r\n 896 pixel_values=pixel_values,\r\n 897 bool_masked_pos=bool_masked_pos,\r\n\r\n~/git/transformers/src/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)\r\n 381 main_input = fn_args_and_kwargs.pop(main_input_name, None)\r\n 382 unpacked_inputs = input_processing(func, self.config, main_input, **fn_args_and_kwargs)\r\n--> 383 return func(self, **unpacked_inputs)\r\n 384 \r\n 385 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This\r\n\r\n~/git/transformers/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py in call(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict, training)\r\n 712 embedding_output = self.embeddings(pixel_values, bool_masked_pos, training=training)\r\n 713 \r\n--> 714 encoder_outputs = self.encoder(\r\n 715 embedding_output,\r\n 716 head_mask=head_mask,\r\n\r\n~/git/transformers/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py in call(self, hidden_states, head_mask, output_attentions, output_hidden_states, return_dict)\r\n 625 layer_head_mask = head_mask[i] if head_mask is not None else None\r\n 626 \r\n--> 627 relative_position_bias = self.relative_position_bias() if self.relative_position_bias is not None else None\r\n 628 layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions, relative_position_bias)\r\n 629 \r\n\r\nValueError: Exception encountered when calling layer \"encoder\" (type TFData2VecVisionEncoder).\r\n\r\nThe first argument to `Layer.call` must always be passed.\r\n\r\nCall arguments received:\r\n • 
hidden_states=tf.Tensor(shape=(3, 197, 1024), dtype=float32)\r\n • head_mask=['None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None']\r\n • output_attentions=False\r\n • output_hidden_states=False\r\n • return_dict=True\r\n```\r\n\r\nI have converted `facebook/data2vec-vision-base-ft1k` and am doing `facebook/data2vec-vision-large-ft1k` now.",
"@sgugger thanks for providing the update. Let me check from my end once. "
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
This PR adds the data2vec [1] model for vision in TensorFlow.
**Todo**:
~* Fix cross-loading.~
~* Add integration test.~
~* Add remaining tests.~
~* Rest of the files remaining for the PR.~
~* TF weight uploading to Hub (to be done by someone from the 🤗 team)~
## Notes
* This PR does not add `...ForSegmentation`. This can be done in a separate PR I think.
* Locally, I ran the tests using: `RUN_SLOW=1 python -m pytest tests/data2vec/test_modeling_tf_data2vec_vision.py`.
## References
[1] data2vec: https://arxiv.org/abs/2202.03555
@sgugger @Rocketknight1 @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17008/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17008",
"html_url": "https://github.com/huggingface/transformers/pull/17008",
"diff_url": "https://github.com/huggingface/transformers/pull/17008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17008.patch",
"merged_at": 1651666105000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17007
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17007/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17007/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17007/events
|
https://github.com/huggingface/transformers/pull/17007
| 1,220,461,917
|
PR_kwDOCUB6oc43CroZ
| 17,007
|
use scale=1.0 in floats_tensor called in speech model testers
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for fixing all the tests!"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
Fix the failure of `Speech2TextModelTest.test_pt_tf_model_equivalence`. This is caused by
https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/tests/speech_to_text/test_modeling_speech_to_text.py#L134-L136
where the `input_features` get a large magnitude of `1e2` (from `self.vocab_size=99`).
(probably this happens because we just copied the `input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)` from NLP models?)
I changed it to `scale=1.0`, but I need @patrickvonplaten's expertise **to make sure there was no particular reason to use `self.vocab_size`.**
### Details
Current speech model testers have
```
def prepare_config_and_inputs(self):
input_values = floats_tensor([self.batch_size, self.seq_length], self.vocab_size)
```
The `self.vocab_size` argument is the `scale`, so the generated dummy `input_values` has a magnitude of `self.vocab_size`.
For `Speech2TextModelTester`, we have `vocab_size=99`.
Furthermore, `Speech2TextEncoder` has
https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L705
and from the tester's `hidden_size=16` we get `embed_scale=4`.
The `input_features` goes through the conv layer(s) and is scaled:
https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L767-L768
On `CPU`, however, the conv layers of PT/TF give a difference with a magnitude of `1e-7` for inputs of ones. With the above 2 scalings (`99 × 4`), this error grows to `4e-5`, and the PT/TF equivalence test fails.
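As a back-of-the-envelope sanity check (a hypothetical sketch, not part of the test suite), the amplification described above can be reproduced numerically:

```python
import math

# Hypothetical check of the error amplification described above: a per-element
# PT/TF conv discrepancy of ~1e-7 is multiplied by the dummy-input magnitude
# (vocab_size=99) and by embed_scale = sqrt(hidden_size).
base_diff = 1e-7              # conv-layer PT/TF discrepancy on CPU for unit inputs
input_scale = 99              # floats_tensor(..., self.vocab_size) magnitude
embed_scale = math.sqrt(16)   # sqrt(hidden_size=16) = 4.0

amplified = base_diff * input_scale * embed_scale
print(amplified)              # ~4e-5, above a typical 1e-5 equivalence tolerance

with_unit_scale = base_diff * 1.0 * embed_scale
print(with_unit_scale)        # ~4e-7, comfortably below tolerance
```

This is why switching the tester to `scale=1.0` keeps the PT/TF difference well inside the test tolerance.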
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17007/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17007",
"html_url": "https://github.com/huggingface/transformers/pull/17007",
"diff_url": "https://github.com/huggingface/transformers/pull/17007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17007.patch",
"merged_at": 1651236093000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17006
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17006/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17006/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17006/events
|
https://github.com/huggingface/transformers/issues/17006
| 1,220,200,277
|
I_kwDOCUB6oc5IusdV
| 17,006
|
Transfomers Pipline: Batching does not work for Sentence-Pair Text Classification
|
{
"login": "maximilianreimer",
"id": 6999824,
"node_id": "MDQ6VXNlcjY5OTk4MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6999824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximilianreimer",
"html_url": "https://github.com/maximilianreimer",
"followers_url": "https://api.github.com/users/maximilianreimer/followers",
"following_url": "https://api.github.com/users/maximilianreimer/following{/other_user}",
"gists_url": "https://api.github.com/users/maximilianreimer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximilianreimer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximilianreimer/subscriptions",
"organizations_url": "https://api.github.com/users/maximilianreimer/orgs",
"repos_url": "https://api.github.com/users/maximilianreimer/repos",
"events_url": "https://api.github.com/users/maximilianreimer/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximilianreimer/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I am on version `4.18.0` and the following code works with and without GPU\r\n\r\n```\r\nfrom transformers import pipeline\r\nmodel = pipeline(\r\n task=\"text-classification\", \r\n model=\"roberta-large-mnli\", # does also happen for our own fine-tunes roberta models\r\n device=0 # does also happen on CPU\r\n) \r\nn_samples = 1000\r\nsample = ['The earth is not flat.', 'Physicists will find it shocking, but there are plenty of people around the world who genuinely believe the Earth is flat...']\r\ndata = [sample]* n_samples\r\n\r\nmodel([sample], padding=True)\r\nmodel(data, batch_size=2, padding=True)\r\n```",
"Hi @maximilianreimer,\r\n\r\nAs @sijunhe said, can you try upgrading your `transformers` version just because a lot has been done to improve the batching in more recent version.\r\n\r\nFor everyone here also another nice to have is to change the format from `list` to a `generator` which will iterate over results without having to maintain the list of all results (it also allows you to store results as they come in, allowing you to recover if sample number 10_014 fails for instance instead of having to rerun the whole thing).\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\nmodel = pipeline(\r\n task=\"text-classification\",\r\n model=\"roberta-large-mnli\", # does also happen for our own fine-tunes roberta models\r\n device=0, # does also happen on CPU\r\n)\r\nn_samples = 1000\r\nsample = [\r\n \"The earth is not flat.\",\r\n \"Physicists will find it shocking, but there are plenty of people around the world who genuinely believe the Earth is flat...\",\r\n]\r\n\r\n\r\ndef data():\r\n for i in range(n_samples):\r\n yield sample\r\n\r\n\r\nout = model([sample], padding=True)\r\nfor out in model(data(), batch_size=2, padding=True):\r\n print(out)\r\n```\r\n\r\nJust a nice to have but should definitely help when processing large amounts of data.\r\n",
"Thanks for the helpful comments. Updating seems to fix the issue for me as well!",
"Closing this then."
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.6.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Using GPU in script?: yes / no (Both)
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run:
```python
from transformers import pipeline
model = pipeline(
task="text-classification",
model="roberta-large-mnli", # does also happen for our own fine-tunes roberta models
device=0 # does also happen on CPU
)
n_samples = 10000
sample = ['The earth is not flat.', 'Physicists will find it shocking, but there are plenty of people around the world who genuinely believe the Earth is flat...']
data = [sample]* n_samples
model([sample]) # works
model(data, batch_size=1) # results in the following error
```
Output
```
Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-2-62b2d5a9a338>](https://localhost:8080/#) in <module>()
6
7 model([sample])
----> 8 model(data, batch_size=1)
14 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py](https://localhost:8080/#) in forward(self, hidden_states)
347 def forward(self, hidden_states):
348 hidden_states = self.dense(hidden_states)
--> 349 hidden_states = self.intermediate_act_fn(hidden_states)
350 return hidden_states
351
RuntimeError: CUDA out of memory. Tried to allocate 5.34 GiB (GPU 0; 14.76 GiB total capacity; 9.35 GiB already allocated; 4.00 GiB free; 9.37 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Expected behavior
```shell
The pipeline should run even with a very large list of inputs if the batch size is low enough.
```
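The fix recommended in the comments is to stream inputs through a generator and consume results lazily, so memory stays bounded by the batch size rather than the dataset size. A minimal, library-free sketch of that pattern (the `fake_pipeline` stand-in below is hypothetical and only mimics the streaming behavior, not the real transformers API):

```python
def fake_pipeline(inputs, batch_size):
    """Hypothetical stand-in for a transformers pipeline: consumes an
    iterable lazily, batch_size items at a time, yielding one result
    per input instead of materializing the whole output list."""
    batch = []
    for item in inputs:
        batch.append(item)
        if len(batch) == batch_size:
            for x in batch:
                yield {"label": "NEUTRAL", "input": x}
            batch = []
    for x in batch:  # flush the final partial batch
        yield {"label": "NEUTRAL", "input": x}

def data(n_samples, sample):
    # Generator over inputs: nothing is held in memory up front.
    for _ in range(n_samples):
        yield sample

results = [out for out in fake_pipeline(data(5, "The earth is not flat."), batch_size=2)]
```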
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17006/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17005
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17005/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17005/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17005/events
|
https://github.com/huggingface/transformers/pull/17005
| 1,220,110,496
|
PR_kwDOCUB6oc43BbNm
| 17,005
|
Added option to modify config parameter used by Tesseract in LayoutLMV2/LayoutXLM Processor
|
{
"login": "kelvinAI",
"id": 10686779,
"node_id": "MDQ6VXNlcjEwNjg2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kelvinAI",
"html_url": "https://github.com/kelvinAI",
"followers_url": "https://api.github.com/users/kelvinAI/followers",
"following_url": "https://api.github.com/users/kelvinAI/following{/other_user}",
"gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions",
"organizations_url": "https://api.github.com/users/kelvinAI/orgs",
"repos_url": "https://api.github.com/users/kelvinAI/repos",
"events_url": "https://api.github.com/users/kelvinAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/kelvinAI/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nSorry for the late reply. I'll review now.",
"LayoutLMv2FeatureExtractor constructor must be modified to accept tesseract_config instead of tess_config for this change. Hang on, I'll work on it.",
"Tried to rebase and merge with upstream but it is now changing too many files. I've created a fresh new PR here https://github.com/huggingface/transformers/pull/17733\r\n"
] | 1,651
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Gives users the option to set the config parameter used by Tesseract when performing feature extraction, e.g. changing PSM levels during transcription by passing '--psm 10' to the `config` parameter when invoking `image_to_data`.
It has been shown that changing the PSM value greatly influences the end result of LayoutLMV2/XLM, and the best PSM value differs depending on the document formatting. Refer: [PSM](https://github.com/tesseract-ocr/tesseract/issues/434)
```python
pytesseract.image_to_data(image, lang=lang, output_type="dict", config="--psm 10")
```
Users can now set the tesseract config parameter during Processor initialization, like so:
```python
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", ocr_lang="eng", tesseract_config="--psm 5")
```
## Before submitting
- [❌] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [✔️] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [❌] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [✔️] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [❌] Did you write any new necessary tests?
Feel free to modify as needed.
Thanks
@NielsRogge @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17005/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17005",
"html_url": "https://github.com/huggingface/transformers/pull/17005",
"diff_url": "https://github.com/huggingface/transformers/pull/17005.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17005.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17004
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17004/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17004/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17004/events
|
https://github.com/huggingface/transformers/pull/17004
| 1,219,823,201
|
PR_kwDOCUB6oc43AYYi
| 17,004
|
Add translating guide
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the comments @sgugger! \r\n\r\nI moved the `_toctree.yml` tip to a part where it would be more relevant. Please let me know if you would prefer it in another part 🤗",
"LGTM!"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Add a translation guide so users have all the information they need to (1) contribute to a language that's already being translated, or (2) start their own issue for translating into a new language.
# Next step
Create a Translation Template for new issues (for example, [this template for Portuguese](https://github.com/huggingface/transformers/issues/16824) with all the docs that should be translated). I can do this.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17004/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17004",
"html_url": "https://github.com/huggingface/transformers/pull/17004",
"diff_url": "https://github.com/huggingface/transformers/pull/17004.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17004.patch",
"merged_at": 1651358618000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17003
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17003/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17003/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17003/events
|
https://github.com/huggingface/transformers/issues/17003
| 1,219,735,615
|
I_kwDOCUB6oc5Is7A_
| 17,003
|
BertEmbeddings import missing for Torch in __init__ file
|
{
"login": "seanbenhur",
"id": 43300345,
"node_id": "MDQ6VXNlcjQzMzAwMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43300345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanbenhur",
"html_url": "https://github.com/seanbenhur",
"followers_url": "https://api.github.com/users/seanbenhur/followers",
"following_url": "https://api.github.com/users/seanbenhur/following{/other_user}",
"gists_url": "https://api.github.com/users/seanbenhur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanbenhur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanbenhur/subscriptions",
"organizations_url": "https://api.github.com/users/seanbenhur/orgs",
"repos_url": "https://api.github.com/users/seanbenhur/repos",
"events_url": "https://api.github.com/users/seanbenhur/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanbenhur/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Feel free to open a PR to fix this :)",
"I think in that case it's the TF and Jax versions which shouldn't add the embeddings to the main init. Those are not meant to be accessed from the main init, but to be accessed from their respective modules:\r\n\r\n```py\r\n>>> from transformers.models.bert.modeling_bert import BertEmbeddings\r\n```\r\n\r\nWe took that decision so that the internal may be modified without breaking to public root API. These have very rarely been updated, however.\r\n\r\nRemoving the Flax and TF imports from the init isn't an option either, unfortunately, as it would result in a breaking change for users that do use it.",
"Agree with you, Can I close this issue then?",
"Yes, thanks for opening it in the first place!"
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`from transformers import BertEmbeddings`
raises
```
----> 1 from transformers import BertEmbeddings
ImportError: cannot import name 'BertEmbeddings' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
```
### Expected behavior
[BertEmbeddings](https://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/models/bert/modeling_bert.py#L182) is a class in the BERT modeling file that can be used for creating BERT embeddings. The class is not imported in the [ __init__ ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/__init__.py) file, so the user cannot import it from the top-level package, while the TF and Flax versions are imported.

It should be imported.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17003/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17002
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17002/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17002/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17002/events
|
https://github.com/huggingface/transformers/issues/17002
| 1,219,496,112
|
I_kwDOCUB6oc5IsAiw
| 17,002
|
HuggingFace/BigBird RuntimeError: Internal: src/sentencepiece_processor.cc
|
{
"login": "jtfields",
"id": 45608735,
"node_id": "MDQ6VXNlcjQ1NjA4NzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/45608735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jtfields",
"html_url": "https://github.com/jtfields",
"followers_url": "https://api.github.com/users/jtfields/followers",
"following_url": "https://api.github.com/users/jtfields/following{/other_user}",
"gists_url": "https://api.github.com/users/jtfields/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jtfields/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtfields/subscriptions",
"organizations_url": "https://api.github.com/users/jtfields/orgs",
"repos_url": "https://api.github.com/users/jtfields/repos",
"events_url": "https://api.github.com/users/jtfields/events{/privacy}",
"received_events_url": "https://api.github.com/users/jtfields/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I'm not too sure what's the root cause of this, but I've created a [google colab](https://colab.research.google.com/drive/1x12Bc6aDU9sLOCI99bGKXh9zUQzKDcaR?usp=sharing) reproducing the bug - it seems like the vocab file path is not being passed in properly when it's not in the VOCAB_FILE_NAME mapping as well as a potential workaround. \r\n\r\n@jtfields do you think the workaround could work? Instantize your BigBirdTokenizer outside of Raj in your local, then save the pretrained tokenizer into a directory and copy the files into a directory on Raj, then instantize from that directory instead and but with an AutoTokenizer (cells 7-9).",
"Thank you for the fast response on this bug. I'm trying the workaround you provided but finding that Spacy is not very cooperative due to the length of my essays. I first had to develop a workaround for the max_length of 100,000 and now have a handle_filename_too_long error. Is there another option besides spacy which isn't so restrictive?",
"I was able to tokenize the essays in Google Colab and copy these files to Marquette's Raj supercomputer. However, this bug should stay open until a fix is available to run the tokenization files locally.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
I'm able to run the HuggingFace/BigBird code for binary classification on a proprietary essay dataset in Google Colab with no errors. I wanted to access more powerful GPUs and converted the code from .ipynb to .py to run on Marquette's supercomputer (called Raj). Raj does not allow me to access the roberta model remotely, so I changed the first line of code below to the second for local access (and also copied the bigbird-roberta-base files to Raj):
tokenizer = BigBirdTokenizer.from_pretrained('google/bigbird-roberta-base')
tokenizer = BigBirdTokenizer.from_pretrained('<my user path on Raj>/bigbird-roberta-base')
However, this gives me the following error:
RuntimeError: Internal: src/sentencepiece_processor.cc(890) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
I did confirm that sentencepiece 0.1.96 is installed and I'm using Python version 3.6.8.
Any help or suggestions are appreciated!
```
### Who can help?
@ydshieh, @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BigBirdTokenizer, BigBirdForSequenceClassification

print('Loading tokenizer...')
tokenizer = BigBirdTokenizer.from_pretrained('<my user path>/bigbird-roberta-base')

# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
# Record the length of each sequence (in terms of BERT tokens).
lengths = []

print('Tokenizing comments...')

# For every sentence...
for sen in train.data:
    # Report progress.
    if (len(input_ids) % 1000) == 0:
        print('  Read {:,} comments.'.format(len(input_ids)))
    # `encode` will:
    #   (1) Tokenize the sentence.
    #   (2) Prepend the `[CLS]` token to the start.
    #   (3) Append the `[SEP]` token to the end.
    #   (4) Map tokens to their IDs.
    encoded_sent = tokenizer.encode(
        str(sen),                 # Sentence to encode. Added str due to error.
        add_special_tokens=True,  # Add '[CLS]' and '[SEP]'
    )
    # Add the encoded sentence to the list.
    input_ids.append(encoded_sent)
    # Record the non-truncated length.
    lengths.append(len(encoded_sent))

print('DONE.')
print('   Min length: {:,} tokens'.format(min(lengths)))
print('   Max length: {:,} tokens'.format(max(lengths)))
print('Median length: {:,} tokens'.format(np.median(lengths)))
```
### Expected behavior
```shell
The first print statement should generate:
Tokenizing comments...
Read 0 comments.
DONE.
454 comments
The second group of three print statements should generate:
Min length: 90 tokens
Max length: 995 tokens
Median length: 826.5 tokens
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17002/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17001
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17001/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17001/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17001/events
|
https://github.com/huggingface/transformers/issues/17001
| 1,219,413,498
|
I_kwDOCUB6oc5IrsX6
| 17,001
|
Text Generation for decoder
|
{
"login": "XinhaoMei",
"id": 58569453,
"node_id": "MDQ6VXNlcjU4NTY5NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/58569453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XinhaoMei",
"html_url": "https://github.com/XinhaoMei",
"followers_url": "https://api.github.com/users/XinhaoMei/followers",
"following_url": "https://api.github.com/users/XinhaoMei/following{/other_user}",
"gists_url": "https://api.github.com/users/XinhaoMei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XinhaoMei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XinhaoMei/subscriptions",
"organizations_url": "https://api.github.com/users/XinhaoMei/orgs",
"repos_url": "https://api.github.com/users/XinhaoMei/repos",
"events_url": "https://api.github.com/users/XinhaoMei/events{/privacy}",
"received_events_url": "https://api.github.com/users/XinhaoMei/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@patrickvonplaten seems like the best person to answer your question!",
"Hey @XinhaoMei \r\n\r\n> However, when doing inference such as doing greedy search and beam search, the encoder outouts can't be passed to the decoder if I want use the generate function or greedy_search function.\r\n\r\nI think they can be passed to the decoder. Could you post a codesnippet that shows what doesn't work for your case? :-)",
"Hi @patrickvonplaten, thanks for your quick reply.\r\nHere are my code for the defination of the model:\r\n`\r\n\r\n def __init__(self, config):\r\n super().__init__()\r\n\r\n self.encoder = set_encoder(config)\r\n\r\n self.tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n if config.hugging_face.pretrain:\r\n decoder_config = BertConfig(is_decoder=True,\r\n add_cross_attention=True)\r\n self.decoder = AutoModelForCausalLM.from_pretrained(\"bert-base-uncased\", config=decoder_config)\r\n else:\r\n decoder_config = BertConfig(is_decoder=True,\r\n add_cross_attention=True,\r\n num_attention_heads=4,\r\n num_hidden_layers=2)\r\n self.decoder = AutoModelForCausalLM.from_config(decoder_config)\r\n\r\n self.pad_token = self.tokenizer.pad_token_id\r\n\r\n self.loss_func = nn.CrossEntropyLoss(ignore_index=self.pad_token)\r\n\r\n def generate_greedy(self, audio_src):\r\n audio_feats = self.encoder(audio_src)\r\n audio_feats = audio_feats.transpose(0, 1)\r\n input_ids = torch.zeros((audio_feats.shape[0], 1)).long().to(self.decoder.device)\r\n input_ids[:, 0] = 101\r\n outputs = self.decoder.generate(input_ids=input_ids,\r\n encoder_hidden_states=audio_feats,\r\n do_sample=False,\r\n max_length=30)\r\n output_captions = self.tokenizer.batch_decode(outputs, skip_special_tokens=True)\r\n return output_captions\r\n\r\n def generate_beam(self, audio_src, beam_size=3):\r\n audio_feats = self.encoder(audio_src)\r\n audio_feats = audio_feats.transpose(0, 1)\r\n input_ids = torch.zeros((audio_feats.shape[0], 1)).long().to(self.decoder.device)\r\n input_ids[:, 0] = 101\r\n outputs = self.decoder.generate(input_ids=input_ids,\r\n encoder_hidden_states=audio_feats,\r\n num_beams=beam_size,\r\n do_sample=False,\r\n max_length=30)\r\n output_captions = self.tokenizer.batch_decode(outputs, skip_special_tokens=True)\r\n return output_captions\r\n\r\n def forward(self, audio_src, caption):\r\n tokenized = self.tokenizer(caption, add_special_tokens=True,\r\n padding=True, 
return_tensors='pt')\r\n input_ids = tokenized['input_ids'].to(self.decoder.device)\r\n attention_mask = tokenized['attention_mask'].to(self.decoder.device)\r\n audio_feats = self.encoder(audio_src)\r\n audio_feats = audio_feats.transpose(0, 1)\r\n outputs = self.decoder(input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n encoder_hidden_states=audio_feats\r\n )\r\n logits = outputs.logits[:, :-1, :]\r\n labels = input_ids[:, 1:]\r\n loss = self.loss_func(logits.reshape(-1, self.decoder.config.vocab_size), labels.reshape(-1))\r\n return loss` \r\n\r\nThe encoder is my own CNN.\r\nThe training is defined in forward function and it can be trained properly by passing the encoder outputs as encoder_hidden_states to the decoder.\r\nBut in another two functions for text generation using the generate() function, I found the encoder outputs cannot be passed into the decoder usiing generate(). It generates the same sentences for all different encoder outputs.\r\nThanks for your time!",
"Hey @XinhaoMei,\r\n\r\nSorry we sadly cannot help too much with custom code as this is outside of the scope of Transformers.\r\nCould you try to make use of the forum instead: https://discuss.huggingface.co/ ? :-)",
"> Hey @XinhaoMei,\r\n> \r\n> Sorry we sadly cannot help too much with custom code as this is outside of the scope of Transformers. Could you try to make use of the forum instead: https://discuss.huggingface.co/ ? :-)\r\n\r\nThank you for your reply. In fact, I have solved it by modifying some code in the Transformer library. \r\nAnyway, thanks a lot!",
"Hi @XinhaoMei, how did you solve it?"
] | 1,651
| 1,696
| 1,651
|
NONE
| null |
### Feature request
BERT and most Transformer models can be used as the decoder, with the cross-attention layer randomly initialized, if we set `is_decoder` to True. We could use these models as decoders in an encoder-decoder framework where the encoder is our own defined model, and use the model for multi-modal text generation tasks.
For my case, I am doing audio captioning and I want to use `AutoModelForCausalLM` as the decoder. The model can be trained properly now, by passing the outputs of our own encoder as `encoder_hidden_states` to the decoder.
However, when doing inference such as greedy search and beam search, the encoder outputs can't be passed to the decoder if I want to use the `generate` function or the `greedy_search` function.
I think this could be improved, so that Hugging Face models can be used in more multi-modal text generation tasks when we want to use our own encoder.
### Motivation
In this way, we can use Hugging Face models for more multi-modal text generation tasks, with the freedom to combine them with our own models.
### Your contribution
In my case, I solved this problem by modifying the `prepare_inputs_for_generation()` function in `BertLMHeadModel` to add the encoder output to the return dict as `"encoder_hidden_states"`, then calling `model.greedy_search()` and `model.beam_search()` for text generation.
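The workaround described above can be sketched as an override of `prepare_inputs_for_generation`. This is a minimal illustration of the pattern only: the `BertLMHeadModel` below is a stand-in class, not the real transformers implementation, and `CaptionDecoder` is a hypothetical name:

```python
class BertLMHeadModel:
    # Stand-in for transformers' BertLMHeadModel; the real method builds
    # a richer dict (attention masks, past key values, ...).
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        return {"input_ids": input_ids}

class CaptionDecoder(BertLMHeadModel):
    """Hypothetical decoder that caches its custom encoder's outputs and
    injects them into every generation step as encoder_hidden_states."""

    def __init__(self, encoder_hidden_states):
        self.encoder_hidden_states = encoder_hidden_states

    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        inputs = super().prepare_inputs_for_generation(input_ids, **kwargs)
        # Cross-attention layers read this key at each decoding step.
        inputs["encoder_hidden_states"] = self.encoder_hidden_states
        return inputs

decoder = CaptionDecoder(encoder_hidden_states="audio_feats")
step_inputs = decoder.prepare_inputs_for_generation([101])
```

With this in place, `greedy_search()` and `beam_search()` see the cached encoder outputs on every step without needing them threaded through `generate()`'s arguments.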
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17001/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17000
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17000/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17000/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17000/events
|
https://github.com/huggingface/transformers/issues/17000
| 1,219,162,653
|
I_kwDOCUB6oc5IqvId
| 17,000
|
Padding vs truncation logging mixup
|
{
"login": "mrtoronto",
"id": 34576341,
"node_id": "MDQ6VXNlcjM0NTc2MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34576341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrtoronto",
"html_url": "https://github.com/mrtoronto",
"followers_url": "https://api.github.com/users/mrtoronto/followers",
"following_url": "https://api.github.com/users/mrtoronto/following{/other_user}",
"gists_url": "https://api.github.com/users/mrtoronto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrtoronto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrtoronto/subscriptions",
"organizations_url": "https://api.github.com/users/mrtoronto/orgs",
"repos_url": "https://api.github.com/users/mrtoronto/repos",
"events_url": "https://api.github.com/users/mrtoronto/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrtoronto/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Correct! Would you like to open a PR to patch it?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/tokenization_utils_base.py#L1470
Looks like this error should probably say truncation side instead of padding side.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17000/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16999
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16999/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16999/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16999/events
|
https://github.com/huggingface/transformers/pull/16999
| 1,219,120,747
|
PR_kwDOCUB6oc4294Z5
| 16,999
|
Refactor all require decorators to use skipUnless when possible
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you 🚀 for this PR, and also for pinning me so that I can learn!"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
I was refactoring the Accelerate tests today and I noticed we use `if .. else..` as a conditional for skipping tests based on imports. Unittest has `skipUnless` and `skipIf`, letting us simplify those decorators to be one line.
E.g.:
```python
if not _run_slow_tests:
return unittest.skip("test is slow")(test_case)
else:
return test_case
```
Can be:
```python
return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case)
```
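The one-line form can be exercised with a small self-contained test case (`_run_slow_tests` here is a stand-in flag, not a real Transformers setting):

```python
import io
import unittest

_run_slow_tests = False  # stand-in flag for this sketch


def slow(test_case):
    # One-line equivalent of the if/else above.
    return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case)


class Demo(unittest.TestCase):
    @slow
    def test_heavy(self):
        self.assertTrue(True)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(len(result.skipped))  # → 1: the decorated test was skipped
```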
(Adding you as a reviewer for this Sylvain, unsure who else should be added)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16999/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16999",
"html_url": "https://github.com/huggingface/transformers/pull/16999",
"diff_url": "https://github.com/huggingface/transformers/pull/16999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16999.patch",
"merged_at": 1651236939000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16998
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16998/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16998/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16998/events
|
https://github.com/huggingface/transformers/issues/16998
| 1,219,113,876
|
I_kwDOCUB6oc5IqjOU
| 16,998
|
Question on model_max_length (DeBERTa-V3)
|
{
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"It's likely an error! Do you want to open a discussion on the model repo directly? https://huggingface.co/microsoft/deberta-v3-base/discussions/new",
"i get the same result 1000000000000000019884624838656",
"I'm seeing the same for the 125m and 350m OPT tokenizers (haven't checked the larger ones):\r\n\r\n```python\r\n>>> AutoTokenizer.from_pretrained(\"facebook/opt-350m\")\r\nPreTrainedTokenizer(name_or_path='facebook/opt-350m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken(\"<pad>\", rstrip=False, lstrip=False, single_word=False, normalized=True)})\r\n>>> AutoTokenizer.from_pretrained(\"facebook/opt-125m\")\r\nPreTrainedTokenizer(name_or_path='facebook/opt-125m', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken(\"<pad>\", rstrip=False, lstrip=False, single_word=False, normalized=True)})\r\n```\r\n\r\nIs this definitely a bug?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"deberta v3 uses relative position embeddings which means it isn't limited to the typical 512 token limit.\r\n\r\nAs taken from [section A.5 in their paper](https://arxiv.org/pdf/2006.03654.pdf):\r\n> With relative position bias, we choose to truncate the maximum relative distance to k as in equation 3.\r\nThus in each layer, each token can attend directly to at most (2k - 1) tokens and itself. By stacking\r\nTransformer layers, each token in the l-th layer can attend to at most (2k-1)*l tokens implicitly.\r\nTaking DeBERTa_large as an example, where k = 512, L = 24, in theory, the maximum sequence\r\nlength that can be handled is 24,528.\r\n\r\n\r\nThat being said, it will start to slow down a ton once the sequence length gets bigger than 512",
"Yes, I thought this might be the case, however, the same is true for deberta v2 if I remember correctly and the answer for that is different. What I was asking in the original post is why the the difference between v2 and v3. Thanks for clarifying part of the question/answer. \r\n",
"I meant to add to my last post:\r\nThe max length of 1000000000000000019884624838656 is typically an error when the max length is not specified in the tokenizer config file.\r\n\r\nThere was a discussion about it here: https://huggingface.co/google/muril-base-cased/discussions/1\r\nAnd the solution was to modify the tokenizer config file: https://huggingface.co/google/muril-base-cased/discussions/2",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This is still an issue with the config file and/or config file parser.",
"@bcdarwin \r\n\r\nWhat is the issue? "
] | 1,651
| 1,679
| 1,660
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.3
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
```
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm interested in finding out the max sequence length that a model can be run with. After some code browsing, my current understanding is that this is a property stored in the tokenizer as `model_max_length`.
I wrote a simple script to load a tokenizer for a pretrained model and print the model max length. This is the important part:
```
# initialize the tokenizer to be able to print model_max_length
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast_tokenizer,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
logger.info(f"Model max length {tokenizer.model_max_length}")
```
I used this to print max seq length for models such as BERT, RoBERTa, etc. All with expected results. For DeBERTa, I get confusing results.
If I run my script with DeBERTA-v3 as follows:
```
python check_model_max_len.py --model_name microsoft/deberta-v3-large --output_dir ./tmp --cache_dir ./tmp/cache
```
I get `Model max length 1000000000000000019884624838656`
If I understand correctly, this is a large integer used for models that can support "infinite" sequence lengths.
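For what it's worth, that suspicious constant appears to be nothing more than `int(1e30)`, which Transformers uses as a "no maximum set" sentinel (`VERY_LARGE_INTEGER`) when the tokenizer config doesn't specify `model_max_length`:

```python
# The reported value is exactly int(1e30); the strange trailing digits come
# from representing 1e30 as a binary float before truncating to an int.
sentinel = int(1e30)
print(sentinel)  # → 1000000000000000019884624838656
```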
If I run my script with `--model_name microsoft/deberta-v2-xlarge`, I get `Model max length 512`
I don't understand if this is a bug or a feature :) My understanding is that the main difference between DeBERTa V2 and V3 is the use of an ELECTRA-style discriminator during MLM pretraining in V3. I don't understand why this difference would lead to a difference in supported max sequence lengths between the two models.
I also don't understand why some properties are hardcoded in the python files, e.g.,
```
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
"microsoft/deberta-v2-xlarge": 512,
"microsoft/deberta-v2-xxlarge": 512,
"microsoft/deberta-v2-xlarge-mnli": 512,
"microsoft/deberta-v2-xxlarge-mnli": 512,
}
```
I would expect these to be in the config files for the corresponding models.
### Expected behavior
```shell
I would expect the max supported lengths for DeBERTa-V2 and DeBERTa-V3 models to be the same, unless I'm missing something. Thanks for your help!
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16998/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16997
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16997/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16997/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16997/events
|
https://github.com/huggingface/transformers/pull/16997
| 1,219,081,387
|
PR_kwDOCUB6oc429v-A
| 16,997
|
Update README to latest release
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
The main README (and its variants) all have multiple links to released models that point to the main doc rather than the stable doc. This is because we did the last two releases on branches other than the main one, so the README cleaned by our tools was never set on the main branch.
This PR fixes that and adds instructions in our release guide.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16997/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16997",
"html_url": "https://github.com/huggingface/transformers/pull/16997",
"diff_url": "https://github.com/huggingface/transformers/pull/16997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16997.patch",
"merged_at": 1651169864000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16996
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16996/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16996/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16996/events
|
https://github.com/huggingface/transformers/pull/16996
| 1,219,062,517
|
PR_kwDOCUB6oc429r8E
| 16,996
|
Fix savedir for by epoch in translation example
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes up the `no_trainer` translation example to properly save `by_epoch` checkpoints to the right directory (previously it saved to the step directory, causing a slow test failure)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16996/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16996",
"html_url": "https://github.com/huggingface/transformers/pull/16996",
"diff_url": "https://github.com/huggingface/transformers/pull/16996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16996.patch",
"merged_at": 1651168185000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16995
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16995/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16995/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16995/events
|
https://github.com/huggingface/transformers/pull/16995
| 1,219,002,070
|
PR_kwDOCUB6oc429fUb
| 16,995
|
[FlaxBert] Add ForCausalLM
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Looks good to me - @sanchit-gandhi could you check though which models don't pass with `1e-5` and ideally why?\r\n> \r\n> Overall `4e-2` is fine for me though cc @ydshieh what do you think?\r\n\r\nKeep `1e-5` is much better, because so far I can always find some issues when I find something higher than `1e-5` (well, sometimes it took quite some time to figure out)"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds cross-attention blocks to the following module classes:
- FlaxBertModule
- FlaxRobertaModule (in part through copying FlaxBertModule)
- FlaxBigBirdModule (in part through copying FlaxBertModule)
- FlaxElectraModule (in part through copying FlaxBertModule)
Adds the following ForCausalLM model classes:
- FlaxBertForCausalLM
- FlaxRobertaForCausalLM (in part through copying FlaxBertForCausalLM)
- FlaxBigBirdForCausalLM (in part through copying FlaxBertForCausalLM)
- FlaxElectraForCausalLM (in part through copying FlaxBertForCausalLM)
Adds the following model tests:
- FlaxRobertaForCausalLM
- FlaxBigBirdForCausalLM
- FlaxElectraForCausalLM
Note: FlaxBertForCausalLM is excluded due to the name mismatch with the PyTorch equivalent BertLMHeadModel. It is implicitly tested through the FlaxRobertaForCausalLM model tests, as well as in the following encoder-decoder model tests:
- Bert-2-Bert (encoder-decoder)
- Wav2Vec2-2-Bert (speech encoder-decoder)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16995/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16995",
"html_url": "https://github.com/huggingface/transformers/pull/16995",
"diff_url": "https://github.com/huggingface/transformers/pull/16995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16995.patch",
"merged_at": 1651569980000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16994
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16994/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16994/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16994/events
|
https://github.com/huggingface/transformers/pull/16994
| 1,218,958,840
|
PR_kwDOCUB6oc429WdF
| 16,994
|
[WIP] data2vec jax
|
{
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @BirgerMoell, thanks for jumping on this so quickly! Looks like a solid start on getting the new Data2Vec2Audio feature extractor written in JAX. Feel free to ask me any questions, more than happy to lend a hand! :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @BirgerMoell, thanks again for jumping in on this one! Am happy to help with the JAX/Flax model port - see my previous comment for how to efficiently copy over the skeleton code from FlaxWav2Vec2! If busy, let's maybe close this one for now and re-open when there's time to look into it a bit more?"
] | 1,651
| 1,655
| 1,655
|
NONE
| null |
# What does this PR do?
This adds the Data2Vec Flax model. Work in progress; just an initial draft.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16994/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16994",
"html_url": "https://github.com/huggingface/transformers/pull/16994",
"diff_url": "https://github.com/huggingface/transformers/pull/16994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16994.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16993
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16993/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16993/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16993/events
|
https://github.com/huggingface/transformers/pull/16993
| 1,218,873,806
|
PR_kwDOCUB6oc429D-c
| 16,993
|
Rename to reflect framework pattern AutoModelXxx -> TFAutoModelXxx
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the fix, @amyeroberts 🚀 ",
"@amyeroberts You can now merge the PR (`Squash and merge` button)"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
Fixes a small bug to make sure a TFAutoModel class keeps the TF naming pattern when being updated with `auto_class_update`.
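The intended pattern can be illustrated with a hypothetical sketch (the helper below is illustrative only, not the actual `auto_class_update` implementation): any renaming should split off the framework prefix first and re-attach it afterwards, so `TF` and `Flax` classes keep their prefix.

```python
def split_framework_prefix(class_name: str):
    """Split a model class name into its framework prefix and base name."""
    for prefix in ("TF", "Flax"):
        if class_name.startswith(prefix):
            return prefix, class_name[len(prefix):]
    return "", class_name  # PyTorch classes carry no prefix

# any renaming logic is applied to `base`, then the prefix is restored
prefix, base = split_framework_prefix("TFAutoModelForSequenceClassification")
renamed = prefix + base
```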
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16993/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16993/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16993",
"html_url": "https://github.com/huggingface/transformers/pull/16993",
"diff_url": "https://github.com/huggingface/transformers/pull/16993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16993.patch",
"merged_at": 1651165914000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16992
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16992/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16992/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16992/events
|
https://github.com/huggingface/transformers/issues/16992
| 1,218,822,442
|
I_kwDOCUB6oc5IpcEq
| 16,992
|
Undocumented distributed inference behaviour for `run_summarization.py`
|
{
"login": "alexcoca",
"id": 30216068,
"node_id": "MDQ6VXNlcjMwMjE2MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/30216068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcoca",
"html_url": "https://github.com/alexcoca",
"followers_url": "https://api.github.com/users/alexcoca/followers",
"following_url": "https://api.github.com/users/alexcoca/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcoca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcoca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcoca/subscriptions",
"organizations_url": "https://api.github.com/users/alexcoca/orgs",
"repos_url": "https://api.github.com/users/alexcoca/repos",
"events_url": "https://api.github.com/users/alexcoca/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcoca/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] |
[
"I'm not sure where the bug is. It sounds like you have a question, which should be asked on the [forums](https://discuss.huggingface.co/).\r\n\r\n`Trainer.predict` will return predictions in the same order as the underlying dataset, regardless of the setup you're using to get your predictions. I don't feel it requires documentation as it's the intended behavior of the method. There would be a warning if it was not the case.",
"Hi @sgugger, I apologise for raising this issue. I was expecting to find info about async behaviour in docs and did not.\r\n\r\nI just finished reading the code and I see that `L174` in `trainer_pt_utils.py` calls `torch.distributed.all_gather` with `async_op=False` so we preserve the order as you say.\r\n\r\nDo you think it would be worth expanding the docs? We could show how to run distributed inference for `run_summarization.py` or add one sentence in the part of the docs that tells us how to run distributed training with a short paragraph on how to do distributed inference with a note \"the order of the underlying dataset will be preserved\"?\r\n\r\nIf you don't think this adds value, feel free to close the issue straight away.\r\n\r\nThank you for your helpful answer.",
"If you feel the doc needs to be expanded, I'm happy to review a PR, I just told you why I thought it wasn't worth mentioning when I wrote the current doc of `Trainer.predict` ;-)\r\nBut adding some lines on how to run distributed inference are more than welcome!",
"Ok, I'll add that to a TODO as this should be a quick one. Let's label this as WIP as I expect this would be a couple of weeks given my current workload.",
"Just an update - I did manage to successfully deploy training code written with the `Trainer` API on a `SLURM` cluster using `torchrun` ([here](https://pytorch.org/docs/stable/elastic/run.html?highlight=torchrun)). We can discuss where in the docs it would be best to show an example - I think it would help a lot of people. There are some posts on the forum I can update, to start with. "
] | 1,651
| 1,652
| null |
NONE
| null |
### System Info
```shell
Fails with error
Traceback (most recent call last):
File "/scratches/neuron/anaconda3/envs/T5DST-SGD/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/scratches/neuron/anaconda3/envs/T5DST-SGD/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 26, in <module>
from .user import UserCommands
File "/scratches/neuron/anaconda3/envs/T5DST-SGD/lib/python3.8/site-packages/transformers/commands/user.py", line 20, in <module>
from huggingface_hub.hf_api import HfFolder, create_repo, list_repos_objs, login, logout, whoami
ImportError: cannot import name 'list_repos_objs' from 'huggingface_hub.hf_api' (/scratches/neuron/anaconda3/envs/T5DST-SGD/lib/python3.8/site-packages/huggingface_hub/hf_api.py)
However I am running `4.16.2` with python `3.8`.
```
### Who can help?
@sgugger @stevhliu @patil-suraj
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am working with a copy of the `run_summarization.py` (pytorch) example that the authors of this [paper](https://arxiv.org/pdf/2109.07506.pdf) modified to work for dialogue state tracking (implemented [here ](https://github.com/chiahsuan156/DST-as-Prompting) for reference)
The `run_summarization.py` script can be launched with `torch.distributed.launch` and the `--do_predict` option. This shards the test-set examples across GPUs, so generation and the computation of task-oriented metrics are accelerated. The predictions are written to the `generated_predictions.txt` file in the output directory.
To compute dialogue-relevant task-oriented metrics, one ought to run a postprocessing script that uses `generated_predictions.txt`. Because the trainer removes from the dataset all columns that are not keys of the model's `forward` method, the metadata linking each prediction to its source example is lost. Therefore, we rely on the ordering of `generated_predictions.txt` matching the order of the examples in the dataset.
My question is:
- Does `predictions` (`L675`) obey the order of the `dataset`? So if my dataset has 1m examples, will the 1m entries in the `predictions` list match the order of the dataset iterator? In my experience this depends on implementation* and the behaviour is not documented.
*For example, in frameworks such as `ray` you have to explicitly enforce the order in which the results are returned and the predictions may be returned out of order - if a process finishes, it returns its results so it can be given more work by an external load balancer.
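The order-preservation argument can be sketched in pure Python, assuming `DistributedSampler`-style round-robin sharding and a synchronous all-gather that returns per-rank outputs in rank order (both assumptions about the setup, not a reproduction of the actual Trainer code):

```python
def shard(dataset, rank, world_size):
    # round-robin assignment: rank r gets indices r, r + world_size, ...
    return dataset[rank::world_size]

def gather_in_order(shards, total_len):
    # a synchronous all_gather yields every rank's outputs in rank order;
    # interleaving them position by position recovers the dataset order
    world_size = len(shards)
    longest = max(len(s) for s in shards)
    out = []
    for i in range(longest):
        for r in range(world_size):
            if i < len(shards[r]):
                out.append(shards[r][i])
    return out[:total_len]

dataset = list(range(10))
shards = [shard(dataset, r, 3) for r in range(3)]
recovered = gather_in_order(shards, len(dataset))
```

With an asynchronous gather, by contrast, the per-rank blocks could arrive in completion order and this reconstruction would fail.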
### Expected behavior
```shell
Improved documentation about expected behaviour here. Happy to discuss where this should be added and contribute a small PR to clarify this important issue.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16992/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/16991
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16991/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16991/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16991/events
|
https://github.com/huggingface/transformers/issues/16991
| 1,218,791,428
|
I_kwDOCUB6oc5IpUgE
| 16,991
|
The current equivalent of transformers.models.bert.modeling_bert.gelu
|
{
"login": "Pzoom522",
"id": 22963490,
"node_id": "MDQ6VXNlcjIyOTYzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/22963490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pzoom522",
"html_url": "https://github.com/Pzoom522",
"followers_url": "https://api.github.com/users/Pzoom522/followers",
"following_url": "https://api.github.com/users/Pzoom522/following{/other_user}",
"gists_url": "https://api.github.com/users/Pzoom522/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pzoom522/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pzoom522/subscriptions",
"organizations_url": "https://api.github.com/users/Pzoom522/orgs",
"repos_url": "https://api.github.com/users/Pzoom522/repos",
"events_url": "https://api.github.com/users/Pzoom522/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pzoom522/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"All of those have been refactored in a `ACT2FN` dictionary which is used across the codebase. You can import it as such:\r\n\r\n```py\r\nfrom transformers.activations import ACT2FN\r\n\r\ngelu_function = ACT2FN['gelu']\r\n```\r\n\r\nHope that helps!",
"Many thx. Closed : )"
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.5.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I believe it's an issue relevant to version migration. The code I'm using contains
```
x = transformers.models.bert.modeling_bert.gelu(x)
```
which seems to be no longer usable.
Similar problem was discussed in https://stackoverflow.com/questions/66133626 but with no good answer.
### Expected behavior
Please let me know what the current API is that replaces `transformers.models.bert.modeling_bert.gelu`, or whether it is safe to directly use `x = x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))` instead.
Many thanks!
PS: the link to the migration guide should be changed to https://huggingface.co/docs/transformers/migration.
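For reference, the removed helper computed the exact (erf-based) GELU, which can be reproduced in a few lines. This is a plain-`math` sketch for a scalar input; in current versions `torch.nn.functional.gelu` (default, non-approximate mode) computes the same function elementwise on tensors.

```python
import math

def gelu_scalar(x: float) -> float:
    # exact (erf-based) GELU: x * 0.5 * (1 + erf(x / sqrt(2))), i.e. x * Phi(x)
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```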
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16991/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16990
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16990/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16990/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16990/events
|
https://github.com/huggingface/transformers/pull/16990
| 1,218,712,934
|
PR_kwDOCUB6oc428heR
| 16,990
|
[T5 Tokenizer] Model has no fixed position ids - there is no hardcode…
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Oh, that's a serious change if a user forgot to set a `max_length`. I understand it fixes a bug, but still would like @LysandreJik 's take on it as well. Thanks for the PR in any case!\r\n\r\nAgree! We should at least put some :exclamation: mark in this PR stating that this change could lead to unexpected behavior OOM if `max_length` is not defined.",
"That is definitely a breaking change we want to avoid, IMO. This is likely to break user pipelines with OOM errors or a non consistent number of tokens generated. I'd advocate against this change, and would push to:\r\n- Document that while the limit is set to 512, T5 can handle longer lengths and encourage users to define their own max lengths\r\n- Document that this limit will be removed in v5\r\n- Update the warning just for T5 (see below)\r\n\r\n<details>\r\n<summary>Updating the warning just for T5</summary>\r\n\r\nYou can override this method, which is in `tokenization_utils_base.py`, in `tokenization_t5.py` and `tokenization_t5_fast.py`\r\n\r\nhttps://github.com/huggingface/transformers/blob/e6f00a11d7fa34215184e3c797e19e6c7debe0fe/src/transformers/tokenization_utils_base.py#L3379-L3397\r\n\r\nI wouldn't recommend skipping the warning altogether as it still gives important information regarding why the text was eventually truncated or padded. But updating the message makes sense:\r\n\r\n```diff\r\n def _eventual_warn_about_too_long_sequence(self, ids: List[int], max_length: Optional[int], verbose: bool):\r\n \"\"\"\r\n Depending on the input and internal state we might trigger a warning about a sequence that is too long for its\r\n corresponding model\r\n\r\n Args:\r\n ids (`List[str]`): The ids produced by the tokenization\r\n max_length (`int`, *optional*): The max_length desired (does not trigger a warning if it is set)\r\n verbose (`bool`): Whether or not to print more information and warnings.\r\n\r\n \"\"\"\r\n if max_length is None and len(ids) > self.model_max_length and verbose:\r\n if not self.deprecation_warnings.get(\"sequence-length-is-longer-than-the-specified-maximum\", False):\r\n logger.warning(\r\n- \"Token indices sequence length is longer than the specified maximum sequence length \"\r\n- f\"for this model ({len(ids)} > {self.model_max_length}). 
Running this sequence through the model \"\r\n- \"will result in indexing errors\"\r\n+ \"The T5 model has no maximum length, but a maximum length is still set for backwards compatibility \"\r\n+ \"purposes. To take advantage of the full capabilities of the model, we recommend setting a \"\r\n+ \"max_length manually.\"\r\n )\r\n self.deprecation_warnings[\"sequence-length-is-longer-than-the-specified-maximum\"] = True\r\n```\r\n \r\n</details>",
"Okey took some time to think about it - it's really not easy. I agree @LysandreJik that the previous change (while correct) is too strong as it might break quite some pipelines. \r\n\r\nTo begin with, note that `model_max_length` or `max_length` is only relevant if `truncation=True` is set. So for all other cases this bug is not relevant. \r\nNow the problem is that by default T5 should **not** have a set maximum length. \r\nHowever it is completely reasonable for people to set their own maximum length. To me this means the following: If a user instantiates T5 Tokenizer with `model_max_length` or passes `max_length` when encoding/padding, then these values should **always** be the true max length values and in this case the (incorrectly) hard-coded max length values can be discarded. \r\nOnly if a user does not pass `max_length` when encoding/padding and does not define `model_max_length` at init, then we should fall back to the (incorrect) hard-coded max length values until v5.\r\n\r\nIn this PR there two things are changed the 2.) can be considered a small breaking change, but it's really a bug correction for me.\r\n\r\n1. 
If T5 Tokenizer is instantiated without a custom `model_max_length` and one of the identifiers for which `model_max_length` is hardcoded is used, the following warning appears:\r\n```\r\nThis tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\r\nFor now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\r\n- Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.\r\n- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\r\n- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\r\n```\r\nPreviously no warning appeared. Note that this warning appears every time at init. However it can be disabled as described above and it's also good to warn the user about upcoming changes this way.\r\n\r\n2. If T5 Tokenizer is instantiated with a `model_max_length`, this `model_max_length` always counts even if it's longer than the hardcoded ones. This means the following snippet:\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import T5TokenizerFast\r\n\r\ntok = T5TokenizerFast.from_pretrained(\"t5-base\", model_max_length=600)\r\n\r\nout = tok(100 * \"hello there is a\", padding=\"longest\", truncation=True).input_ids\r\nprint(len(out))\r\n```\r\n\r\ndoes **not** throw a warning (since the user defines `model_max_length`) and print a length of 600 (not 512). <- this behavior is different from how it was before.\r\nMy rational on changing this is the following:\r\n- T5's hardcoded model max lengths are wrong, I'm fine with using those if no `model_max_length` is defined or no `max_length` is passed\r\n- **But**, if a user already passes a `model_max_length` <- then this should be the only source of truth. E.g. 
In the example above 600 should be the max length and not 512.\r\n\r\n\r\n**To be crystal clear 2.) changes the behavior - e.g. run the code snippet before/after the PR, but it's really a bug correction here IMO**\r\n",
"Failure is unrelated"
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
…d max length
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16986
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16990/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16990",
"html_url": "https://github.com/huggingface/transformers/pull/16990",
"diff_url": "https://github.com/huggingface/transformers/pull/16990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16990.patch",
"merged_at": 1651519654000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16989
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16989/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16989/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16989/events
|
https://github.com/huggingface/transformers/pull/16989
| 1,218,673,190
|
PR_kwDOCUB6oc428ZCy
| 16,989
|
set eos_token_id to None to generate until max length
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
Update `check_encoder_decoder_model_generate` to generate until max length.
Otherwise, this check
```python
self.assertEqual(generated_output.shape, (input_ids.shape[0],) + (decoder_config.max_length,))
```
might fail.
### Remark
In `generate()`, we have
https://github.com/huggingface/transformers/blob/dced262409177586bb510b6b724c762fb89da0e8/src/transformers/generation_utils.py#L1129-L1133
So I think the (original) logic about `Generate until max length` in `check_encoder_decoder_model_generate` should be updated too. The case won't really happen in the tests, but in general, `config` might still have `eos_token_id`.
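The stopping logic in question can be sketched with a toy greedy loop (illustrative only, not the actual `generate()` code): with `eos_token_id=None` the early-exit check never fires, so generation always runs to `max_length` and the shape assertion in the test holds.

```python
def toy_generate(next_token_fn, max_length, eos_token_id=None):
    # minimal greedy decoding loop with an optional EOS early exit
    tokens = []
    while len(tokens) < max_length:
        tok = next_token_fn(tokens)
        tokens.append(tok)
        if eos_token_id is not None and tok == eos_token_id:
            break  # never triggers when eos_token_id is None
    return tokens

# a toy "model" that immediately emits token 2 (a would-be EOS)
emit_eos = lambda toks: 2
```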
I also leave the corresponding flax tests untouched for now.
This PR will fix
```
FAILED tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::Swin2BartModelTest::test_encoder_decoder_model_generate
tests/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py:280: in check_encoder_decoder_model_generate
self.assertEqual(generated_output.shape, (inputs.shape[0],) + (decoder_config.max_length,))
AssertionError: torch.Size([13, 2]) != (13, 20)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16989/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16989",
"html_url": "https://github.com/huggingface/transformers/pull/16989",
"diff_url": "https://github.com/huggingface/transformers/pull/16989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16989.patch",
"merged_at": 1651168058000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16988
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16988/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16988/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16988/events
|
https://github.com/huggingface/transformers/pull/16988
| 1,218,565,444
|
PR_kwDOCUB6oc428Bqm
| 16,988
|
Add Tensorflow Swin model
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@LysandreJik @Rocketknight1 @NielsRogge I added you as reviewers to cover the transformers, vision and tensorflow aspects of this PR. Apologies if you're not the right person to review this - and please feel free to remove yourselves or add others :) ",
"I think those are good choices! Reviewing now."
] | 1,651
| 1,652
| 1,652
|
COLLABORATOR
| null |
# What does this PR do?
Adds a TensorFlow implementation of the Swin architecture and associated tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Will tag specific members / contributors when moved from drafts.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16988/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16988/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16988",
"html_url": "https://github.com/huggingface/transformers/pull/16988",
"diff_url": "https://github.com/huggingface/transformers/pull/16988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16988.patch",
"merged_at": 1652735993000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16987
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16987/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16987/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16987/events
|
https://github.com/huggingface/transformers/issues/16987
| 1,218,548,010
|
I_kwDOCUB6oc5IoZEq
| 16,987
|
Memory calculator for transformer models
|
{
"login": "marksverdhei",
"id": 46672778,
"node_id": "MDQ6VXNlcjQ2NjcyNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/46672778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marksverdhei",
"html_url": "https://github.com/marksverdhei",
"followers_url": "https://api.github.com/users/marksverdhei/followers",
"following_url": "https://api.github.com/users/marksverdhei/following{/other_user}",
"gists_url": "https://api.github.com/users/marksverdhei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marksverdhei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marksverdhei/subscriptions",
"organizations_url": "https://api.github.com/users/marksverdhei/orgs",
"repos_url": "https://api.github.com/users/marksverdhei/repos",
"events_url": "https://api.github.com/users/marksverdhei/events{/privacy}",
"received_events_url": "https://api.github.com/users/marksverdhei/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Reminds me of your current work on accelerate @muellerzr!",
"On my to do (asap) is to integrate that work from Accelerate. Should be in the next few weeks here. Basically how that one works is we retry the training loop reducing the batch size until we escape the CUDA OOM (this was request *many* times in our internal slack to integrate it here as well). Said implementation: https://github.com/huggingface/accelerate/blob/main/src/accelerate/memory_utils.py",
"Very interesting, I wasn't aware of this. Looking forward to the integration\r\nAny thoughts on pre-calculating expected memory usage?\r\nOr any reason why this would be unfeasible or impractical?",
"Without lots of very specific code, currently it is unfeasible (though won't be soon!).\r\nThe key is with pytorch's `meta` device. It currently doesn't work on all ops, but once it does we should be able to track all the sizes of the intermediate activation without real memory usage, getting us there. \r\n\r\nOtherwise we currently *could* by just doing the size of the model * the right number based on the optimizer selected, but we'd still be missing all of those intermediate activation sizes.\r\n\r\nFor now, the bs reducer is a good way to only add a few minutes (if not seconds) to get it going, hence why I went with that approach. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"FYI we've now built this: https://huggingface.co/spaces/hf-accelerate/model-memory-usage?logs=build",
"Thats awesome! Thanks a lot!",
"@muellerzr I'm having an error with the calculator. Is not working for me with \"gated models\". I'm getting this error:\r\n\r\nError\r\n\"Model `meta-llama/Llama-2-7b-chat-hf` had an error, please open a discussion on the model's page with the error message and name: `You are trying to access a gated repo.\\nMake sure to request access at https://huggingface.co/meta-llama/Llama-2-7b-chat-hf and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`.\"\r\n\r\nI'm using a working access token. I have been using it to download llama 2 models."
] | 1,651
| 1,696
| 1,654
|
NONE
| null |
### Feature request
This feature request is quite high level.
The feature is some tool, function, object, etc. that takes in information such as the model config, trainer arguments, and max sequence length, and calculates the expected memory usage. It could, for instance, produce a warning if expected memory exceeds currently available memory, and let you select your hyperparameters with memory usage taken into account (instead of discovering by trial and error which parameters raise memory errors and which don't). I mentioned this on the HF discord and was encouraged to make a feature request
### Motivation
When working with this library and the trainer API,
I've been missing some type of tool that can calculate the expected memory consumption of your model training.
`RuntimeError: CUDA error: out of memory` haunts us all and can perhaps be better understood if we're able to precompute expected memory and see whether the error is expected or not. It also makes it easier to select hyperparameters under memory constraints.
### Your contribution
Should this be of interest:
* how it will be integrated should probably be agreed upon first.
* I'm willing to contribute to this in the summer if nobody has picked it up by then, should my help be wanted
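As a rough sketch of the kind of estimate such a tool could start from (the function name and multipliers below are illustrative assumptions, not an existing API; intermediate activations, which often dominate, are deliberately ignored):

```python
def estimate_training_memory_bytes(num_params: int,
                                   bytes_per_param: int = 4,
                                   optimizer_states_per_param: int = 2,
                                   gradient_copies: int = 1) -> int:
    """Rough lower bound on training memory: weights + gradients +
    optimizer states. Defaults assume fp32 weights and an Adam-like
    optimizer keeping two states per parameter. Activations, which
    depend on batch size and sequence length, are not counted."""
    copies = 1 + gradient_copies + optimizer_states_per_param
    return num_params * bytes_per_param * copies

# Example: a 110M-parameter model trained in fp32 with Adam needs at
# least 110e6 * 4 * (1 + 1 + 2) bytes, i.e. roughly 1.76 GB, before
# any intermediate activations are counted.
print(estimate_training_memory_bytes(110_000_000) / 1e9)  # 1.76
```

This is only a floor on the real number; an accurate calculator would also need the activation sizes, which is what tracing through something like PyTorch's `meta` device would provide.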
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16987/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16986
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16986/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16986/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16986/events
|
https://github.com/huggingface/transformers/issues/16986
| 1,218,514,372
|
I_kwDOCUB6oc5IoQ3E
| 16,986
|
Warning tells you you will get indexing errors in T5 for going beyond max length
|
{
"login": "marksverdhei",
"id": 46672778,
"node_id": "MDQ6VXNlcjQ2NjcyNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/46672778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marksverdhei",
"html_url": "https://github.com/marksverdhei",
"followers_url": "https://api.github.com/users/marksverdhei/followers",
"following_url": "https://api.github.com/users/marksverdhei/following{/other_user}",
"gists_url": "https://api.github.com/users/marksverdhei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marksverdhei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marksverdhei/subscriptions",
"organizations_url": "https://api.github.com/users/marksverdhei/orgs",
"repos_url": "https://api.github.com/users/marksverdhei/repos",
"events_url": "https://api.github.com/users/marksverdhei/events{/privacy}",
"received_events_url": "https://api.github.com/users/marksverdhei/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks a lot for the issue @marksverdhei . You're right T5 has no fixed max length - so this warning is confusing. \r\n\r\nThe reason why lots of people associate T5 with a max length of 512 was that it was pretrained on a max length of 512, but is not limited to this length! \r\n\r\nIt has shown to generalize well to longer sequences. Also see: https://github.com/huggingface/transformers/issues/5204",
"I think it is a bit confusing. As in the paper, \"We use a maximum sequence length of 512\". Note that this is number of tokens, not the words. This I guess corresponds to max_input_length = 512 parameter. This is the maximum number of tokens that the underlying model can take. You can not change it.\r\n\r\nBut for longer text, you can do scripting to break it into 512 chunks, and feed them to the model. And I guess that is where max_source_length (length of text) is relevant. \r\n",
"> I think it is a bit confusing. As in the paper, \"We use a maximum sequence length of 512\". Note that this is number of tokens, not the words. This I guess corresponds to max_input_length = 512 parameter. This is the maximum number of tokens that the underlying model can take. You can not change it.\r\n> \r\n> But for longer text, you can do scripting to break it into 512 chunks, and feed them to the model. And I guess that is where max_source_length (length of text) is relevant.\r\n\r\nWith T5 you can change max input length. Relative positional embeddings make it possible to process arbitrary lengths, which is what T5 uses, as opposed to classical positional embeddings such as in the original transformer architecture. \r\nIt is just that when training, a length of 512 tokens is used because it is a trade-off between \r\nprocessing long-enough texts while not using too much time and memory. \r\n"
] | 1,651
| 1,682
| 1,651
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.16.2
- Python version: 3.8.12
```
### Who can help?
@patrickvonplaten @saul
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> inputs = tokenizer("foo " * 2000, return_tensors="pt")
# Outputs: Token indices sequence length is longer than the specified maximum sequence length for this model (4001 > 512). Running this sequence through the model will result in indexing errors
```
```python
>>> from transformers import AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> model.generate(**inputs)
tensor([[ 0, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575, 32, 5575,
32, 5575, 32, 5575, 32, 5575, 32, 5575]])
```
No indexing errors
### Expected behavior
The warning is wrong for T5 since it uses relative positional embeddings.
I would expect no warning, or otherwise a warning about memory usage.
I suppose this issue applies to all models that do not have fixed-length positional encodings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16986/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16986/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16985
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16985/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16985/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16985/events
|
https://github.com/huggingface/transformers/issues/16985
| 1,218,459,941
|
I_kwDOCUB6oc5IoDkl
| 16,985
|
Beginning word ids
|
{
"login": "oorojoo",
"id": 93263530,
"node_id": "U_kgDOBY8Wqg",
"avatar_url": "https://avatars.githubusercontent.com/u/93263530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oorojoo",
"html_url": "https://github.com/oorojoo",
"followers_url": "https://api.github.com/users/oorojoo/followers",
"following_url": "https://api.github.com/users/oorojoo/following{/other_user}",
"gists_url": "https://api.github.com/users/oorojoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oorojoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oorojoo/subscriptions",
"organizations_url": "https://api.github.com/users/oorojoo/orgs",
"repos_url": "https://api.github.com/users/oorojoo/repos",
"events_url": "https://api.github.com/users/oorojoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/oorojoo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer('in an ideal situation, it works')
print(inputs.word_ids())
Returns
[None, 0, 1, 2, 3, 3, 3, 4, 5, 6, None]
Is there a method to identify which token is the beginning of a word?
For example:
Returns
[None, 1, 1, 1, 1, 0, 0, 1, 1, 1, None]
where 1 is the start token for a given word and 0 is a token that is not the start token for a given word.
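For reference, a mask along those lines can be derived directly from the `word_ids()` output (the helper name below is made up; this is just one way to compute it, keeping `None` for special tokens as in the desired output):

```python
def word_start_mask(word_ids):
    """Mark the first token of each word with 1, other tokens of the
    same word with 0, and keep None for special tokens (whose word id
    is None in tokenizers' word_ids() output)."""
    mask = []
    previous = None
    for wid in word_ids:
        if wid is None:
            mask.append(None)          # special token, no word
        elif wid != previous:
            mask.append(1)             # first token of a new word
        else:
            mask.append(0)             # continuation of the same word
        previous = wid
    return mask

print(word_start_mask([None, 0, 1, 2, 3, 3, 3, 4, 5, 6, None]))
# [None, 1, 1, 1, 1, 0, 0, 1, 1, 1, None]
```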
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16985/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16984
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16984/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16984/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16984/events
|
https://github.com/huggingface/transformers/issues/16984
| 1,218,342,659
|
I_kwDOCUB6oc5Inm8D
| 16,984
|
Getting a fixed size embedding from the last hidden state.
|
{
"login": "PrithivirajDamodaran",
"id": 7071019,
"node_id": "MDQ6VXNlcjcwNzEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7071019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrithivirajDamodaran",
"html_url": "https://github.com/PrithivirajDamodaran",
"followers_url": "https://api.github.com/users/PrithivirajDamodaran/followers",
"following_url": "https://api.github.com/users/PrithivirajDamodaran/following{/other_user}",
"gists_url": "https://api.github.com/users/PrithivirajDamodaran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PrithivirajDamodaran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PrithivirajDamodaran/subscriptions",
"organizations_url": "https://api.github.com/users/PrithivirajDamodaran/orgs",
"repos_url": "https://api.github.com/users/PrithivirajDamodaran/repos",
"events_url": "https://api.github.com/users/PrithivirajDamodaran/events{/privacy}",
"received_events_url": "https://api.github.com/users/PrithivirajDamodaran/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nFirst of all, please use the forum for these kind of questions, we'd like to keep Github issues for bugs/feature requests.\r\n\r\nSecond, a last hidden state is typically of shape (batch_size, seq_len, hidden_size) for these kind of models, as they are Transformer-based. You feed a sequence of patches through a Transformer encoder, hence you end up with a vector for each of these patches at the end. You can permute it to turn it into a tensor of shape (batch_size, hidden_size, seq_len), and split the last dimension based on the patch size, to get a tensor of shape (batch_size, hidden_size, patch_size, patch_size). This gives you an image-like representation.\r\n\r\nIn code:\r\n\r\n```\r\nfrom transformers import AutoModel\r\nimport torch\r\n\r\nmodel = AutoModel.from_pretrained(\"microsoft/beit-base-patch16-224\")\r\n\r\npixel_values = torch.randn(1, 3, 224, 224)\r\n\r\noutputs = model(pixel_values)\r\nlast_hidden_state = outputs.last_hidden_state[:,1:,:] # we discard the CLS token\r\n\r\nbatch_size = last_hidden_state.shape[0]\r\nnum_patches = model.config.image_size // model.config.patch_size\r\nimage_like_representation = last_hidden_state.permute(0, 2, 1)\r\nimage_like_representation = image_like_representation.view(batch_size, -1, num_patches, num_patches)\r\n```",
"Rank Apology - I will use the forum going forward.\r\n\r\nBut just to close this one out, maybe I am not explaining properly: I am using BEiT as the image encoder and BERT as the Text encoder. I am trying to get a fixed size 1D representation of the last_hidden_state of BEiT (Something like what CLIP obtains) to concatenate with a BERT embedding to feed into an MLP head.\r\n\r\nI could use the pooler_output of the image but it doesn’t seem to preserve certain spatial nuances hence I would like to use the last_hidden_state\r\n\r\n\r\nCan you help me? Moved to the forum as well - https://discuss.huggingface.co/t/how-to-get-a-fixed-size-embedding-from-the-last-hidden-state-of-vision-models/17275\r\n",
"@PrithivirajDamodaran you can adapt code snippet from @NielsRogge as follows:\r\n\r\n```\r\nfrom transformers import AutoModel\r\nimport torch\r\n\r\nmodel = AutoModel.from_pretrained(\"microsoft/beit-base-patch16-224\")\r\n\r\npixel_values = torch.randn(1, 3, 224, 224)\r\n\r\noutputs = model(pixel_values)\r\nimg_embedding = outputs.last_hidden_state[0, 0, :] # CLS token of the last layer can be used as the image embedding\r\n\r\n```",
"@nihit - To get the CLS token, I can directly get the ```pooler_output```, instead of slicing last_hidden_state. Because all HF bare models returns by default two keys ```pooler_output``` and ```last_hidden_state```. BTW pooler_output is nothing but the raw first entry (CLS) from the last_hidden_state but it was passed through a simple MLP pooler layer (linear + tanh). \r\n\r\nI am NOT interested in CLS embedding, I would like to have a fixed representation of the entire last layer itself i.e. last_hidden_state.\r\n\r\nI have figured this out.",
"@PrithivirajDamodaran - I noticed that sometimes the last_hidden_state has seq length dimension different from the number of input_ids from the pre-processor (for example layoutlm). Does anyone know why this happens?"
] | 1,651
| 1,657
| 1,651
|
NONE
| null |
I am trying to work with the bare vision models' last hidden state (like ViT, BEiT). How can I get a fixed-size representation out of the last_hidden_state, which is of shape (channels, height, width), say a 1D one? Should I simply flatten it?
Please advise
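One common option is mean pooling over the sequence dimension, sketched here with random data in place of a real model output (shapes are illustrative; for Transformer-based vision models the last hidden state is `(batch_size, seq_len, hidden_size)` rather than an image-shaped tensor):

```python
import numpy as np

# Hypothetical last_hidden_state of shape (batch_size, seq_len,
# hidden_size), e.g. (1, 197, 768) for a ViT-base output
# (1 CLS token + 196 patch tokens).
last_hidden_state = np.random.randn(1, 197, 768)

# Mean-pool over the sequence dimension: one fixed-size vector per
# image, independent of the number of patch tokens.
embedding = last_hidden_state.mean(axis=1)
print(embedding.shape)  # (1, 768)
```

Unlike flattening, the pooled vector has the same size regardless of input resolution, which makes it easy to concatenate with a text embedding.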
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16984/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16983
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16983/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16983/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16983/events
|
https://github.com/huggingface/transformers/issues/16983
| 1,218,285,505
|
I_kwDOCUB6oc5InY_B
| 16,983
|
how huggingface process uneven input tensors
|
{
"login": "Slyne",
"id": 6286804,
"node_id": "MDQ6VXNlcjYyODY4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6286804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Slyne",
"html_url": "https://github.com/Slyne",
"followers_url": "https://api.github.com/users/Slyne/followers",
"following_url": "https://api.github.com/users/Slyne/following{/other_user}",
"gists_url": "https://api.github.com/users/Slyne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Slyne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Slyne/subscriptions",
"organizations_url": "https://api.github.com/users/Slyne/orgs",
"repos_url": "https://api.github.com/users/Slyne/repos",
"events_url": "https://api.github.com/users/Slyne/events{/privacy}",
"received_events_url": "https://api.github.com/users/Slyne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Please use the [forum](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs and feature requests only. In this instance, I don't think the `Trainer` will help you as you want a very specific data processing, so you should use `Accelerate`. The dispatch feature it offers is done exactly for users with `IterableDataset` that don't want to process the data in each process.",
"> Please use the [forum](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs and feature requests only. In this instance, I don't think the `Trainer` will help you as you want a very specific data processing, so you should use `Accelerate`. The dispatch feature it offers is done exactly for users with `IterableDataset` that don't want to process the data in each process.\r\n\r\nThank you. I solved this issue just by customizing the get_train_loader, doing padding in my own pipelining and removing data collator and adding a model.join() context.\r\nBTW, does the dispatch feature in Accelerate can solve every process has different number of batches in one epoch when dataloader.num_workers > 0 without duplicate data processing?\r\n\r\n\r\nAnother thing I find is that the [find_batch_size](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer_pt_utils.py#L105) cannot support [BatchFeature](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/feature_extraction_utils.py#L62).\r\nNot sure if this is an issue.\r\n\r\n",
"The last one is a bug, will fix that!"
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
### Feature request
For IterableDataset, it seems there's no way to handle uneven input tensors in trainer.py. (Please correct me if I misunderstand this.)
Pytorch document suggests to use join: https://pytorch.org/tutorials/advanced/generic_join.html#what-is-join
Secondly, the dataloader for IterableDataset doesn't have a sampler, which may be an issue when `num_workers > 0`.
https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer.py#L672
### Motivation
Currently in trainer.py, the IterableDataset is wrapped into IterableDatasetShard in distributed training. It seems that it requires every process to have the same whole dataset and distribute the samples in [IterableDatasetShard](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer_pt_utils.py#L678), which makes every batch has the same amount of data by [pad the first batch data](https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/trainer_pt_utils.py#L770).
When in training, since the data is guranteed to be equal in every process, there is no need to use [Join](https://pytorch.org/tutorials/advanced/generic_join.html#what-is-join) to process the uneven input datasets.
Here's my case. I have a very large audio dataset of 1 million audio files, and each file requires a processing step just like [this example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py). If I follow the same steps as the example, the large training dataset not only requires a lot of processing time but also a lot of disk storage to store the features.
Therefore I decided to use IterableDataset to do on-the-fly preprocessing. However, I need to maintain a buffer for shuffling, so CPU memory grows greatly, and every process does a lot of duplicate work that another process has already done. For example, say we have 2 processes to process 11 samples: every process has to process all 11 samples instead of 5 + 6.
So I decided to give up IterableDatasetShard and use IterableDataset directly, where every process has its own part of the data to process. For example, process 0 has 5 samples and process 1 has 6 samples. However, this also means some processes will finish training earlier, and I see there's no join used in trainer.py. So I was wondering what's the best practice for training and inference in this use case.
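A minimal sketch of the per-process split described above (`shard_indices` is a hypothetical helper, not a Trainer API; in real DDP training the resulting uneven shards would still need something like PyTorch's Join context or padding so that no rank hangs):

```python
def shard_indices(num_samples: int, rank: int, world_size: int) -> list:
    """Give each rank a contiguous, non-overlapping slice of the
    dataset; shard sizes differ by at most one sample, so no sample
    is processed twice and none is skipped."""
    base, remainder = divmod(num_samples, world_size)
    start = rank * base + min(rank, remainder)
    length = base + (1 if rank < remainder else 0)
    return list(range(start, start + length))

# 11 samples over 2 processes: rank 0 gets 6 samples, rank 1 gets 5,
# instead of each rank iterating over all 11.
print(shard_indices(11, 0, 2))  # [0, 1, 2, 3, 4, 5]
print(shard_indices(11, 1, 2))  # [6, 7, 8, 9, 10]
```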
Thanks!
### Your contribution
Not sure
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16983/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16982
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16982/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16982/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16982/events
|
https://github.com/huggingface/transformers/issues/16982
| 1,218,264,229
|
I_kwDOCUB6oc5InTyl
| 16,982
|
Exporting DeBerta using custom onnx configuration
|
{
"login": "RaiAmanRai",
"id": 102528851,
"node_id": "U_kgDOBhx3Uw",
"avatar_url": "https://avatars.githubusercontent.com/u/102528851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RaiAmanRai",
"html_url": "https://github.com/RaiAmanRai",
"followers_url": "https://api.github.com/users/RaiAmanRai/followers",
"following_url": "https://api.github.com/users/RaiAmanRai/following{/other_user}",
"gists_url": "https://api.github.com/users/RaiAmanRai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RaiAmanRai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RaiAmanRai/subscriptions",
"organizations_url": "https://api.github.com/users/RaiAmanRai/orgs",
"repos_url": "https://api.github.com/users/RaiAmanRai/repos",
"events_url": "https://api.github.com/users/RaiAmanRai/events{/privacy}",
"received_events_url": "https://api.github.com/users/RaiAmanRai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @lewtun @michaelbenayoun ",
"Hi @RaiAmanRai thanks for reporting this issue! Would you be able to share a reproducible code snippet that also shows the type of inputs you're feeding to the exported model (e.g. with ONNX Runtime)?",
"Hi @lewtun , thanks for stopping by. The above mentioned code snippet was used to export the model into .onnx format. \r\n\r\nThe following code snippet was used to check the outputs of the model and its shape\r\n~~~\r\nimport onnxruntime as ort\r\nimport numpy as np\r\n\r\nort_session = ort.InferenceSession('C:/Users/Hp/zsc/onnx_deberta/model3.onnx')\r\n\r\ninputs = tokenizer(\"Using BERT in ONNX and we are doing this as a test to check the output shape!\", return_tensors=\"np\", return_token_type_ids=False)\r\n\r\ninputs['attention_mask'] = inputs['attention_mask'].astype(np.int64)\r\ninputs['input_ids'] = inputs['input_ids'].astype(np.int64)\r\noutputs = ort_session.run(onnx_outputs, dict(inputs))\r\noutputs[0].shape\r\n~~~\r\n\r\n Here, tokenizer is the same instance used above.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@lewtun @michaelbenayoun can you guys please look into this issue.\r\nThis has become a major issue in the development I am working on, and would request to resolve it.\r\n\r\nThank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I try to fix the same issue. Is there any solution about this issue",
"Hi @forrestfaraday , @RaiAmanRai ,\r\n\r\nThe support for the ONNX export is now under [`optimum.exporters.onnx`](https://github.com/huggingface/optimum/releases/tag/v1.5.0), and we actually support the export of Deberta.\r\n\r\nAll you need to do is installing `optimum`:\r\n\r\n```bash\r\npip install optimum\r\n```\r\n\r\nThen run the `optimum.exporters.onnx` CLI:\r\n\r\n```bash\r\npython -m optimum.exporters.onnx --model Narsil/deberta-large-mnli-zero-cls deberta_onnx/\r\n```",
"**Thanks for the explanation. Could you please help me with this error. Because I tried a lot of time this pipeline with different deberta models.** \r\n\r\n`hg_checkpoint = \"microsoft/deberta-v3-base\"\r\nsave_hg = \"tmp/hg_onnx/\"`\r\n\r\n**Load a model from transformers and export it to ONNX**\r\n`ort_model_hg = ORTModelForTokenClassification.from_pretrained(hg_checkpoint, from_transformers=True)\r\ntokenizer_hg = AutoTokenizer.from_pretrained(hg_checkpoint)`\r\n\r\n**Save the onnx model and tokenizer**\r\n`ort_model_hg.save_pretrained(save_hg)\r\ntokenizer_hg.save_pretrained(save_hg)`\r\n\r\n**Define the quantization methodology**\r\n`qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)\r\nquantizer_hg = ORTQuantizer.from_pretrained(ort_model_hg)`\r\n\r\n**Apply dynamic quantization on the model**\r\n\r\n`quantizer_hg.quantize(save_dir=save_hg, quantization_config=qconfig)\r\nfrom optimum.onnxruntime import ORTModelForTokenClassification\r\nfrom transformers import pipeline, AutoTokenizer`\r\n\r\n`model_hg = ORTModelForTokenClassification.from_pretrained(save_hg, file_name=\"model_quantized.onnx\")\r\ntokenizer_hg = AutoTokenizer.from_pretrained(save_hg)\r\npipeline_hg = pipeline(\"token-classification\", model=model_hg, tokenizer=tokenizer_hg, aggregation_strategy = 'first')\r\nresults = pipeline_hg(text)\r\nresults`\r\n\r\n**InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids**\r\n",
"I have no problems quantizing/optimizing the deberta model and then loading it. I am facing the below error when importing predict when using pipeline.\r\n\r\n**InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids**",
"Could you open a PR on the [optimum repo](https://github.com/huggingface/optimum/issues) please?\r\nWe will try to figure it out there!"
] | 1,651
| 1,668
| 1,656
|
NONE
| null |
### Feature request
I am trying to export a DeBERTa model, but since the current version of transformers[onnx] doesn't support the DeBERTa architecture, I am trying to do it by implementing a custom ONNX configuration. Although I am able to provide the required inputs, I am not getting the required output shape for the **Sequence Classification** task.
I also tried to use the approach below, but to no avail:
~~~
from collections import OrderedDict
from typing import Mapping
from pathlib import Path
from transformers.onnx import export
from transformers.onnx import OnnxConfig
from transformers import AutoConfig, AutoModel, AutoTokenizer
onnx_path = Path("C:/Users/Hp/zsc/onnx_deberta/model3.onnx")
class DebertaConfig(OnnxConfig):
@property
def inputs(self) -> Mapping[str, Mapping[int, str]]:
return OrderedDict(
[
("input_ids", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
)
config = AutoConfig.from_pretrained("Narsil/deberta-large-mnli-zero-cls")
base_model = AutoModel.from_pretrained("Narsil/deberta-large-mnli-zero-cls")
tokenizer = AutoTokenizer.from_pretrained("Narsil/deberta-large-mnli-zero-cls")
onnx_config = DebertaConfig(config, task="sequence-classification")
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
~~~
The `onnx_outputs` shape is (1, 10, 1024) instead of the expected (1, 3).
Is there any way to achieve this, or am I doing something wrong?
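For what it's worth, the shape mismatch follows from exporting the bare `AutoModel` (no classification head), so the traced graph returns the last hidden state. A minimal sketch of the axis specs a sequence-classification export would need (function names are assumptions, mirroring the snippet above):

```python
from collections import OrderedDict

def deberta_seq_cls_inputs():
    # dynamic axes for the encoder inputs, as in the snippet above
    return OrderedDict(
        [
            ("input_ids", {0: "batch", 1: "sequence"}),
            ("attention_mask", {0: "batch", 1: "sequence"}),
        ]
    )

def deberta_seq_cls_outputs():
    # a classification head emits (batch, num_labels) logits, so only the
    # batch axis is dynamic -- unlike the (batch, sequence, hidden) shape
    # of the base model's last hidden state
    return OrderedDict([("logits", {0: "batch"})])
```

With an `outputs` property returning such a mapping on the custom `OnnxConfig`, the remaining change would be to export `AutoModelForSequenceClassification.from_pretrained(...)` rather than `AutoModel.from_pretrained(...)`, so the exported graph includes the classification head.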
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16982/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16981
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16981/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16981/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16981/events
|
https://github.com/huggingface/transformers/pull/16981
| 1,218,224,208
|
PR_kwDOCUB6oc4265Fj
| 16,981
|
Skip RoFormer ONNX test if rjieba not installed
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"So this test is then currently just skipped on our ONNX tests? \r\n\r\nShould we maybe not better add `rjieba` to the test package to the test RoFormer (I couldn't find it in the `setup.py` or in the Docker file) cc @LysandreJik @sgugger ",
"It looks like RoFormer tokenization is completely untested yes, so this package should be added in the `\"testing\"` extra.",
"I agree we should include `rjieba` for the tests, unless there are reasons for not adding specific packages.\r\n\r\n### Further remark\r\nIf we could not add `rjieba`, it is not a good idea to add `@require_rjieba` for `test_pytorch_export`, otherwise this test won't be run for other models neither. In this case, I think we might need a specific way to skip this test for `RoFormer` (and others that require `rjieba`).",
"Thanks for the feedback - I'll add `rjieba` to our testing suite as well :)",
"Hey @sgugger @patrickvonplaten I'm hitting some peculiar issues with 2 of the slow tests of the RoFormer tokenizer. Would you mind taking a look and seeing if my decision to skip them is valid?",
"I'll let @patrickvonplaten decide as I know nothing on that model too :-)",
"There is a test dedicated to custom tokenizers with specific dependencies: https://github.com/huggingface/transformers/blob/main/.circleci/config.yml#L538\r\n\r\nIt installs `jieba` but not `rjieba`. Would it make sense to add it there? If you're testing for ONNX, it's very likely that it does not make sense as it's limited to tokenizer tests right now.",
"> There is a test dedicated to custom tokenizers with specific dependencies: https://github.com/huggingface/transformers/blob/main/.circleci/config.yml#L538\r\n> \r\n> It installs `jieba` but not `rjieba`. Would it make sense to add it there? If you're testing for ONNX, it's very likely that it does not make sense as it's limited to tokenizer tests right now.\r\n\r\nThanks for the tip! Done in [3cafcb2](https://github.com/huggingface/transformers/pull/16981/commits/3cafcb2e06bc7caf0eba2e03e817fedcf0cfe073)",
"Hey @patrickvonplaten @LysandreJik I think this PR is ready for a final pass :)\r\n\r\nThe failing test is unrelated to the PR itself (a failing Pegasus generate test)"
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
# What does this PR do?
This PR adds the `@require_rjieba` decorator to the slow ONNX tests to deal with the following error in our daily CI runs:
```
(line 164) ImportError: You need to install rjieba to use RoFormerTokenizer. See https://pypi.org/project/rjieba/ for installation.
```
~~I wasn't sure if `rjieba` should actually be installed in the GitHub workflow, but it doesn't seem to be the case for the RoFormer tests and so I omitted that for now.~~
Edit: I've added `rjieba` to the `"tests"` extras and also tested that the slow ONNX test passes when this dep is installed:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "roformer"
```
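The `require_*` skip pattern referenced above can be sketched as follows (this is a stand-in built on `unittest.skipUnless`, not transformers' actual helper):

```python
import importlib.util
import unittest

def require_package(name):
    """Factory for require_*-style decorators: skip the test unless `name`
    is importable (a sketch of the pattern, not the library's own code)."""
    def decorator(test_case):
        return unittest.skipUnless(
            importlib.util.find_spec(name) is not None,
            f"test requires {name}",
        )(test_case)
    return decorator

# e.g. a hypothetical equivalent of the decorator added in this PR:
require_rjieba = require_package("rjieba")
```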
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16981/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16981",
"html_url": "https://github.com/huggingface/transformers/pull/16981",
"diff_url": "https://github.com/huggingface/transformers/pull/16981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16981.patch",
"merged_at": 1651651450000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16980
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16980/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16980/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16980/events
|
https://github.com/huggingface/transformers/pull/16980
| 1,218,212,662
|
PR_kwDOCUB6oc4262jd
| 16,980
|
Remove masked image modeling from BEIT ONNX export
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi,\r\n\r\nThere's a reason I haven't added BEiT to the auto classes. It's because it can't be used with the run_mim.py script, because BEiT handles masked image modeling differently compared to the other ones (which do it similar to the way it's defined in SimMIM paper). So this may confuse users, maybe we should properly document it that BEiT is not the same as the other ones",
"Ah I see, but isn't a bit odd to exclude BEiT just because it isn't compatible with our example scripts? \r\n\r\nFor instance, is there anything fundamentally wrong with loading `BeitForMaskedImageModeling` via the autoclass if I'm rolling my own masked image modeling code?\r\n\r\nIf not, I'd prefer to keep BEIT in the autoclasses and put the warning inside the `run_mim.py` script if a user tries to run it with this architecture",
"Hmm maybe there is a fundamental issue with using BEiT in the autoclasses as I'm seeing the torch tests fail with:\r\n\r\n```\r\nself = BeitEmbeddings(\r\n (patch_embeddings): PatchEmbeddings(\r\n (projection): Conv2d(3, 32, kernel_size=(2, 2), stride=(2, 2))\r\n )\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n)\r\npixel_values = tensor([[[[7.7614e-01, 1.7656e-01, 6.0460e-01, ..., 3.9106e-01,\r\n 5.2019e-01, 8.9339e-01],\r\n [2.7568...1, 9.9367e-01],\r\n [9.4963e-01, 1.6943e-01, 9.7946e-01, ..., 1.9085e-01,\r\n 1.9910e-01, 4.6059e-02]]]])\r\nbool_masked_pos = tensor([[0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0]])\r\n\r\n def forward(self, pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.BoolTensor] = None) -> torch.Tensor:\r\n \r\n embeddings = self.patch_embeddings(pixel_values)\r\n batch_size, seq_len, _ = embeddings.size()\r\n \r\n cls_tokens = self.cls_token.expand(batch_size, -1, -1)\r\n if bool_masked_pos is not None:\r\n> mask_tokens = self.mask_token.expand(batch_size, seq_len, -1)\r\nE AttributeError: 'NoneType' object has no attribute 'expand'\r\n```",
"Well yeah that's because BEiT does masked image modeling by predicting visual tokens of a VQ-VAE, whereas the other ones predict pixel values (RGB) as in the [SimMIM paper](https://arxiv.org/abs/2111.09886). So I'm afraid BEiT cannot be added to this auto class.",
"OK thanks for the clarification. I'll remove this feature from the ONNX export and add a note to the BEiT docs :)"
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
# What does this PR do?
This PR removes masked image modeling from the list of supported features in the ONNX exporter. As explained by @NielsRogge, BEiT cannot be loaded with the `AutoModelForMaskedImageModeling` class due to:
> Well yeah that's because BEiT does masked image modeling by predicting visual tokens of a VQ-VAE, whereas the other ones predict pixel values (RGB) as in the [SimMIM paper](https://arxiv.org/abs/2111.09886). So I'm afraid BEiT cannot be added to this auto class.
I've also added a note in the BEiT docs for users who aren't aware of these details, and checked that the slow ONNX tests pass with
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s
```
Edit: we should merge this after #16981 to ensure the RoFormer tests pass first
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16980/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16980",
"html_url": "https://github.com/huggingface/transformers/pull/16980",
"diff_url": "https://github.com/huggingface/transformers/pull/16980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16980.patch",
"merged_at": 1651651524000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16979
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16979/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16979/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16979/events
|
https://github.com/huggingface/transformers/pull/16979
| 1,218,197,827
|
PR_kwDOCUB6oc426zTp
| 16,979
|
Added translation of installation.mdx to Portuguese Issue #16824
|
{
"login": "rzimmerdev",
"id": 35232794,
"node_id": "MDQ6VXNlcjM1MjMyNzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/35232794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rzimmerdev",
"html_url": "https://github.com/rzimmerdev",
"followers_url": "https://api.github.com/users/rzimmerdev/followers",
"following_url": "https://api.github.com/users/rzimmerdev/following{/other_user}",
"gists_url": "https://api.github.com/users/rzimmerdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rzimmerdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rzimmerdev/subscriptions",
"organizations_url": "https://api.github.com/users/rzimmerdev/orgs",
"repos_url": "https://api.github.com/users/rzimmerdev/repos",
"events_url": "https://api.github.com/users/rzimmerdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/rzimmerdev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm still working on translating the remaining files, although there's already some work done on three files.",
"Maybe ~@gante~ @omarespejel ? :smile: ",
"Obrigado @rzimmerdev! @sgugger, LGTM. Ready to merge and start the Portuguese docs 🤗\r\n\r\nI removed the preprocessing doc from this PR because it was not ready yet."
] | 1,651
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Creates folder pt in docs/source for translating documentation to Portuguese
Currently, only the installation.mdx file has been translated as of this PR.
Fixes issue #16824
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16979/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16979",
"html_url": "https://github.com/huggingface/transformers/pull/16979",
"diff_url": "https://github.com/huggingface/transformers/pull/16979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16979.patch",
"merged_at": 1652442945000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16978
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16978/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16978/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16978/events
|
https://github.com/huggingface/transformers/issues/16978
| 1,218,146,467
|
I_kwDOCUB6oc5Im3Cj
| 16,978
|
Data collator using in Parallel training & Disable to use DistributedDataParallel
|
{
"login": "CaoYiqingT",
"id": 45160643,
"node_id": "MDQ6VXNlcjQ1MTYwNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45160643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaoYiqingT",
"html_url": "https://github.com/CaoYiqingT",
"followers_url": "https://api.github.com/users/CaoYiqingT/followers",
"following_url": "https://api.github.com/users/CaoYiqingT/following{/other_user}",
"gists_url": "https://api.github.com/users/CaoYiqingT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaoYiqingT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaoYiqingT/subscriptions",
"organizations_url": "https://api.github.com/users/CaoYiqingT/orgs",
"repos_url": "https://api.github.com/users/CaoYiqingT/repos",
"events_url": "https://api.github.com/users/CaoYiqingT/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaoYiqingT/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Setting nproc_per_node=2 enables the multi-gpu training by ddp for the first 500 steps. But right after the 500th step, the same problem occurs as follows:\r\nTraceback (most recent call last):\r\n File \"base-Trainer.py\", line 67, in <module>\r\n main()\r\n File \"base-Trainer.py\", line 62, in main\r\n trainer.train()\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py\", line 1383, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py\", line 1475, in _maybe_log_save_evaluate\r\n tr_loss_scalar = self._nested_gather(tr_loss).mean().item()\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py\", line 2385, in _nested_gather\r\n tensors = distributed_concat(tensors)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py\", line 168, in distributed_concat\r\n dist.all_gather(output_tensors, tensor)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 1185, in all_gather\r\n work = _default_pg.allgather([tensor_list], [tensor])\r\nRuntimeError: All tensor operands to scatter/gather must have the same size\r\nTraceback (most recent call last):\r\n File \"base-Trainer.py\", line 67, in <module>\r\n main()\r\n File \"base-Trainer.py\", line 62, in main\r\n trainer.train()\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py\", line 1383, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py\", line 1475, in _maybe_log_save_evaluate\r\n tr_loss_scalar = self._nested_gather(tr_loss).mean().item()\r\n File 
\"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py\", line 2385, in _nested_gather\r\n tensors = distributed_concat(tensors)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py\", line 168, in distributed_concat\r\n dist.all_gather(output_tensors, tensor)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 1185, in all_gather\r\n work = _default_pg.allgather([tensor_list], [tensor])\r\nRuntimeError: All tensor operands to scatter/gather must have the same size\r\n 1%|▉ | 500/91677 [02:39<8:03:56, 3.14it/s]\r\nTraceback (most recent call last):\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py\", line 261, in <module>\r\n main()\r\n File \"/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py\", line 257, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/home/caoyq/.conda/envs/torch_cp37/bin/python', '-u', 'base-Trainer.py', '--local_rank=1', '--mode', 'Data', '--train_data_path', 'utils/dev-small.json', '--output_dir', 'outputs/Data_only_load_data/', '--do_train', '--per_device_train_batch_size', '4', '--save_steps', '100000', '--log_on_each_node', '0']' returned non-zero exit status 1.",
"Please use the [forums](https://discuss.huggingface.co/) to debug your code (which you should provide if you want people to be able to help you) as we keep issues for feature requests and identified bugs in the library.",
"Met the same problem. Have you fixed it? @CaoYiqingT ",
"@Jun-jie-Huang This problem happens when using the log service. I just close the log, the the Trainer runs well. But this is only a temporary measure. Wish it can help you.",
"@CaoYiqingT Thanks for your quick response! Closing the log works for me. 👍 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.12.0.dev0
- Platform: Linux-4.15.0-45-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Both were tried and both failed
```
### Who can help?
Library/Trainer: @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Code sample that reproduces the problem (with data): https://drive.google.com/file/d/1jFLV9Ir0Um3C6MTPjCgZMUKJtmasrQPT/view?usp=sharing
Terminal input:
For DDP:
CUDA_VISIBLE_DEVICES=5,6 python -m torch.distributed.launch base-Trainer.py --mode "Data" --train_data_path utils/dev-small.json --output_dir outputs/Data_only_load_data/ --do_train --per_device_train_batch_size 4 --save_steps 100000
For DataParallel:
CUDA_VISIBLE_DEVICES=5,6 python base-Trainer.py --mode "Data" --train_data_path utils/dev-small.json --output_dir outputs/Data_only_load_data/ --do_train --per_device_train_batch_size 4 --save_steps 100000
Some explanation of the task and files:
1. The task is MLM, with specified mask tokens, not randomly chosen.
2. The tokenizer is basically a roberta-base tokenizer, with an added special token [pron].
3. The Dataset returns one sample of data at a time, of type str or (str, str).
4. Tokenization and label creation are implemented in collater().
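A minimal sketch of the batching logic described in points 3 and 4 above (the tokenizer is abstracted as any callable with a Hugging Face tokenizer-like signature; the real collater() from the report is not shown, so this is an assumption):

```python
def collate(batch, tokenize):
    """Collate a batch whose items are either str or (str, str) pairs.

    `tokenize` stands in for a Hugging Face tokenizer called with
    padding=True, truncation=True, return_tensors="pt" -- per-batch
    padding is what keeps tensor shapes consistent within a batch.
    """
    if isinstance(batch[0], tuple):
        # pair inputs: split into two parallel lists of texts
        texts_a, texts_b = zip(*batch)
        return tokenize(list(texts_a), list(texts_b))
    # single-text inputs
    return tokenize(list(batch))
```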
error messages:
when using DataParallel:
/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
when using DDP:
1. Two GPU devices are set, but only one GPU is used.
2. The terminal output is as follows:
Traceback (most recent call last):
File "base-Trainer.py", line 67, in <module>
main()
File "base-Trainer.py", line 62, in main
trainer.train()
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 1475, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer.py", line 2385, in _nested_gather
tensors = distributed_concat(tensors)
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 168, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1185, in all_gather
work = _default_pg.allgather([tensor_list], [tensor])
RuntimeError: All tensor operands to scatter/gather must have the same size
0%|▎ | 500/183354 [02:03<12:32:13, 4.05it/s]
Traceback (most recent call last):
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/home/caoyq/.conda/envs/torch_cp37/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/caoyq/.conda/envs/torch_cp37/bin/python', '-u', 'base-Trainer.py', '--local_rank=0', '--mode', 'Data', '--train_data_path', 'utils/dev-small.json', '--output_dir', 'outputs/Data_only_load_data/', '--do_train', '--per_device_train_batch_size', '4', '--save_steps', '100000']' returned non-zero exit status 1.
### Expected behavior
```shell
1. I want to use DDP, not just DP.
2. From the error messages given above, my data collator may have problems. It would be great if you could tell me what's wrong and how I can fix it. If there is a sample I can follow, please let me know.
Best wishes
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16978/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16977
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16977/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16977/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16977/events
|
https://github.com/huggingface/transformers/pull/16977
| 1,218,082,176
|
PR_kwDOCUB6oc426alq
| 16,977
|
Update README_zh-hans.md
|
{
"login": "tarzanwill",
"id": 15139679,
"node_id": "MDQ6VXNlcjE1MTM5Njc5",
"avatar_url": "https://avatars.githubusercontent.com/u/15139679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarzanwill",
"html_url": "https://github.com/tarzanwill",
"followers_url": "https://api.github.com/users/tarzanwill/followers",
"following_url": "https://api.github.com/users/tarzanwill/following{/other_user}",
"gists_url": "https://api.github.com/users/tarzanwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tarzanwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarzanwill/subscriptions",
"organizations_url": "https://api.github.com/users/tarzanwill/orgs",
"repos_url": "https://api.github.com/users/tarzanwill/repos",
"events_url": "https://api.github.com/users/tarzanwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/tarzanwill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16977/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16977",
"html_url": "https://github.com/huggingface/transformers/pull/16977",
"diff_url": "https://github.com/huggingface/transformers/pull/16977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16977.patch",
"merged_at": 1651244703000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16976/events
|
https://github.com/huggingface/transformers/issues/16976
| 1,217,969,574
|
I_kwDOCUB6oc5ImL2m
| 16,976
|
Bug: Finetuning large models resume checkpoint error
|
{
"login": "lorr1",
"id": 57237365,
"node_id": "MDQ6VXNlcjU3MjM3MzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorr1",
"html_url": "https://github.com/lorr1",
"followers_url": "https://api.github.com/users/lorr1/followers",
"following_url": "https://api.github.com/users/lorr1/following{/other_user}",
"gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorr1/subscriptions",
"organizations_url": "https://api.github.com/users/lorr1/orgs",
"repos_url": "https://api.github.com/users/lorr1/repos",
"events_url": "https://api.github.com/users/lorr1/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorr1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Indeed, I saw that yesterday and am working on a fix.",
"Should be fixed by the PR mentioned above :-)",
"Thanks!!"
] | 1,651
| 1,651
| 1,651
|
NONE
| null |
When finetuning a large model (e.g. Eleuther 6B), you shard the checkpoints upon saving [here](https://github.com/huggingface/transformers/blob/c79bbc3ba54a81dab2eac13d89f264ed64cb2460/src/transformers/modeling_utils.py#L193). However, upon resuming from the checkpoint (and when loading the best checkpoint after training), you check whether there is a valid checkpoint assuming the weights are not sharded [here](https://github.com/huggingface/transformers/blob/dced262409177586bb510b6b724c762fb89da0e8/src/transformers/trainer.py#L1196). This causes an error upon resuming training.
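A minimal sketch of the kind of check the report is asking for: a sharded save writes an index file next to the shards, so a resume path should accept either the single weights file or the shard index. The file-name constants mirror those used by `transformers`, but the helper itself (`has_valid_checkpoint`) is hypothetical, not the library's API.

```python
import os

WEIGHTS_NAME = "pytorch_model.bin"                    # single-file checkpoint
WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json"   # index written for sharded checkpoints


def has_valid_checkpoint(folder: str) -> bool:
    """Treat a folder as a valid checkpoint if it holds either a
    single weights file or a sharded-checkpoint index."""
    return os.path.isfile(os.path.join(folder, WEIGHTS_NAME)) or os.path.isfile(
        os.path.join(folder, WEIGHTS_INDEX_NAME)
    )
```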
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16976/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16975/events
|
https://github.com/huggingface/transformers/issues/16975
| 1,217,893,178
|
I_kwDOCUB6oc5Il5M6
| 16,975
|
Trainer: TypeError: an integer is required (got type NoneType)
|
{
"login": "loretoparisi",
"id": 163333,
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loretoparisi",
"html_url": "https://github.com/loretoparisi",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Could be this issue related to this **[SF](https://stackoverflow.com/questions/70699247/typeerror-an-integer-is-required-got-type-nonetype)** answer?\r\n\r\nThe dataset looks like\r\n```\r\nDataset({\r\n features: ['label', 'text', 'input_ids', 'token_type_ids', 'attention_mask'],\r\n num_rows: 8256315\r\n})\r\n```\r\n\r\nand features\r\n\r\n```\r\n{'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),\r\n 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),\r\n 'label': ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 
'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None),\r\n 'text': Value(dtype='string', id=None),\r\n 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}\r\n```",
"[UPDATE]\r\nI have changed the tokenize function like\r\n\r\n```python\r\ndef tokenize_function(batch):\r\n tokens = tokenizer(batch['text'], padding=\"max_length\", truncation=True, max_length=128)\r\n tokens['label'] = features[\"label\"].str2int(batch['label'])\r\n return tokens\r\ntokenized_datasets = sentences.map(tokenize_function, batched=True)\r\n```\r\n\r\nand removed the mapping as defined above, but now I'm facing a `None` label issue:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n[<ipython-input-39-3f04e6ec6f6e>](https://localhost:8080/#) in <module>()\r\n 14 tokens['label'] = features[\"label\"].str2int(batch['label']) if batch[\"label\"] is not None else None\r\n 15 return tokens\r\n---> 16 tokenized_datasets = sentences.map(tokenize, batched=True)\r\n\r\n10 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)\r\n 852 if value not in self._str2int:\r\n 853 value = str(value).strip()\r\n--> 854 output.append(self._str2int[str(value)])\r\n 855 else:\r\n 856 # No names provided, try to integerize\r\n\r\nKeyError: 'None'\r\n```",
"Solved filtering `None` rows\r\n\r\n```python\r\nsentences = sentences.filter(lambda example: example['label'] is not None and example['text'] is not None)\r\n```\r\n\r\nand slightly changing the `tokenizer`\r\n\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\n\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",
\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\")\r\nsentences = sentences.filter(lambda example: 
example['label'] is not None and example['text'] is not None)\r\nsentences = sentences.shuffle()\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_name = 'microsoft/xtremedistil-l6-h256-uncased'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\ndef tokenize(batch):\r\n tokens = tokenizer(batch['text'], padding=\"max_length\", truncation=True, max_length=128)\r\n tokens['label'] = features[\"label\"].str2int(batch['label'])\r\n return tokens\r\ntokenized_datasets = sentences.map(tokenize, batched=True)\r\n\r\nfull_train_dataset = tokenized_datasets[\"train\"]\r\nfull_eval_dataset = tokenized_datasets[\"test\"]\r\n\r\nimport torch\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\nprint(device)\r\n\r\nfrom transformers import AutoModelForSequenceClassification\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)\r\nmodel = model.to(device)\r\n\r\nimport numpy as np\r\nfrom datasets import load_metric\r\n\r\nmetric = load_metric(\"accuracy\")\r\ndef compute_metrics(eval_pred):\r\n print(eval_pred)\r\n logits, labels = eval_pred\r\n predictions = np.argmax(logits, axis=-1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\nfrom transformers import TrainingArguments\r\ntraining_args = TrainingArguments(\"test_trainer\",\r\n per_device_train_batch_size=128, \r\n num_train_epochs=24,learning_rate=3e-05,\r\n evaluation_strategy=\"epoch\")\r\nfrom transformers import Trainer\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=full_train_dataset,\r\n eval_dataset=full_eval_dataset,\r\n compute_metrics=compute_metrics,\r\n)\r\n```"
] | 1,651
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@lys
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset,Features,Value,ClassLabel
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
from transformers import TrainingArguments
import numpy as np
from datasets import load_metric
import torch
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln
","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
download_mode="force_redownload"
)
print(sentences)
# You can make this part faster with num_proc=<some int>
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
sentences = sentences.shuffle()
model_name = 'microsoft/xtremedistil-l6-h256-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=128)
tokenized_datasets = sentences.map(tokenize_function, batched=True)
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
model = model.to(device)
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
print(eval_pred)
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
training_args = TrainingArguments("test_trainer",
per_device_train_batch_size=128,
num_train_epochs=24,learning_rate=3e-05,
evaluation_strategy="epoch")
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=full_train_dataset,
eval_dataset=full_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
Stack trace:
```
***** Running training *****
  Num examples = 8256315
  Num Epochs = 24
  Instantaneous batch size per device = 128
  Total train batch size (w. parallel, distributed & accumulation) = 128
  Gradient Accumulation steps = 1
  Total optimization steps = 1548072
 [ 4942/1548072 50:35 < 263:23:43, 1.63 it/s, Epoch 0.08/24]
Epoch | Training Loss | Validation Loss
-- | --
Saving model checkpoint to test_trainer/checkpoint-500
Configuration saved in test_trainer/checkpoint-500/config.json
Model weights saved in test_trainer/checkpoint-500/pytorch_model.bin
Saving model checkpoint to test_trainer/checkpoint-1000
Configuration saved in test_trainer/checkpoint-1000/config.json
Model weights saved in test_trainer/checkpoint-1000/pytorch_model.bin
Saving model checkpoint to test_trainer/checkpoint-1500
...
Saving model checkpoint to test_trainer/checkpoint-4500
Configuration saved in test_trainer/checkpoint-4500/config.json
Model weights saved in test_trainer/checkpoint-4500/pytorch_model.bin
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-10-3435b262f1ae>](https://localhost:8080/#) in <module>()
----> 1 trainer.train()
5 frames
[/usr/local/lib/python3.7/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in torch_default_data_collator(features)
113 label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
114 dtype = torch.long if isinstance(label, int) else torch.float
--> 115 batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
116 elif "label_ids" in first and first["label_ids"] is not None:
117 if isinstance(first["label_ids"], torch.Tensor):
TypeError: an integer is required (got type NoneType)
```
### Expected behavior
```shell
training complete successfully.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16975/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16974/events
|
https://github.com/huggingface/transformers/pull/16974
| 1,217,840,509
|
PR_kwDOCUB6oc425m7W
| 16,974
|
TF: XLA bad words logits processor and list of processors
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 @patrickvonplaten I'm stuck on the ngram logits processor, so I'd like to request your suggestions regarding what to try out next :D The bad words logits processor is ready and XLA-compatible.\r\n\r\nContext:\r\n1. Without XLA, it works well;\r\n1. With XLA, yields incorrect outputs (it masks the wrong tokens in some cases). It is not a CPU/GPU thing -- it has the same output regardless of the hardware;\r\n2. The XLA/non-XLA mismatch is at the output of `_calc_row_banned_ngram_tokens`, which gets the tokens that should be banned for each row;\r\n3. All intermediary variables I was able to pull out had the same contents. However, if I try to pull out all ngrams, I get a core dumped on XLA 🤔 \r\n\r\nThings I've tried (without any symptom change):\r\n1. The current implementation is a `tf.while_loop` with `tf.TensorArray`. On https://github.com/huggingface/transformers/pull/16974/commits/ddc89115e88a24e3fd79e210ef3f4b9e51ba54c7, we can see my original implementation with a `tf.map_fn` (which is closer to the original code). Both versions have the exact same symptoms described above, and return the same errors for the same inputs when XLA is on (!);\r\n2. Pulling the initialization of the `tf.TensorArray` to the start of `__call__`, pass `ngram_size` as an argument, and use `tf.function` as a decorator to `__call__`. The two first changes are to attempt a retrace trigger, the last one to rule out problems associated with attempting to compile a class instance (as opposed to a function);\r\n3. Using `tf.shape` instead of `tensor.shape`, as the former is more suited for symbolic tensors;\r\n4. Using batches with a single row as input;\r\n5. Looking for other ways to implement the sliding window on the inputs (i.e. getting the ngrams), with no success.",
"I'd be very much in favor of just not converting the `ngram` Processor. I don't think it's a necessary requirement to publish the new TF generate method. Let's maybe leave this as a hard second issue in case the community is very interested in this feature. \r\n\r\nI think it's now more important to think about how to advertise, document XLA TF generate well and not loose too much time on this. ",
"Also not that many models use this processor (only know of BART and T5 for some summarization tasks) ",
"Agree that it's not necessary to convert this one, but examining it, I suspect that there are some sneaky changes in output size depending on inputs, and XLA is struggling to deal with it. It seems very tough to convert to XLA, but if we decide we need it later let me know and I'll do my best to dig into it.",
"Great 👍 I'm going to revert that one, add a TODO pointing at this PR, add a few final tests for the list of logits processors with XLA, and will ping you back.",
"@Rocketknight1 @patrickvonplaten ready for review"
] | 1,651
| 1,651
| 1,651
|
MEMBER
| null |
# What does this PR do?
This PR makes the `bad_words` logits processor XLA-compatible. As per the discussion below, I was unable to convert the `ngrams` one -- added an exception and a TODO.
Also makes a change to the list of processors -- XLA raised issues when the processors had different arguments, so I had to add `cur_len` to all processors. After the change, the list wrapper is also compatible with XLA.
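A framework-agnostic sketch of the design choice described above: when every processor in the list takes the same `(input_ids, scores, cur_len)` arguments, the wrapper can apply them in one uniform loop, which is the property that lets the whole list be traced/compiled as a single function. This is schematic plain Python, not the actual TF implementation.

```python
class LogitsProcessorList(list):
    """Schematic wrapper: all processors share one call signature,
    so the list itself behaves like a single composable processor."""

    def __call__(self, input_ids, scores, cur_len):
        # Each processor transforms the scores in turn; the uniform
        # signature means no per-processor argument dispatch is needed.
        for processor in self:
            scores = processor(input_ids, scores, cur_len)
        return scores
```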
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16974/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16974",
"html_url": "https://github.com/huggingface/transformers/pull/16974",
"diff_url": "https://github.com/huggingface/transformers/pull/16974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16974.patch",
"merged_at": 1651244098000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16973/events
|
https://github.com/huggingface/transformers/pull/16973
| 1,217,793,435
|
PR_kwDOCUB6oc425czv
| 16,973
|
Update check_models_are_tested to deal with Windows path
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,651
| 1,651
| 1,651
|
COLLABORATOR
| null |
# What does this PR do?
`TEST_FILES_WITH_NO_COMMON_TESTS` contains forward slash like `mt5/test_modeling_flax_mt5.py`.
The condition `if test_file in TEST_FILES_WITH_NO_COMMON_TESTS:` in `check_models_are_tested` therefore reports spurious failures on Windows, like
```
camembert\test_modeling_camembert.py should define `all_model_classes` to apply common tests
```
This PR uses `test_file.replace(os.sep, "/")` to make it work on Windows too 😄
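A minimal sketch of the normalization the PR describes: converting the OS-specific separator back to `/` before the membership test so Windows back-slash paths match the forward-slash entries. The helper name `is_exempt` is hypothetical; only the `replace(os.sep, "/")` idiom comes from the PR.

```python
import os

# Entries are stored with forward slashes, as in the utils script.
TEST_FILES_WITH_NO_COMMON_TESTS = {"mt5/test_modeling_flax_mt5.py"}


def is_exempt(test_file: str) -> bool:
    # Normalize the OS path separator so a Windows path like
    # "mt5\\test_modeling_flax_mt5.py" still matches the set entry.
    return test_file.replace(os.sep, "/") in TEST_FILES_WITH_NO_COMMON_TESTS
```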
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16973/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16973",
"html_url": "https://github.com/huggingface/transformers/pull/16973",
"diff_url": "https://github.com/huggingface/transformers/pull/16973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16973.patch",
"merged_at": 1651152717000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16972/events
|
https://github.com/huggingface/transformers/issues/16972
| 1,217,738,179
|
I_kwDOCUB6oc5IlTXD
| 16,972
|
Issue in reformer: Reformer doesn't depend on its key feature -- `LSHSelfAttention`
|
{
"login": "leo-liuzy",
"id": 11146950,
"node_id": "MDQ6VXNlcjExMTQ2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11146950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-liuzy",
"html_url": "https://github.com/leo-liuzy",
"followers_url": "https://api.github.com/users/leo-liuzy/followers",
"following_url": "https://api.github.com/users/leo-liuzy/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-liuzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-liuzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-liuzy/subscriptions",
"organizations_url": "https://api.github.com/users/leo-liuzy/orgs",
"repos_url": "https://api.github.com/users/leo-liuzy/repos",
"events_url": "https://api.github.com/users/leo-liuzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-liuzy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @leo-liuzy,\r\n\r\nSorry what exactly is the issue here with Reformer? Is the training not working?",
"Hi @patrickvonplaten , I am evaluating the released model trained on Crime and Punishment (with examples randomly grabbed from the book). I found that if I remove the LSHSelfAttention output when computing perplexity, the perplexity doesn't change much. But if I remove LocalSelfAttention, the PPL goes up by a lot. So I wonder whether this is caused by a bug (possibly even during training) in the codebase, or whether it's intrinsic to this specific Reformer's model structure -- (`attn_layers = [\"lsh\", \"local\", \"lsh\", \"local\", \"lsh\", \"local\"]`)",
"I'm not really sure @leo-liuzy sadly - I've never removed the local layers when training the model. Maybe you can also try asking on https://discuss.huggingface.co/ :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,651
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
conda create -n reformer-issue python=3.8 -y
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install -e . # install from source
python check_reformer.py
```
Make the file changes (very minimal) as in my PR here: https://github.com/leo-liuzy/transformers/pull/2
The changes are located [here](https://github.com/leo-liuzy/transformers/pull/2/files#diff-4f979561f9762bfd9333c74331153c4ee974120a4cf3c28052a29ec7e2c15ed7R1482)
I made my fork from huggingface main two days ago.
I also experimented with removing `LocalSelfAttention`, and the perplexity changes dramatically, especially with `long_inputs_lst` (in the file). When using only `LSHSelfAttention`, increasing the number of hashes doesn't help.
My question is: **could this be caused by an innocent bug introduced when porting Reformer's official code? Or is this intrinsic to the Reformer?**
I know the Reformer paper reports a 20-layer model trained entirely with LSHSelfAttention that shows good performance; that's what confuses me further.
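The ablation used in the linked PR (zeroing one attention branch's residual contribution via `weight = 0 if isinstance(...)`) can be sketched generically; the helper below is a hypothetical illustration of the technique, not the actual Reformer internals:

```python
def gated_residual(x, sublayer, weight=1.0):
    # Residual connection with a scalar gate on the branch:
    # y = x + weight * f(x). With weight=0 the sublayer's output is
    # dropped from the forward pass, i.e. the branch is ablated.
    return [xi + weight * fi for xi, fi in zip(x, sublayer(x))]

# Toy sublayer standing in for an attention block.
double = lambda xs: [2.0 * v for v in xs]

gated_residual([1.0, 2.0], double, weight=0.0)  # branch zeroed: identity
gated_residual([1.0, 2.0], double, weight=1.0)  # normal residual sum
```

With `weight=0` the layer reduces to the identity on that branch, which is what makes per-layer-type ablation a cheap way to probe how much each attention type contributes.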
### Expected behavior
```shell
With `weight = 0 if isinstance(self.attention.self_attention, LSHSelfAttention) else 1`
No. hash: 1
Seq_len(43)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.614
ppl: 6.123
Seq_len(85)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 3.808
ppl: 14.006
Seq_len(135)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.230
ppl: 4.693
Seq_len(53)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.261
ppl: 4.792
Seq_len(47)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.646
ppl: 6.258
Seq_len(78)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.347
ppl: 5.087
Seq_len(26)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.712
ppl: 6.553
Seq_len(63)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 3.568
ppl: 11.858
Seq_len(147)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 0
bpd: 2.983
ppl: 7.907
With `weight = 1 if isinstance(self.attention.self_attention, LSHSelfAttention) else 1`
No. hash: 1
Seq_len(43)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.614
ppl: 6.123
Seq_len(85)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 3.808
ppl: 14.006
Seq_len(135)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.218
ppl: 4.651
Seq_len(53)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.261
ppl: 4.792
Seq_len(47)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.646
ppl: 6.258
Seq_len(78)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.347
ppl: 5.087
Seq_len(26)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.712
ppl: 6.553
Seq_len(63)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 3.568
ppl: 11.858
Seq_len(147)
Using LSHAttn:
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LocalSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
<class 'transformers.models.reformer.modeling_reformer.LSHSelfAttention'>: Y_1 = X_1 + f(X_2) * 1
bpd: 2.973
ppl: 7.850
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16972/timeline
|
completed
| null | null |