| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/17874
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17874/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17874/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17874/events
|
https://github.com/huggingface/transformers/pull/17874
| 1,284,144,525
|
PR_kwDOCUB6oc46VYau
| 17,874
|
Fix TF GPT2 `test_onnx_runtime_optimize`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
Fix TF GPT2 `test_onnx_runtime_optimize` by skipping 2 test classes.
Current error:
```
tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_onnx_runtime_optimize
(line 372) onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. In Node, ("tfgpt2_for_sequence_classification_27/GatherV2", GatherV2, "", -1) : ("tfgpt2_for_sequence_classification_27/score/Tensordot:0": tensor(float),"tfgpt2_for_sequence_classification_27/sub:0": tensor(int32),"tfgpt2_for_sequence_classification_27/GatherV2/axis:0": tensor(int32),) -> ("logits": tensor(float),) , Error No Op registered for GatherV2 with domain_version of 10
```
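A minimal sketch of skipping a whole test class (the class name and skip reason below are illustrative; the actual PR edits `tests/models/gpt2/test_modeling_tf_gpt2.py`):

```python
import io
import unittest

# Hypothetical example: decorating a test class with unittest.skip marks
# every test in it as skipped, so the failing ONNX Runtime check never runs.
# The class name and reason are illustrative, not the PR's actual code.
@unittest.skip("GatherV2 has no ONNX kernel registered for opset 10")
class TFGPT2OnnxTest(unittest.TestCase):
    def test_onnx_runtime_optimize(self):
        raise AssertionError("this body is never executed")

suite = unittest.TestLoader().loadTestsFromTestCase(TFGPT2OnnxTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(len(result.skipped), len(result.failures))  # 1 skipped, 0 failures
```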
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17874/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17874",
"html_url": "https://github.com/huggingface/transformers/pull/17874",
"diff_url": "https://github.com/huggingface/transformers/pull/17874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17874.patch",
"merged_at": 1656314850000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17873
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17873/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17873/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17873/events
|
https://github.com/huggingface/transformers/pull/17873
| 1,284,007,141
|
PR_kwDOCUB6oc46U6bt
| 17,873
|
[WIP] Generate docs
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still having hope of merging this one day :crossed_fingers: ",
"Ok sadly really not finding the time at the moment :cry: @gante @ArthurZucker @sanchit-gandhi could it maybe be interesting for one of you to take it over? ",
"Also cc @sgugger just FYI ",
"You don't need to cc me, I see everything ;-)\r\n\r\n\r\n",
"I could look into this in a couple of weeks if you want to offload it! Reassuring to know @sgugger has assumed the role of Hugging Face's [Big Brother](https://en.wikipedia.org/wiki/Big_Brother_(Nineteen_Eighty-Four)) 👀",
"I think I can take care of it maybe next week 😄 Adding it to my list ! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,656
| 1,669
| 1,669
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17873/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17873",
"html_url": "https://github.com/huggingface/transformers/pull/17873",
"diff_url": "https://github.com/huggingface/transformers/pull/17873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17873.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17872
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17872/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17872/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17872/events
|
https://github.com/huggingface/transformers/pull/17872
| 1,283,923,123
|
PR_kwDOCUB6oc46UoSv
| 17,872
|
Fix test_inference_instance_segmentation_head
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
The current `test_inference_instance_segmentation_head` (in `MaskFormerModelIntegrationTest`) fails in CI.
The expected slice
```python
[[-1.3738, -1.7725, -1.9365], [-1.5978, -1.9869, -2.1524], [-1.5796, -1.9271, -2.0940]]
```
has precision `4`, and the `atol` argument (`TOLERANCE`) is also `1e-4`, which puts the observed difference right at the boundary. This is **likely** the cause of the test failures. Giving the expected values more precision should fix the issue.
```bash
(Pdb) diff1 # (with original expected values)
0.0001039505
(Pdb) diff2 # (with more precision)
1.4066696e-05
```
(However, I am not able to get the test failure with the original setting, launched manually in a GCP VM.)
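A small numeric illustration of the boundary effect described above (the values are made up, not the actual MaskFormer outputs):

```python
# Made-up numbers to illustrate why 4-decimal expected values interact badly
# with atol=1e-4: rounding alone can consume ~5e-5 of the tolerance budget.
true_value = 1.37374895               # assumed "true" reference output
expected_4dp = round(true_value, 4)   # 1.3737, rounding error ~4.9e-5
expected_full = true_value            # full-precision expected value
observed = true_value + 6e-5          # small hardware/runtime drift

atol = 1e-4
diff_rounded = abs(observed - expected_4dp)   # rounding error + drift
diff_precise = abs(observed - expected_full)  # drift only

print(diff_rounded > atol)   # True: the comparison fails at the boundary
print(diff_precise > atol)   # False: more precise expected values pass
```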
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17872/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17872",
"html_url": "https://github.com/huggingface/transformers/pull/17872",
"diff_url": "https://github.com/huggingface/transformers/pull/17872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17872.patch",
"merged_at": 1656092205000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17871
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17871/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17871/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17871/events
|
https://github.com/huggingface/transformers/pull/17871
| 1,283,913,156
|
PR_kwDOCUB6oc46UmIP
| 17,871
|
[CodeGen] support device_map="auto" for sharded checkpoints
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
MEMBER
| null |
# What does this PR do?
This PR adds the `_no_split_modules` attribute to `CodeGenPreTrainedModel` so that sharded checkpoints can be loaded with `device_map="auto"`.
cc @rooa
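For context, a minimal sketch of the kind of attribute involved (a stand-in class, not the real `transformers` code; the module name mirrors CodeGen's transformer block and is an assumption here):

```python
# Stand-in sketch, not the actual transformers class. accelerate's
# device_map="auto" placement keeps every module whose class name appears
# in _no_split_modules on a single device, so a block's weights are never
# split across devices (which would break computations inside the block).
class CodeGenPreTrainedModel:
    _no_split_modules = ["CodeGenBlock"]  # assumed block name, for illustration

print(CodeGenPreTrainedModel._no_split_modules)
```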
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17871/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17871",
"html_url": "https://github.com/huggingface/transformers/pull/17871",
"diff_url": "https://github.com/huggingface/transformers/pull/17871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17871.patch",
"merged_at": 1656086791000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17870
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17870/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17870/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17870/events
|
https://github.com/huggingface/transformers/pull/17870
| 1,283,881,713
|
PR_kwDOCUB6oc46UfUt
| 17,870
|
Properly get tests deps in test_fetcher
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
With the move of the tests, the test fetcher no longer properly converts relative imports from other tests to the corresponding test files. This PR fixes that problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17870/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17870",
"html_url": "https://github.com/huggingface/transformers/pull/17870",
"diff_url": "https://github.com/huggingface/transformers/pull/17870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17870.patch",
"merged_at": 1656104207000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17869
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17869/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17869/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17869/events
|
https://github.com/huggingface/transformers/pull/17869
| 1,283,829,566
|
PR_kwDOCUB6oc46UUBT
| 17,869
|
Fix add new model like frameworks
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
When selecting specific frameworks with `transformers-cli add-new-model-like`, all objects are still added to the main init. This is due to the change in all our inits and the command not being properly adapted.
This PR will fix it!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17869/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17869",
"html_url": "https://github.com/huggingface/transformers/pull/17869",
"diff_url": "https://github.com/huggingface/transformers/pull/17869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17869.patch",
"merged_at": 1656349655000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17868
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17868/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17868/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17868/events
|
https://github.com/huggingface/transformers/issues/17868
| 1,283,824,281
|
I_kwDOCUB6oc5MhZqZ
| 17,868
|
Calling `generate` on a `T5ForConditionalGeneration` returns `n` tokens but `n-1` scores
|
{
"login": "ClementRomac",
"id": 8899812,
"node_id": "MDQ6VXNlcjg4OTk4MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8899812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ClementRomac",
"html_url": "https://github.com/ClementRomac",
"followers_url": "https://api.github.com/users/ClementRomac/followers",
"following_url": "https://api.github.com/users/ClementRomac/following{/other_user}",
"gists_url": "https://api.github.com/users/ClementRomac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ClementRomac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ClementRomac/subscriptions",
"organizations_url": "https://api.github.com/users/ClementRomac/orgs",
"repos_url": "https://api.github.com/users/ClementRomac/repos",
"events_url": "https://api.github.com/users/ClementRomac/events{/privacy}",
"received_events_url": "https://api.github.com/users/ClementRomac/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, @ClementRomac \r\n\r\nIf you look the [config.json](https://huggingface.co/t5-small/blob/main/config.json) file of the `t5-small` model, you will see it uses `pad_token_id` as `decoder_start_token_id` (both are `0`).\r\n\r\nThe `scores` having length `len(sequence) - 1` is expected. Think it this way, \r\n\r\n```python\r\ngenerated sequence = [decoder_start_token_id, token_1, token_2]\r\n```\r\n\r\nThe scores is/are:\r\n\r\n- score for generating `token_1` while we have `[decoder_start_token_id]`\r\n- score for generating `token_2` while we have `[decoder_start_token_id, token_1]`\r\n\r\nThis is also documented in [generation_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py), for example\r\n\r\n(`SampleEncoderDecoderOutput`)\r\nhttps://github.com/huggingface/transformers/blob/afb71b672679e57449085e4955a321db8e5705b9/src/transformers/generation_utils.py#L172\r\nor\r\n(`GreedySearchEncoderDecoderOutput`)\r\nhttps://github.com/huggingface/transformers/blob/afb71b672679e57449085e4955a321db8e5705b9/src/transformers/generation_utils.py#L101\r\n\r\netc.",
"Hey @ydshieh,\r\n\r\nThanks for your answer, it makes sense!\r\n\r\nCould we consider documenting it a little bit more somewhere? I don't have any clear idea on where to put it but to be honest this behaviour can appear a bit confusing when looking at the documentation.\r\n\r\nFor instance, in [generation_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py), it is mentioned (both for `SampleEncoderDecoderOutput` and `GreedySearchEncoderDecoderOutput`):\r\n\r\n1. that `sequence_length` should be up to `max_length` (however we get `max_length +1` in the above example)\r\n2. that `scores` will have size `max_length-1` (however we get `max_length` scores in the above example) \r\n\r\nhttps://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/generation_utils.py#L169-L175",
"@ClementRomac ,\r\n\r\nI think it is because you use `max_new_tokens=15,` instead of the argument `max_length.`\r\n\r\nSee https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/generation_utils.py#L925-L929\r\n\r\nI think it is quite well documented. It is possible to make it even more explicit to include `max_new_tokens` regarding the output format.\r\n\r\n@patrickvonplaten Do you think we should add this in `GreedySearchEncoderDecoderOutput` etc ..?",
"Always happy to make the generate docs more explicit!\r\n\r\nAlso gently pinging @gante here for feedback :-) ",
"Note: Some docstrings associated with `scores` have \r\n\r\n```\r\n`(max_length-1,)`-shaped tuple of `torch.FloatTensor`\r\n```\r\n\r\nwhile others have \r\n```\r\n`(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor`\r\n```\r\n\r\ndepending on whether the model is an encoder-decoder or a decoder-only (respectively)\r\n______________________\r\n\r\nI see two minor problems with the current docstrings:\r\n1. Generation may stop before we generate `max_length` tokens (or `max_new_tokens` new tokens);\r\n2. We are pushing away from `max_length` towards `max_new_tokens`. \r\n\r\nAs such, it would be nice to improve the docs to address these two issues! Since the previous sentence in the docstring contains `(...) at each generation step`, perhaps something like this:\r\n\r\n```\r\nTuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element per generation step),\r\n```\r\n\r\nThe complete docstring would be:\r\n\r\n```\r\nscores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`):\r\n Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)\r\n at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element per \r\n generation step), with each tensor of shape `(batch_size, config.vocab_size)`).\r\n```\r\n\r\nWDYT?",
"@gante Looks good to me, as long as we keep `batch_size*num_return_sequences` instead of `batch_size` wherever it applies.",
"Very much agree with @gante here!",
"Assigned to me to update the docstring for all three frameworks ",
"@ClementRomac Hi! I have been trying to calculate the probability of a sequence but am not sure how to do it. As you mentioned calculating the probability, can you please tell me how to do it?\r\n\r\nI have the scores for each step of the generate method, but not sure how to use them.\r\n\r\nWhat I am doing is, given a premise and a hypothesis, I am trying to identify whether they are entailment, contradiction, or, neutral. I am getting the classification correctly, I just don't know how to calculate the probability of the sequence being **entailment**\r\n\r\n```python\r\ndef is_entailment(premise, hypothesis):\r\n entailment_premise = premise\r\n entailment_hypothesis = hypothesis\r\n\r\n token_output = tokenizer(\"mnli premise: \" + entailment_premise + \" hypothesis: \" + entailment_hypothesis,\r\n return_tensors=\"pt\", return_length=True)\r\n input_ids = token_output.input_ids\r\n\r\n output = model.generate(input_ids, output_scores=True, return_dict_in_generate=True, max_length=50)\r\n entailment_ids = output[\"sequences\"]\r\n\r\n entailment = tokenizer.decode(entailment_ids[0], skip_special_tokens=True)\r\n return entailment\r\n```"
] | 1,656
| 1,672
| 1,658
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten, @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
if __name__ == '__main__':
torch.manual_seed(0)
tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
input = tokenizer.encode("I enjoy walking with my cute dog", return_tensors='pt')
result = model.generate(
input,
max_new_tokens=15,
do_sample=True,
return_dict_in_generate=True,
output_scores=True,
)
print(len(result["scores"]))
for sequence in result["sequences"]:
print(len(sequence))
print(tokenizer.decode(sequence))
```
Output:
```
15
16
<pad> Ich, liebe es, mes lustig beim laufen
```
### Expected behavior
I would have expected to have up to 15 tokens (as `max_new_tokens=15`) and `len(result["scores"]) == len(result["sequences"][0])`. However, the size of the returned sequence of tokens is always `len(result["scores"]) + 1`. In addition, if `max_new_tokens` is reached we have `len(result["sequences"][0]) == max_new_tokens + 1`.
When looking at the decoded sequence, there is always a pad token at the beginning.
I don't know if this is necessarily a bug, but this behaviour is somewhat confusing, especially when trying to compute the probability of the sequence from the scores.
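A toy illustration (no model involved) of the off-by-one described above: the decoder start token is prepended to the output but no score is produced for it, so `len(sequences[0]) == len(scores) + 1`:

```python
# Toy reconstruction of generate()'s bookkeeping; token ids are made up.
decoder_start_token_id = 0       # t5-small reuses pad (id 0) as decoder start
sampled_tokens = [27, 339, 5]    # hypothetical ids sampled at each step

# One score entry per generation step: step i scores token_i given
# [decoder_start_token_id, token_1, ..., token_{i-1}].
scores = [f"logits_for_step_{i}" for i in range(len(sampled_tokens))]
sequence = [decoder_start_token_id] + sampled_tokens

print(len(sequence), len(scores))  # sequence is always one longer
```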
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17868/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17867
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17867/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17867/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17867/events
|
https://github.com/huggingface/transformers/issues/17867
| 1,283,820,271
|
I_kwDOCUB6oc5MhYrv
| 17,867
|
layoutxlm model can not convert to onnx
|
{
"login": "githublsk",
"id": 77612906,
"node_id": "MDQ6VXNlcjc3NjEyOTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/77612906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githublsk",
"html_url": "https://github.com/githublsk",
"followers_url": "https://api.github.com/users/githublsk/followers",
"following_url": "https://api.github.com/users/githublsk/following{/other_user}",
"gists_url": "https://api.github.com/users/githublsk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githublsk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githublsk/subscriptions",
"organizations_url": "https://api.github.com/users/githublsk/orgs",
"repos_url": "https://api.github.com/users/githublsk/repos",
"events_url": "https://api.github.com/users/githublsk/events{/privacy}",
"received_events_url": "https://api.github.com/users/githublsk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @lewtun",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@lewtun friendly ping!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,656
| 1,671
| 1,671
|
NONE
| null |
### Model description
I used LayoutXLM to train on my data for a downstream task. When I convert the trained model to ONNX using the Hugging Face layoutlmv2-to-onnx code, the problem below occurs. Can you give me some advice? It seems that concatenating two different types causes this problem, but I did not modify any code; I just ran XFUN for token classification, which confuses me a lot. I hope you can help.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
<img width="943" alt="企业微信截图_1656080658142" src="https://user-images.githubusercontent.com/77612906/175556272-af5e91d0-c76e-483f-ad24-95fa7690146e.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17867/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17867/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17866
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17866/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17866/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17866/events
|
https://github.com/huggingface/transformers/pull/17866
| 1,283,816,730
|
PR_kwDOCUB6oc46URQk
| 17,866
|
Bloom Optimize operations
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I won't merge this now since I saw that it broke some slow tests, will investigate that!",
"With the two proposed changes, all tests are now passing @younesbelkada :)",
"Thanks a lot @NouamaneTazi !! Amazing job 🔥 ",
"Let's merge this together as some improvements to make the inference faster\r\n- [x] create attn mask only once\r\n- [x] broadcast alibi only once instead of each time on the attention layer\r\n- [x] Remove the contiguous calls and test the model\r\n- [ ] Refactor the reshaping (check how it is done in BLOOM Flax) in the attention layer",
"Before merging, let's fix the code quality tests...",
"All tests should be passing now except for `BloomModelTest::test_simple_generation` \r\nIt seems the issue with this one comes from the fact the we now use `torch.bmm` instead of `torch.baddbmm` in [this line](https://github.com/younesbelkada/transformers/blob/773d8e780fea41b8a8f77bf2bccbfbeacc91d50d/src/transformers/models/bloom/modeling_bloom.py#L307)\r\n\r\nAnd I don't undestand what's happening here: (this only gives different outputs for fp16)\r\n```python\r\nb = torch.baddbmm(\r\n torch.zeros_like(sliced_alibi, dtype=torch.float16),\r\n query_layer.transpose(1, 0),\r\n key_layer.transpose(1, 0).transpose(1, 2),\r\n beta=1.0,\r\n alpha=1.0,\r\n)\r\nc = torch.baddbmm(\r\n sliced_alibi,\r\n query_layer.transpose(1, 0),\r\n key_layer.transpose(1, 0).transpose(1, 2),\r\n beta=1.0,\r\n alpha=1.0,\r\n) - sliced_alibi\r\nprint(b==c)\r\n```\r\n\r\ngives:\r\n```\r\ntensor([[[ True, True, True, True, False, True, True],\r\n [ True, True, True, True, False, False, False],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, False, False, True, False, False],\r\n [ True, True, False, False, True, False, False],\r\n [ True, True, True, False, False, True, True],\r\n [ True, False, False, False, True, False, True]],\r\n\r\n [[ True, False, True, True, True, True, False],\r\n [ True, True, True, True, True, False, False],\r\n [ True, False, True, False, False, True, True],\r\n [ True, True, False, True, False, True, False],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, False, True, True, True, False],\r\n [ True, False, True, True, True, True, True]],\r\n\r\n [[ True, True, True, False, True, True, True],\r\n [ True, True, True, True, False, False, True],\r\n [ True, True, True, False, True, False, True],\r\n [ True, False, True, True, False, False, True],\r\n [ True, False, True, False, True, True, True],\r\n [ True, True, True, True, True, True, False],\r\n [ True, False, True, False, False, True, True]],\r\n\r\n [[ True, True, True, 
False, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, False, True, True, False],\r\n [ True, True, True, True, False, True, False],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, True, False]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, False, True],\r\n [ True, True, True, False, False, True, True],\r\n [ True, False, True, True, True, False, True],\r\n [ True, True, False, True, False, False, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, False, True, True, True, False]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, False, True, False, False, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, False, False, True],\r\n [ True, True, False, False, False, True, True],\r\n [ True, False, True, False, False, False, True],\r\n [ True, True, True, False, False, True, True]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, True, False, False, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, True, False, True, True],\r\n [ True, False, True, True, True, 
True, True],\r\n [ True, True, False, True, True, True, True],\r\n [ True, False, False, True, False, True, False]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, False, True, True, True, False, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, False, True, True, True, True],\r\n [ True, False, False, True, True, True, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, True, True, False, False, True],\r\n [ True, False, True, True, True, True, True]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, True, True, True, True],\r\n [ True, True, True, True, True, False, True]],\r\n\r\n [[ True, True, True, True, False, True, True],\r\n [ True, True, True, False, True, False, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, True, True, True, True],\r\n [ True, True, True, True, False, False, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, False, False, False, True, False]],\r\n\r\n [[ True, True, True, False, False, False, False],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, False, False, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, False, True, True, True, True],\r\n [ True, True, True, True, True, True, False]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, False, True, True, False, True, 
True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, True, False, True, True],\r\n [ True, True, False, True, False, False, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, False, False]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, True, True, True, True, True]]], device='cuda:0')\r\n```\r\n\r\nAnd btw this is what we get when print `old_matmul_result == new_matmul_result`\r\n```\r\ntensor([[[ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, False, True, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, True, False, True, True, True],\r\n [ True, False, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, False, True, True]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, False, True, True, True, True, False],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, False, True, True, True, False],\r\n [ True, True, True, True, False, True, True],\r\n [ True, False, False, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, False, True, True, True, False],\r\n [ True, True, False, False, True, True, False],\r\n [ True, True, True, True, True, True, False],\r\n [ True, True, True, True, True, True, 
True],\r\n [ True, True, False, True, True, True, True],\r\n [ True, False, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, False, True, True, False, False, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, False, True, False, True, True, True],\r\n [ True, True, True, False, True, True, False]],\r\n\r\n [[ True, True, True, False, True, False, True],\r\n [ True, True, True, True, False, True, False],\r\n [ True, False, True, True, True, False, False],\r\n [ True, True, True, True, True, True, False],\r\n [ True, True, True, True, True, False, False],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, False, False, False, True]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, False, True, True, False],\r\n [ True, True, False, True, False, False, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, False, True, False, False, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, True, False, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, False, True, True, True],\r\n [ True, True, True, True, True, True, False],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, True, True, True, True, False],\r\n [ True, True, False, True, False, True, True],\r\n [ True, True, True, True, True, True, False],\r\n [ True, True, False, True, True, False, True],\r\n [ True, False, True, True, True, False, False],\r\n [ True, False, True, True, False, True, False],\r\n [ True, False, True, True, False, True, True]],\r\n\r\n [[ True, True, True, True, True, 
True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, False, True, True, True, True],\r\n [ True, True, True, True, True, True, False],\r\n [ True, False, True, True, False, True, False],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, False, True, False]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, False, True, True],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, False, True, True, False, True, False],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, False, False, True, True],\r\n [ True, False, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, True, False, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, True, True, True, True, True],\r\n [ True, False, True, True, True, False, True],\r\n [ True, False, True, True, True, False, True],\r\n [ True, True, True, False, True, True, False],\r\n [ True, True, True, True, True, True, False],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, False, True, False, True]],\r\n\r\n [[ True, True, False, True, True, False, False],\r\n [ True, False, True, False, True, True, False],\r\n [ True, True, True, True, True, True, True],\r\n [ True, False, True, False, False, True, True],\r\n [ True, True, True, True, False, True, False],\r\n [ 
True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True]],\r\n\r\n [[ True, True, True, True, True, False, True],\r\n [ True, True, True, False, True, True, False],\r\n [ True, True, True, True, True, False, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, True, True, True, True],\r\n [ True, True, True, False, False, False, True],\r\n [ True, True, True, True, True, False, True]]], device='cuda:0')\r\n```",
"all tests are passing now ! Is it ok if we merge this @stas00 (since you are working on DS inference just to check if this PR does not conflict anything with you work) ?",
"I didn't have a chance to read this PR, but let me at least run a quick test with it.\r\n\r\n**update:** it looks fine for the 350b model - I'm waiting for the 176 to download and will test with it as well.\r\n\r\nif in a rush please go ahead and merge and if anything emerges we can fix it after.",
"There were lots of changes since you got approval, so please wait for a re-review of @patrickvonplaten and me.",
"Thanks a lot @sgugger @patrickvonplaten \r\nI think that we will do the tests in fp32 instead, we just need to keep in mind that doing batched generation can be flaky for small models (<=350m) as we have identified it with @NouamaneTazi . We will put a comment on the tests explaining what we have found and I think that we should be good to go!",
"All tests are passing now (tested on A100) 🎉\r\n",
"Now tests pass on both A100 and Titan RTX 🎉 (because we used `fp32`)\r\n(Note that the test `BloomModelTest::test_batch_generation_padd` is still failing on Titan RTX in `fp16` whether for this PR or the `main` branch, because of the issue mentioned above)\r\n"
] | 1,656
| 1,661
| 1,657
|
CONTRIBUTOR
| null |
Moved the original PR: #17759 here to check if the tests pass
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17866/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17866/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17866",
"html_url": "https://github.com/huggingface/transformers/pull/17866",
"diff_url": "https://github.com/huggingface/transformers/pull/17866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17866.patch",
"merged_at": 1657559774000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17865
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17865/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17865/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17865/events
|
https://github.com/huggingface/transformers/pull/17865
| 1,283,721,616
|
PR_kwDOCUB6oc46T8oB
| 17,865
|
[tests/VisionEncoderDecoder] import to_2tuple from test utils
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
MEMBER
| null |
# What does this PR do?
Import `to_2tuple` from `testing_utils`, as it has been removed from the `modeling_vit` file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17865/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17865",
"html_url": "https://github.com/huggingface/transformers/pull/17865",
"diff_url": "https://github.com/huggingface/transformers/pull/17865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17865.patch",
"merged_at": 1656077010000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17864
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17864/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17864/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17864/events
|
https://github.com/huggingface/transformers/pull/17864
| 1,283,720,866
|
PR_kwDOCUB6oc46T8dZ
| 17,864
|
Fix Maskformer test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You are right, \r\n\r\nprev (and checkpoint on Hub): `model.pixel_level_module.decoder.fpn.stem.0.weight`\r\n\r\nthis PR: `model.pixel_level_module.decoder.fpn.stem.layers.1.weight` --> Get extra `layers` attribute\r\n\r\n\r\nI will ignore the test",
"But maybe let's use `self.layers = nn.Sequential(...)` for future models in the 1st place?",
"Yes, the problem here is that the model inherited from `nn.Sequential`, which was a mistake. And to fix it, we had to go through this hacky way. But it's definitely not the recommended approach!",
"Test skipped now."
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
Fix Maskformer test `test_multi_gpu_data_parallel_forward`.
Ignore the test, as the original workaround will break current checkpoints.
------
Original Attempt
I know we probably want to avoid using `nn.DataParallel(model)`. But before doing so, I just tried my best to fix the tests.
After spending some time debugging, I found that using `add_module` instead of `nn.Sequential` causes the problem.
I am not sure if the change in this PR is what we prefer, though.
@NielsRogge Is there any reason to use `add_module`? I don't know if the comment regarding `Provide backwards compatibility ...` is really necessary.
https://github.com/huggingface/transformers/blob/d88719581b34f301edcc7772d927d8a3e3a77af6/src/transformers/models/maskformer/modeling_maskformer.py#L1986
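For context, a minimal sketch of why the two registration styles produce different checkpoint keys (the class names here are hypothetical, chosen only to contrast the two patterns):

```python
import torch.nn as nn

# nn.Sequential assigned to an attribute ("layers" here) nests children
# under that attribute, so parameter names gain an extra "layers." prefix
# compared to registering the same child directly with add_module.
class WithAddModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.add_module("0", nn.Linear(2, 2))

class WithSequential(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(2, 2))

print(list(WithAddModule().state_dict()))   # ['0.weight', '0.bias']
print(list(WithSequential().state_dict()))  # ['layers.0.weight', 'layers.0.bias']
```

This is why switching to `nn.Sequential` here would break existing checkpoints on the Hub.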
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17864/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17864",
"html_url": "https://github.com/huggingface/transformers/pull/17864",
"diff_url": "https://github.com/huggingface/transformers/pull/17864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17864.patch",
"merged_at": 1656092101000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17863
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17863/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17863/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17863/events
|
https://github.com/huggingface/transformers/pull/17863
| 1,283,641,639
|
PR_kwDOCUB6oc46TrJC
| 17,863
|
[Flax] Fix incomplete batches in example scripts
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just waiting to double check that the slow tests pass from [`test_flax_examples.py`](https://github.com/huggingface/transformers/blob/main/examples/flax/test_flax_examples.py) before merging. Working with @patil-suraj to verify this ✅",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@patil-suraj @sanchit-gandhi can we merge this one?",
"Just verifying the slow tests from [`test_flax_examples.py`](https://github.com/huggingface/transformers/blob/main/examples/flax/test_flax_examples.py) pass on a <s>v3-8</s> v100 GPU!"
] | 1,656
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently, our Flax example scripts drop the last incomplete batch during training and inference:
https://github.com/huggingface/transformers/blob/09178705101b9803e7b9ea7f79a46b4c242dd4bf/examples/flax/summarization/run_summarization_flax.py#L350
We do this for two reasons:
1. XLA is not shape polymorphic, so forming the last batch with a shape different from the preceding batches triggers a recompilation of the `pmap`'d function.
2. If the batch size is not divisible by the number of devices, the last step must be executed on a single device (or a subset of devices), potentially leading to OOMs.
During training, dropping the last batch isn't an issue: since we shuffle the data and train for multiple epochs, all of the training data is eventually used and the effects of dropping the last batch are amortised.
However, during evaluation and prediction, dropping the last batch leads to incorrect results: since we don't account for the examples in the last batch, we do not evaluate over the whole dataset, and thus have partial results.
This PR corrects for the incomplete batches in the relevant Flax training examples.
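The idea behind the correction can be sketched as pad-and-mask: pad the final batch up to the fixed batch size so XLA sees a constant shape, and carry a mask so the padded examples don't contribute to the metrics. A minimal NumPy sketch (the real scripts do this with JAX arrays; `pad_batch` is a hypothetical helper for illustration):

```python
import numpy as np

def pad_batch(batch, batch_size):
    # Pad with zeros up to batch_size and return a mask marking real examples.
    num_real = batch.shape[0]
    pad = batch_size - num_real
    padded = np.concatenate([batch, np.zeros((pad,) + batch.shape[1:], batch.dtype)])
    mask = np.concatenate([np.ones(num_real), np.zeros(pad)])
    return padded, mask

batch = np.arange(5, dtype=np.float32)  # last batch has 5 examples, batch_size is 8
padded, mask = pad_batch(batch, 8)

per_example_loss = padded ** 2                            # stand-in for a real loss
mean_loss = (per_example_loss * mask).sum() / mask.sum()  # padded entries excluded
print(padded.shape, mean_loss)  # (8,) 6.0
```

Because the padded batch always has the same shape, no recompilation is triggered, while the mask keeps the evaluation metrics exact.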
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17863/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17863",
"html_url": "https://github.com/huggingface/transformers/pull/17863",
"diff_url": "https://github.com/huggingface/transformers/pull/17863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17863.patch",
"merged_at": 1658933447000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17862
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17862/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17862/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17862/events
|
https://github.com/huggingface/transformers/issues/17862
| 1,283,464,849
|
I_kwDOCUB6oc5MgB6R
| 17,862
|
RegexTokenizer
|
{
"login": "pschwllr",
"id": 38880871,
"node_id": "MDQ6VXNlcjM4ODgwODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/38880871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pschwllr",
"html_url": "https://github.com/pschwllr",
"followers_url": "https://api.github.com/users/pschwllr/followers",
"following_url": "https://api.github.com/users/pschwllr/following{/other_user}",
"gists_url": "https://api.github.com/users/pschwllr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pschwllr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pschwllr/subscriptions",
"organizations_url": "https://api.github.com/users/pschwllr/orgs",
"repos_url": "https://api.github.com/users/pschwllr/repos",
"events_url": "https://api.github.com/users/pschwllr/events{/privacy}",
"received_events_url": "https://api.github.com/users/pschwllr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @pschwllr,\r\n\r\nI currently write my master thesis dealing with molecules and transformers and came here looking for a SMILES tokenizer to use with Hugging Face transformers. I am neither into molecular biology nor Hugging Face, so proceed with some caution, but maybe this is still useful. If it is, please let me know, especially if you have ideas how to improve it.\r\n\r\nThis code snippet provides a tokenizer that can be used with Hugging Face transformers. It uses a simple Word Level algorithm, which you could easily replace with BPE etc..\r\n\r\n```py\r\nfrom tokenizers import Regex, Tokenizer\r\nfrom tokenizers.models import WordLevel\r\nfrom tokenizers.pre_tokenizers import Split\r\nfrom tokenizers.processors import TemplateProcessing\r\nfrom tokenizers.trainers import WordLevelTrainer\r\nfrom transformers import PreTrainedTokenizerFast\r\n\r\nSMI_REGEX_PATTERN = r\"\"\"(\\[[^\\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\\(|\\)|\\.|=|-|\\+|\\\\|\\/|:|~|@|\\?|>>?|\\*|\\$|\\%[0-9]{2}|[0-9])\"\"\"\r\nBOS_TOKEN = \"^\"\r\nEOS_TOKEN = \"&\"\r\nPAD_TOKEN = \" \"\r\nUNK_TOKEN = \"?\"\r\nMODEL_MAX_LENGTH = 120\r\n\r\nsmi = \"CC(C)(C)c1ccc2occ(CC(=O)Nc3ccccc3F)c2c1\"\r\n\r\nsmiles_tokenizer = Tokenizer(WordLevel(unk_token=UNK_TOKEN))\r\nsmiles_tokenizer.pre_tokenizer = Split(\r\n pattern=Regex(SMI_REGEX_PATTERN), behavior=\"isolated\", invert=False\r\n)\r\nsmiles_trainer = WordLevelTrainer(\r\n special_tokens=[BOS_TOKEN, EOS_TOKEN, PAD_TOKEN, UNK_TOKEN]\r\n)\r\nsmiles_tokenizer.train_from_iterator(smi, trainer=smiles_trainer)\r\nsmiles_tokenizer.post_processor = TemplateProcessing(\r\n single=BOS_TOKEN + \" $A \" + EOS_TOKEN,\r\n special_tokens=[\r\n (BOS_TOKEN, smiles_tokenizer.token_to_id(BOS_TOKEN)),\r\n (EOS_TOKEN, smiles_tokenizer.token_to_id(EOS_TOKEN)),\r\n ],\r\n)\r\n\r\ntokenizer_pretrained = PreTrainedTokenizerFast(\r\n tokenizer_object=smiles_tokenizer,\r\n model_max_length=MODEL_MAX_LENGTH,\r\n padding_side=\"right\",\r\n truncation_side=\"left\",\r\n 
bos_token=BOS_TOKEN,\r\n eos_token=EOS_TOKEN,\r\n pad_token=PAD_TOKEN,\r\n unk_token=UNK_TOKEN,\r\n)\r\n\r\nprint(tokenizer_pretrained.encode(smi)) # [0, 5, 5, 6, 5, ..., 4, 8, 1]\r\n```"
] | 1,656
| 1,667
| 1,659
|
NONE
| null |
### Feature request
We would like to implement a general RegexTokenizer, which takes a regex as input and tokenizes strings according to it.
### Motivation
In chemistry, for example, there are line notations like SMILES (http://opensmiles.org/opensmiles.html), which can be used to represent molecules and reactions as strings.
In previous work, such as the MolecularTransformer (https://pubs.acs.org/doi/full/10.1021/acscentsci.9b00576, built with OpenNMT) or RXNMapper (https://www.science.org/doi/10.1126/sciadv.abe4166, with huggingface/transformers), we used a regex to split SMILES by atoms/bonds.
```
import re

SMI_REGEX_PATTERN = r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])"

def smi_tokenizer(smi, pattern=SMI_REGEX_PATTERN):
    """Tokenize a SMILES molecule or reaction."""
    regex = re.compile(pattern)
    tokens = regex.findall(smi)
    assert smi == ''.join(tokens)
    return ' '.join(tokens)
```
But every time we want to change the transformer model, we have to rewrite and redefine the tokenizer to make it work with that model. Is there a more efficient and general way to do this? We could imagine that other fields (e.g. proteins) could also benefit from a RegexTokenizer.
### Your contribution
Happy to help with the PR. The regex for SMILES (chemistry) is ready. We just don't know where to best start.
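For illustration, a minimal regex-splitting tokenizer of the kind requested might look like the following (the class name `RegexTokenizer` and its interface are hypothetical here, not an existing `transformers` API — the integration point into the library is exactly what this issue is asking about):

```python
import re

class RegexTokenizer:
    """Split strings into tokens according to a user-supplied regex (sketch)."""

    def __init__(self, pattern):
        self.regex = re.compile(pattern)

    def tokenize(self, text):
        # findall with a single capture group returns the matched tokens.
        tokens = self.regex.findall(text)
        # A lossless pattern should reproduce the input when joined back.
        assert text == "".join(tokens)
        return tokens

SMI_REGEX_PATTERN = r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])"
tok = RegexTokenizer(SMI_REGEX_PATTERN)
print(tok.tokenize("CC(=O)O"))  # acetic acid
```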
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17862/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17862/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17861
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17861/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17861/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17861/events
|
https://github.com/huggingface/transformers/pull/17861
| 1,283,358,783
|
PR_kwDOCUB6oc46Strf
| 17,861
|
Fix the url mistake
|
{
"login": "mmdjiji",
"id": 25279643,
"node_id": "MDQ6VXNlcjI1Mjc5NjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/25279643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmdjiji",
"html_url": "https://github.com/mmdjiji",
"followers_url": "https://api.github.com/users/mmdjiji/followers",
"following_url": "https://api.github.com/users/mmdjiji/following{/other_user}",
"gists_url": "https://api.github.com/users/mmdjiji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmdjiji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmdjiji/subscriptions",
"organizations_url": "https://api.github.com/users/mmdjiji/orgs",
"repos_url": "https://api.github.com/users/mmdjiji/repos",
"events_url": "https://api.github.com/users/mmdjiji/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmdjiji/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix the url mistake from `https://huggingface.co/docstransformers/training` to `https://huggingface.co/docs/transformers/training`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17861/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17861",
"html_url": "https://github.com/huggingface/transformers/pull/17861",
"diff_url": "https://github.com/huggingface/transformers/pull/17861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17861.patch",
"merged_at": 1656349760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17860
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17860/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17860/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17860/events
|
https://github.com/huggingface/transformers/issues/17860
| 1,283,293,287
|
I_kwDOCUB6oc5MfYBn
| 17,860
|
Text generation: Unexpected behavior when input ends with newlines
|
{
"login": "monsieurpooh",
"id": 29328114,
"node_id": "MDQ6VXNlcjI5MzI4MTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/29328114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monsieurpooh",
"html_url": "https://github.com/monsieurpooh",
"followers_url": "https://api.github.com/users/monsieurpooh/followers",
"following_url": "https://api.github.com/users/monsieurpooh/following{/other_user}",
"gists_url": "https://api.github.com/users/monsieurpooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monsieurpooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monsieurpooh/subscriptions",
"organizations_url": "https://api.github.com/users/monsieurpooh/orgs",
"repos_url": "https://api.github.com/users/monsieurpooh/repos",
"events_url": "https://api.github.com/users/monsieurpooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/monsieurpooh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @monsieurpooh,\r\n\r\nSorry I can't run:\r\n\r\n```py\r\n gen_tokens = model.generate(input_ids, do_sample=specifiedDoSample, temperature=specifiedTemperature, max_length=calculated_max_length, min_length=calculated_min_length, repetition_penalty=specifiedRepetitionPenalty, bad_words_ids=badWordsTokens)\r\n gen_text = tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] # tokenizer.batch_decode(gen_tokens)[0]\r\n print(gen_text)\r\n```\r\n\r\nas `model` is not defined. \r\n\r\nCan you copy-paste a reproducible code snippet please? :-) Thanks a lot!",
"Hi Patrick, here is a code snippet https://paste.ee/p/B2Upc\r\n\r\nAnd here is the input I am using, but please make sure there's 2 newlines at the end to repro: https://paste.ee/p/ND8cZ",
"Hey @monsieurpooh,\r\n\r\nCould you maybe try to just copy-paste here in this thread a short, minimal code snippet (sorry we have very limited amount of time to look at issues and a 200 code snippet file with lots of commented out code takes too much time). Can you try to condense the problem into ~5-10 lines of code maybe? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Please use the following minimal repro code snippet and observe the behavior described in the previous comments by modifying the input to tokenizer.\r\n\r\n```\r\nfrom transformers import GPTNeoForCausalLM, GPT2Tokenizer\r\n\r\nmodel_name = \"EleutherAI/gpt-neo-125M\"\r\n\r\nmodel = GPTNeoForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, cache_dir='gpt_cache_dir', resume_download=True).half().to(\"cuda:0\")\r\ntokenizer = GPT2Tokenizer.from_pretrained(model_name, low_cpu_mem_usage=True, cache_dir='gpt_cache_dir', resume_download=True)\r\n\r\ninput_ids = tokenizer(\"This is a line 1\\n\\nThis is a line 2\\n\\nThis is a line 3\\n\\n\", return_tensors=\"pt\").input_ids.cuda()\r\ngen_tokens = model.generate(input_ids, do_sample=True, temperature=0.01, max_length=40, min_length=1, repetition_penalty=1.0)\r\n\r\ngen_text = \"Output: \\\"\" + tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] + \"\\\"\"\r\n\r\nprint(gen_text)\r\n```",
"Gently pinging @gante here as well"
] | 1,656
| 1,669
| 1,659
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten, @Narsil
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
gen_tokens = model.generate(input_ids, do_sample=specifiedDoSample, temperature=specifiedTemperature, max_length=calculated_max_length, min_length=calculated_min_length, repetition_penalty=specifiedRepetitionPenalty, bad_words_ids=badWordsTokens)
gen_text = tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] # tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
As input, use text such as (the end of the text has 2 newlines):
```
This is a line 1
This is a line 2
This is a line 3
```
Actual behavior:
- If the input ends with 1 newline, generating multiple tokens works as expected, but generating just 1 token says the next token should be a newline by itself.
- If the input ends with 2 newlines, generating multiple tokens doesn't work as expected, and printing the next top score reveals that the next token is something unexpected, such as another newline or a token beginning with a space.
Reason it's a problem: If the prompt had a format like this, there is no way to generate a good result while still specifying newline as one of the bad_words_ids. Say we have some dialogue with multiple people saying things, each separated by 2 newlines. We want the next text to also be separated by 2 newlines, but contain no more newlines (we want it to be a big paragraph). There is no way to generate this correctly.
### Expected behavior
Either:
If the input ends with 1 newline, then the next token should be a newline followed by a word, such as "\nThis"
OR
If the input ends with 2 newlines, then the next token should be a word that's not preceded by a space, rather than yet another newline
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17860/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17859
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17859/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17859/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17859/events
|
https://github.com/huggingface/transformers/issues/17859
| 1,283,043,014
|
I_kwDOCUB6oc5Mea7G
| 17,859
|
Bad readme for HF model codeparrot/codeparrot-small
|
{
"login": "shi-kejian",
"id": 32584185,
"node_id": "MDQ6VXNlcjMyNTg0MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/32584185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shi-kejian",
"html_url": "https://github.com/shi-kejian",
"followers_url": "https://api.github.com/users/shi-kejian/followers",
"following_url": "https://api.github.com/users/shi-kejian/following{/other_user}",
"gists_url": "https://api.github.com/users/shi-kejian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shi-kejian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shi-kejian/subscriptions",
"organizations_url": "https://api.github.com/users/shi-kejian/orgs",
"repos_url": "https://api.github.com/users/shi-kejian/repos",
"events_url": "https://api.github.com/users/shi-kejian/events{/privacy}",
"received_events_url": "https://api.github.com/users/shi-kejian/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi!\r\n\r\nThanks for reporting. We now have a [discussion and PR feature](https://huggingface.co/codeparrot/codeparrot-small/discussions) on the hub, meaning that you can directly open an issue on the hub for a particular repository. So feel free to ping them there!",
"Thanks for reporting - it is fixed now!",
"Closing the issue in that case!"
] | 1,656
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
Not system-dependent
https://huggingface.co/codeparrot/codeparrot-small
The README is not updated
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot-small")
model = AutoModelWithLMHead.from_pretrained("lvwerra/codeparrot-small")
### Expected behavior
```shell
The model loading should be successful
The model card is "codeparrot/codeparrot-small" but in example usage it's "lvwerra/codeparrot-small".
A simple update to the model card README will do
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17859/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17858
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17858/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17858/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17858/events
|
https://github.com/huggingface/transformers/pull/17858
| 1,282,995,164
|
PR_kwDOCUB6oc46Rjtm
| 17,858
|
Add type hints for gptneox models
|
{
"login": "willtai",
"id": 20279061,
"node_id": "MDQ6VXNlcjIwMjc5MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20279061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willtai",
"html_url": "https://github.com/willtai",
"followers_url": "https://api.github.com/users/willtai/followers",
"following_url": "https://api.github.com/users/willtai/following{/other_user}",
"gists_url": "https://api.github.com/users/willtai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willtai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willtai/subscriptions",
"organizations_url": "https://api.github.com/users/willtai/orgs",
"repos_url": "https://api.github.com/users/willtai/repos",
"events_url": "https://api.github.com/users/willtai/events{/privacy}",
"received_events_url": "https://api.github.com/users/willtai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding missing type hints for `GPTNeoXForCausalLM` and `GPTNeoXModel`, as referenced in [this issue](https://github.com/huggingface/transformers/issues/16059#issuecomment-1164898772).
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community please feel free to review😄
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17858/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17858",
"html_url": "https://github.com/huggingface/transformers/pull/17858",
"diff_url": "https://github.com/huggingface/transformers/pull/17858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17858.patch",
"merged_at": 1656087156000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17857
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17857/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17857/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17857/events
|
https://github.com/huggingface/transformers/pull/17857
| 1,282,874,086
|
PR_kwDOCUB6oc46RKTj
| 17,857
|
TF: XLA beam search + most generation-compatible models are now also XLA-generate-compatible
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Very cool! Can we try it out for at least on Encoder-Decoder architecture as well (just to know that this code holds true here)?",
"@patrickvonplaten @Rocketknight1 now with encoder-decoder tests, and ready for review -- I was working on it on a separate branch, so I've merged it into this one. Now, this PR standardizes the XLA model kwargs preparation, and most models can use the XLA functionality. Some models were incompatible for different reasons, so there is a new flag to gate XLA generation (and the flag is set in the problematic architectures).\r\n\r\nFinally, I'm also considering adding a general test like `test_xla_generate_fast`, but with `@slow`, beam search, and >100 tokens. It will probably break for a few models (like T5), but at least we would be able to automatically track which models are reliable with XLA beam search -- WDYT? ",
"Note: as per the comment above, if this PR gets merged as it is, I will open an issue to track issues regarding XLA generation (relevant models failing fast tests, as well as models failing the slow tests)"
] | 1,656
| 1,679
| 1,656
|
MEMBER
| null |
# What does this PR do?
The much-awaited PR -- beam search is now XLA compatible. GPT2 is the only model with XLA beam search tests; more models will follow in subsequent PRs 🎊 Preliminary tests on my machine show that XLA beam search on GPU is ~26x faster (greedy search and sample are ~30x faster).
Slow tests have been run for the usual generate models (gpt2, t5, rag, speech_to_text, encoder_decoder, vision_encoder_decoder, bart).
EDIT: I've also generalized a few functions, and now ALL models that are compatible with generate are also compatible with XLA generate (with a few exceptions, when the models have no past cache support)
__________________________________
A hard-earned lesson which is kinda obvious in hindsight: `if` branches can make the XLA compiler confused about variable shapes, tagging their shape as `<unknown>`, which in turn causes all sorts of exceptions. Out of curiosity, I tried replacing the `if` by `tf.cond`, but the `<unknown>` shape persisted (because the tensor could indeed have a different shape at tracing time, depending on the branch taken)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17857/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17857/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17857",
"html_url": "https://github.com/huggingface/transformers/pull/17857",
"diff_url": "https://github.com/huggingface/transformers/pull/17857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17857.patch",
"merged_at": 1656502861000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17856
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17856/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17856/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17856/events
|
https://github.com/huggingface/transformers/pull/17856
| 1,282,779,917
|
PR_kwDOCUB6oc46Q1v1
| 17,856
|
Properly calculate the total train iterations and recalculate num epochs in no_trainer scripts
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger new timings when compared to old. Main decreases repeatedly were seen between image_classification, swag, and squad (though squad is an iffy one. the previous two I can guarantee):\r\n\r\n<html><body>\r\n<!--StartFragment--><google-sheets-html-origin>\r\n\r\nExample | Before | After\r\n-- | -- | --\r\nimage_classification | 99.7 | 40.55\r\nswag | 65.21 | 55.31\r\nsquad | 59.45 | 41.67\r\nclm | 37.45 | 35.69\r\nner | 28.34 | 25.51\r\nglue | 21.88 | 19.35\r\nmlm | 18.52 | 15.47\r\n\r\n<!--EndFragment-->\r\n</body>\r\n</html>"
] | 1,656
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a situation where `max_train_steps` was smaller than the number of steps in a single epoch, yet the script would still run through the entire first epoch because of how the number of batches was recalculated.
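The intended bookkeeping can be sketched as follows (function and variable names here are illustrative, not the exact ones in the `no_trainer` scripts): the number of epochs is recomputed from `max_train_steps`, so a small step budget stops training before a full epoch is forced.

```python
import math

def compute_schedule(num_batches_per_epoch, gradient_accumulation_steps,
                     max_train_steps=None, num_train_epochs=3):
    # Updates (optimizer steps) per epoch, accounting for gradient accumulation.
    num_update_steps_per_epoch = math.ceil(num_batches_per_epoch / gradient_accumulation_steps)
    if max_train_steps is None:
        max_train_steps = num_train_epochs * num_update_steps_per_epoch
    # Recalculate epochs so the loop exits once max_train_steps is reached.
    num_train_epochs = math.ceil(max_train_steps / num_update_steps_per_epoch)
    return max_train_steps, num_train_epochs

print(compute_schedule(100, 1, max_train_steps=50))  # (50, 1)
```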
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17856/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17856",
"html_url": "https://github.com/huggingface/transformers/pull/17856",
"diff_url": "https://github.com/huggingface/transformers/pull/17856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17856.patch",
"merged_at": 1656013561000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17855
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17855/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17855/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17855/events
|
https://github.com/huggingface/transformers/issues/17855
| 1,282,767,110
|
I_kwDOCUB6oc5MdXkG
| 17,855
|
LayoutLMv2 training on sagemaker error: undefined value has_torch_function_variadic
|
{
"login": "Natlem",
"id": 4445315,
"node_id": "MDQ6VXNlcjQ0NDUzMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4445315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Natlem",
"html_url": "https://github.com/Natlem",
"followers_url": "https://api.github.com/users/Natlem/followers",
"following_url": "https://api.github.com/users/Natlem/following{/other_user}",
"gists_url": "https://api.github.com/users/Natlem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Natlem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Natlem/subscriptions",
"organizations_url": "https://api.github.com/users/Natlem/orgs",
"repos_url": "https://api.github.com/users/Natlem/repos",
"events_url": "https://api.github.com/users/Natlem/events{/privacy}",
"received_events_url": "https://api.github.com/users/Natlem/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @philschmid (hope I am tagging correctly)",
"@Natlem could you try adding `debugger_hook_config=False` to the `HuggingFace` estimator? \r\n\r\n```python\r\n huggingface_estimator = HuggingFace(entry_point='train.py',\r\n source_dir='scripts',\r\n instance_type='ml.p3.2xlarge',\r\n instance_count=1,\r\n role=role,\r\n transformers_version='4.17.0',\r\n pytorch_version='1.10.2',\r\n py_version='py38',\r\n hyperparameters=hyperparameters,\r\n environment={'HF_TASK': 'text-classification'},\r\n code_location='s3://dummy_code_location',\r\n debugger_hook_config=False,\r\n)\r\n```",
"Hi @philschmid ,\r\n\r\nAdded the `debugger_hook_config=False`, the error is gone now. Thanks !",
"Awesome, closing the issue. Feel free to reopen if you have more issues.",
"@Natlem i forwarded the error to the AWS team to be able use the debugger soon. ",
"@philschmid Thanks !",
"@philschmid do you have any idea why this solves the problem? Is it documented by AWS anywhere?\r\n\r\nSagemaker Debugger has cost me multiple days of time in the mysterious problems it produces. Far more than anything else on Sagemaker. I posted an issue on awslabs about this awhile back and never got a reply. I would really like to know what is going on here\r\n\r\n**For anyone encountering this while using a HyperparameterTuner**\r\nPassing `debugger_hook_config=False` in the `Estimator` will not the solve the problem. Further, passing `environment={'USE_SMDEBUG':0}` also will not solve the problem. Somehow these settings never make it to a tuner's constituent training jobs.\r\n\r\nThe only way to solve it is to set `ENV USE_SMDEBUG=\"0\"` in the docker container that will be running the constituent training jobs.",
"> Somehow these settings never make it to a tuner's constituent training jobs.\r\n\r\nAre you using the `HuggingFace` estimator or the `HyperparameterTuner`"
] | 1,656
| 1,664
| 1,656
|
NONE
| null |
### System Info
```shell
transformer: 4.17.0
torch: 1.10.2
Platform: Sagemaker Deep Learning Container
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The error only occurs when training on SageMaker with the HuggingFace estimator.
Script to start training on SageMaker:
Folder organization:
```
./
----sg_training.py
----scripts
-------requirements.txt
-------train.py
```
sg_training.py:
```
import boto3
import sagemaker
from sagemaker.huggingface import HuggingFace
if __name__ == "__main__":
iam_client = boto3.client(...)
role = iam_client.get_role(...)['Role']['Arn']
sess = sagemaker.Session()
sagemaker_session_bucket = 's3-sagemaker-session'
hyperparameters = {'epochs': 20,
'train_batch_size': 1,
'model_name': "microsoft/layoutxlm-base",
'output_dir': '/opt/ml/model/',
'checkpoints': '/opt/ml/checkpoints/',
'combine_train_val': True,
'exp_tracker': "all",
'exp_name': 'Sagemaker Training'
}
huggingface_estimator = HuggingFace(entry_point='train.py',
source_dir='scripts',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
hyperparameters=hyperparameters,
environment={'HF_TASK': 'text-classification'},
code_location='s3://dummy_code_location')
huggingface_estimator.fit()
```
Entrypoint scripts folder:
requirements.txt:
```
git+https://github.com/facebookresearch/detectron2.git
```
train.py:
```
import argparse
import logging
import os
import sys
from transformers import LayoutLMv2ForSequenceClassification
def run():
model = LayoutLMv2ForSequenceClassification.from_pretrained('microsoft/layoutxlm-base',
num_labels=5)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=3)
parser.add_argument("--exp_name", type=str, default="Sagemaker Training")
parser.add_argument("--train-batch-size", type=int, default=2)
parser.add_argument("--eval-batch-size", type=int, default=1)
parser.add_argument("--warmup_steps", type=int, default=500)
parser.add_argument("--model_name", type=str)
parser.add_argument("--learning_rate", type=str, default=1e-5)
parser.add_argument("--combine_train_val", type=bool, default=False)
# Data, model, and output directories
parser.add_argument("--output-data-dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"])
parser.add_argument("--checkpoints", type=str, default="/opt/ml/checkpoints")
parser.add_argument("--model-dir", type=str, default='/opt/ml/code/model')
parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"])
args, _ = parser.parse_known_args()
logger = logging.getLogger(__name__)
logging.basicConfig(
level=logging.getLevelName("INFO"),
handlers=[logging.StreamHandler(sys.stdout)],
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
run()
```
### Expected behavior
```shell
Here the log on the error from AWS Cloud Watch:
Invoking script with the following command:
/opt/conda/bin/python3.8 train.py --checkpoints /opt/ml/checkpoints/ --combine_train_val True --epochs 20 --exp_name Sagemaker_Training_doc_cls --exp_tracker all --model_name microsoft/layoutxlm-base --output_dir /opt/ml/model/ --train_batch_size 1
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2777, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/opt/conda/lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 48, in <module>
from detectron2.modeling import META_ARCH_REGISTRY
File "/opt/conda/lib/python3.8/site-packages/detectron2/modeling/__init__.py", line 2, in <module>
from detectron2.layers import ShapeSpec
File "/opt/conda/lib/python3.8/site-packages/detectron2/layers/__init__.py", line 2, in <module>
from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList
File "/opt/conda/lib/python3.8/site-packages/detectron2/layers/batch_norm.py", line 4, in <module>
from fvcore.nn.distributed import differentiable_all_reduce
File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/__init__.py", line 4, in <module>
from .focal_loss import (
File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/focal_loss.py", line 52, in <module>
sigmoid_focal_loss_jit: "torch.jit.ScriptModule" = torch.jit.script(sigmoid_focal_loss)
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 1310, in script
fn = torch._C._jit_script_compile(
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_recursive.py", line 838, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 1310, in script
fn = torch._C._jit_script_compile(
RuntimeError:
undefined value has_torch_function_variadic:
File "/opt/conda/lib/python3.8/site-packages/torch/utils/smdebug.py", line 2962
>>> loss.backward()
"""
if has_torch_function_variadic(input, target, weight, pos_weight):
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return handle_torch_function(
binary_cross_entropy_with_logits,
'binary_cross_entropy_with_logits' is being compiled since it was called from 'sigmoid_focal_loss'
File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/focal_loss.py", line 36
targets = targets.float()
p = torch.sigmoid(inputs)
ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
p_t = p * targets + (1 - p) * (1 - targets)
loss = ce_loss * ((1 - p_t) ** gamma)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "train.py", line 6, in <module>
from transformers import LayoutLMv2ForSequenceClassification
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2768, in __getattr__
value = getattr(module, name)
File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2767, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py", line 2779, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.layoutlmv2.modeling_layoutlmv2 because of the following error (look up to see its traceback):
undefined value has_torch_function_variadic:
File "/opt/conda/lib/python3.8/site-packages/torch/utils/smdebug.py", line 2962
>>> loss.backward()
"""
if has_torch_function_variadic(input, target, weight, pos_weight):
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return handle_torch_function(
binary_cross_entropy_with_logits,
'binary_cross_entropy_with_logits' is being compiled since it was called from 'sigmoid_focal_loss'
File "/opt/conda/lib/python3.8/site-packages/fvcore/nn/focal_loss.py", line 36
targets = targets.float()
p = torch.sigmoid(inputs)
ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
p_t = p * targets + (1 - p) * (1 - targets)
loss = ce_loss * ((1 - p_t) ** gamma)
```
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17855/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17854
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17854/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17854/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17854/events
|
https://github.com/huggingface/transformers/pull/17854
| 1,282,758,247
|
PR_kwDOCUB6oc46QxIR
| 17,854
|
Fix Splinter test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
`test_multi_gpu_data_parallel_forward` is not meant to run for `SplinterForQuestionAnswering`, because the number of question tokens may differ across replicas. This PR skips this test for `SplinterForQuestionAnswering`.
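A minimal sketch of how such a skip looks in `unittest` terms (the class and method names here are illustrative, not the actual Splinter test-suite code):

```python
import unittest


class SplinterModelTest(unittest.TestCase):
    # Hypothetical sketch: the number of question tokens can differ
    # between DataParallel replicas, so the gathered outputs cannot be
    # compared and the test is skipped for this model class.
    @unittest.skip("question-token counts differ across DataParallel replicas")
    def test_multi_gpu_data_parallel_forward(self):
        pass
```

The `unittest.skip` decorator marks the method so the runner reports it as skipped rather than failed on multi-GPU machines.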
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17854/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17854",
"html_url": "https://github.com/huggingface/transformers/pull/17854",
"diff_url": "https://github.com/huggingface/transformers/pull/17854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17854.patch",
"merged_at": 1656080775000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17853
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17853/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17853/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17853/events
|
https://github.com/huggingface/transformers/issues/17853
| 1,282,657,094
|
I_kwDOCUB6oc5Mc8tG
| 17,853
|
Fail when using pipeline for the inference of DeBERTa-Vx ORTModels
|
{
"login": "JingyaHuang",
"id": 44135271,
"node_id": "MDQ6VXNlcjQ0MTM1Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingyaHuang",
"html_url": "https://github.com/JingyaHuang",
"followers_url": "https://api.github.com/users/JingyaHuang/followers",
"following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}",
"gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions",
"organizations_url": "https://api.github.com/users/JingyaHuang/orgs",
"repos_url": "https://api.github.com/users/JingyaHuang/repos",
"events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingyaHuang/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @JingyaHuang ,\r\n\r\nThe use case seems solid. Ideally we should be able to run the code as-is. But as far as I understand it's not really doable because `token_type_ids` **are** used in some models but not others, so finding a all covering solution is tricky.\r\n\r\nAm I correct ?\r\n\r\nThen, for adding the argument instead of adding `return_token_type_ids` I suggest adding `tokenizer_args` as a dict where you could pass `tokenizer_args={\"return_token_type_ids\": False}` . We can always think about promoting this particular argument to first class, but it seems that going more explicit is better here WDYT ?\r\n\r\n",
"Hi @Narsil,\r\n\r\nExactly, the pipelines work well for many other ort models except for DeBERTa(s). As the exported DeBERTa ONNX model with `token_type_ids` can only be traced by `torch.jit.trace` when `model.config.type_vocab_size`>0. And `token_type_ids` are not traced thus not a valid input when `model.config.type_vocab_size`=0(default), it is definitely tricky.\r\n\r\n`tokenizer_kwargs` sounds good to me! We might want to enable users to do something like this:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"{checkpoint}\")\r\nmodel = ORTModelForSequenceClassification.from_pretrained(\"{checkpoint}\")\r\nonnx_classifier = pipeline(\"text-classification\", model=model, tokenizer=tokenizer, return_token_type_ids=False)\r\ntext = \"Hello, my dog is cute\"\r\npred = onnx_classifier(text)\r\n```\r\nThis snippet works already as [`TextClassificationPipeline.preprocess` takes `tokenizer_kwargs` as input](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_classification.py#L147), however it is not yet supported for other tasks(e.g. [token-classification](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/token_classification.py#L193), `FeatureExtractionPipeline`...)\r\n\r\nIs it something that we want to apply to other pipelines or there might be some other considerations?\r\n",
"Pinging @mfuntowicz to get his input on wether it's a tracing issue or a pipeline issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,656
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.21.0.dev0
- Platform: Linux-5.4.0-1080-aws-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
```
### Who can help?
@LysandreJik @Narsil
### Reproduction
### Context
The DeBERTa tokenizers output `token_type_ids` by default. However, when using `transformers.pipeline` for inference with an `ORTModelForXXX` from Optimum, the exported IR doesn't always take `token_type_ids` as input (it depends on `config.type_vocab_size`), and this leads to a failure because `onnxruntime.InferenceSession` does not tolerate invalid inputs.
* PR #17617 - Support of DeBERTa onnx
* PR [#225](https://github.com/huggingface/optimum/pull/225) - Discussion in Optimum
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained("microsoft/deberta-base",from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "Hello, my dog is cute"
pred = onnx_classifier(text)
```
Error Message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/text_classification.py", line 138, in __call__
result = super().__call__(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1043, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1050, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/base.py", line 959, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/transformers/pipelines/text_classification.py", line 163, in _forward
return self.model(**model_inputs)
File "/home/ubuntu/optimum/optimum/modeling_base.py", line 31, in __call__
return self.forward(*args, **kwargs)
File "/home/ubuntu/optimum/optimum/onnxruntime/modeling_ort.py", line 520, in forward
outputs = self.model.run(None, onnx_inputs)
File "/home/ubuntu/anaconda3/envs/venv_opt_test/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 192, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids
```
### Ideas
To enable pipelines for DeBERTa-Vx ONNX models, I am thinking of configuring the `return_token_type_ids` argument in the [`preprocess` method](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/token_classification.py#L195-L200) of `XXXPipeline` depending on `self.model.config.type_vocab_size`, which would remove `token_type_ids` from the inputs when it is unused. WDYT?
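A minimal sketch of that idea, as a standalone helper (`filter_onnx_inputs` is a hypothetical name, not an existing pipeline method):

```python
def filter_onnx_inputs(model_inputs, type_vocab_size):
    # When type_vocab_size is 0, the traced DeBERTa ONNX graph has no
    # token_type_ids input, so drop the key before feeding the session;
    # otherwise pass the inputs through unchanged.
    if type_vocab_size == 0:
        return {k: v for k, v in model_inputs.items() if k != "token_type_ids"}
    return model_inputs


inputs = {
    "input_ids": [[101, 102]],
    "attention_mask": [[1, 1]],
    "token_type_ids": [[0, 0]],
}
print(sorted(filter_onnx_inputs(inputs, type_vocab_size=0)))
# prints ['attention_mask', 'input_ids']
```

In the pipeline this filtering would happen in `preprocess`, gated on `self.model.config.type_vocab_size`.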
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17853/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17852
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17852/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17852/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17852/events
|
https://github.com/huggingface/transformers/pull/17852
| 1,282,656,689
|
PR_kwDOCUB6oc46QbHA
| 17,852
|
Index RNG states by global rank in saves
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
As pointed out in #17829, in a multi-node training the `Trainer` saves the RNG states with the same filenames in the various nodes. This causes problems when the nodes share the same file system, so it's easier to just save each file indexed by global rank instead.
Fixes #17829
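The naming scheme can be sketched as follows (a simplified illustration of the idea; the actual `Trainer` file names may differ):

```python
import os


def rng_state_filename(output_dir, world_size, global_rank):
    # Index each process's RNG checkpoint by its global rank so that
    # nodes sharing one filesystem never overwrite each other's states.
    if world_size <= 1:
        return os.path.join(output_dir, "rng_state.pth")
    return os.path.join(output_dir, f"rng_state_{global_rank}.pth")


print(rng_state_filename("checkpoint-500", world_size=16, global_rank=9))
```

With per-node-local indexing, two processes with the same local rank on different nodes would collide; the global rank is unique across the whole job.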
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17852/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17852",
"html_url": "https://github.com/huggingface/transformers/pull/17852",
"diff_url": "https://github.com/huggingface/transformers/pull/17852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17852.patch",
"merged_at": 1656003230000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17851
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17851/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17851/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17851/events
|
https://github.com/huggingface/transformers/issues/17851
| 1,282,647,232
|
I_kwDOCUB6oc5Mc6TA
| 17,851
|
Add `trace_device` argument to `smp.DistributedModel` call in `Trainer()`
|
{
"login": "joehoover",
"id": 11277670,
"node_id": "MDQ6VXNlcjExMjc3Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11277670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joehoover",
"html_url": "https://github.com/joehoover",
"followers_url": "https://api.github.com/users/joehoover/followers",
"following_url": "https://api.github.com/users/joehoover/following{/other_user}",
"gists_url": "https://api.github.com/users/joehoover/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joehoover/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joehoover/subscriptions",
"organizations_url": "https://api.github.com/users/joehoover/orgs",
"repos_url": "https://api.github.com/users/joehoover/repos",
"events_url": "https://api.github.com/users/joehoover/events{/privacy}",
"received_events_url": "https://api.github.com/users/joehoover/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,656
| 1,659
| 1,659
|
NONE
| null |
### Feature request
Allow `trace_device` to be passed to `smp.DistributedModel` so that `Trainer` jobs that use SageMaker Model Parallel can support models that exceed a single GPU's memory.
Tagging @philschmid and @patil-suraj, because I think you both are involved in the SageMaker work?
### Motivation
The `Trainer` class provides native support for the SageMaker Model Parallel (smp) library; however, it does not support specifying the device where model tracing is conducted at the beginning of an smp training job. Because the default trace device is GPU, it is not possible to train a model that cannot fit in a single GPU's memory, which is, of course, exactly when you would want to use model parallelism.
This can be fixed by passing [trace_device](https://sagemaker.readthedocs.io/en/v2.20.0/api/training/smd_model_parallel_pytorch.html) to the `smp.DistributedModel` [call](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/trainer.py#L1236) in `Trainer()`.
The value of `trace_device` could be specified in the [same way other smp parameters are specified](https://github.com/huggingface/transformers/blob/acb709d55150501698b5b500ca49683b913d4b3d/src/transformers/training_args.py#L904), which is via a json string of smp parameters, `mp_parameters`.
As noted [elsewhere](https://github.com/huggingface/transformers/issues/14851#issuecomment-1013422175_), `trace_device` is not currently supported in the HF DLC, but it is apparently roadmapped.
Accordingly, my current workaround is to use a SageMaker pytorch estimator and a custom Trainer class with the necessary overrides to ensure that `trace_device` is passed.
It would be nice to be able to use the base trainer class rather than this workaround.
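A sketch of how the value could flow through the existing `mp_parameters` channel (the JSON contents and the default here are illustrative assumptions, not the current `TrainingArguments` code):

```python
import json

# Hypothetical: trace_device arrives in the same JSON string as the
# other smp options parsed by TrainingArguments.
mp_parameters = '{"partitions": 2, "microbatches": 4, "trace_device": "cpu"}'
smp_options = json.loads(mp_parameters)

# smp traces on GPU by default, so fall back to "gpu" when unset.
trace_device = smp_options.pop("trace_device", "gpu")
print(trace_device)
# Later, inside Trainer:
#   model = smp.DistributedModel(model, trace_device=trace_device)
```

Popping the key keeps the remaining options dict compatible with the other smp parameters already consumed downstream.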
### Your contribution
If this seems reasonable, I'd be happy to open a PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17851/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17850
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17850/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17850/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17850/events
|
https://github.com/huggingface/transformers/issues/17850
| 1,282,623,445
|
I_kwDOCUB6oc5Mc0fV
| 17,850
|
ViTForImageClassification
|
{
"login": "lyutovad",
"id": 54598054,
"node_id": "MDQ6VXNlcjU0NTk4MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/54598054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lyutovad",
"html_url": "https://github.com/lyutovad",
"followers_url": "https://api.github.com/users/lyutovad/followers",
"following_url": "https://api.github.com/users/lyutovad/following{/other_user}",
"gists_url": "https://api.github.com/users/lyutovad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lyutovad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lyutovad/subscriptions",
"organizations_url": "https://api.github.com/users/lyutovad/orgs",
"repos_url": "https://api.github.com/users/lyutovad/repos",
"events_url": "https://api.github.com/users/lyutovad/events{/privacy}",
"received_events_url": "https://api.github.com/users/lyutovad/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@lyutovad What's the issue?",
"Please follow the template when opening issues."
] | 1,655
| 1,656
| 1,656
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17850/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17849
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17849/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17849/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17849/events
|
https://github.com/huggingface/transformers/pull/17849
| 1,282,583,371
|
PR_kwDOCUB6oc46QK-_
| 17,849
|
Fix: torch.utils.checkpoint import error.
|
{
"login": "kumapo",
"id": 70637,
"node_id": "MDQ6VXNlcjcwNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumapo",
"html_url": "https://github.com/kumapo",
"followers_url": "https://api.github.com/users/kumapo/followers",
"following_url": "https://api.github.com/users/kumapo/following{/other_user}",
"gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumapo/subscriptions",
"organizations_url": "https://api.github.com/users/kumapo/orgs",
"repos_url": "https://api.github.com/users/kumapo/repos",
"events_url": "https://api.github.com/users/kumapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumapo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
Missing import statements still cause failures when training DeBERTa models with gradient checkpointing.
Fixes #17848
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17849/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17849",
"html_url": "https://github.com/huggingface/transformers/pull/17849",
"diff_url": "https://github.com/huggingface/transformers/pull/17849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17849.patch",
"merged_at": 1656091409000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17848
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17848/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17848/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17848/events
|
https://github.com/huggingface/transformers/issues/17848
| 1,282,581,093
|
I_kwDOCUB6oc5McqJl
| 17,848
|
AttributeError raised when training DeBERTa models with gradient checkpointing
|
{
"login": "kumapo",
"id": 70637,
"node_id": "MDQ6VXNlcjcwNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumapo",
"html_url": "https://github.com/kumapo",
"followers_url": "https://api.github.com/users/kumapo/followers",
"following_url": "https://api.github.com/users/kumapo/following{/other_user}",
"gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumapo/subscriptions",
"organizations_url": "https://api.github.com/users/kumapo/orgs",
"repos_url": "https://api.github.com/users/kumapo/repos",
"events_url": "https://api.github.com/users/kumapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumapo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- deberta models and `Trainer` with `gradient_checkpointing=True`
- `trainer.train()`
- raise `AttributeError: module 'torch.utils' has no attribute 'checkpoint'`
Refs #9617
### Expected behavior
```shell
No error should be raised when training with `gradient_checkpointing=True`.
```
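For context, the `AttributeError` arises because importing a package in Python does not automatically import its submodules, so `torch.utils.checkpoint` must be imported explicitly before it is available as an attribute. Here is a minimal sketch of that mechanism using a stdlib package (illustrative only, not the actual transformers fix):

```python
# Illustrative sketch of Python's submodule import behavior (not the actual
# transformers fix): `import torch` alone does not guarantee that
# `torch.utils.checkpoint` exists as an attribute, just as `import xml`
# does not make `xml.etree` available.
import importlib
import xml  # the `xml` package's __init__ does not import `etree`

before = hasattr(xml, "etree")        # False in a fresh interpreter
importlib.import_module("xml.etree")  # equivalent to `import xml.etree`
after = hasattr(xml, "etree")         # True: the submodule is now bound

print(before, after)
```

The same reasoning applies to `torch.utils.checkpoint`: adding an explicit `import torch.utils.checkpoint` (or `from torch.utils.checkpoint import checkpoint`) makes the attribute access succeed.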
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17848/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17847
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17847/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17847/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17847/events
|
https://github.com/huggingface/transformers/pull/17847
| 1,282,535,390
|
PR_kwDOCUB6oc46QAd8
| 17,847
|
Troubleshooting.mdx Translation Italian
|
{
"login": "F02934",
"id": 56677617,
"node_id": "MDQ6VXNlcjU2Njc3NjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/56677617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/F02934",
"html_url": "https://github.com/F02934",
"followers_url": "https://api.github.com/users/F02934/followers",
"following_url": "https://api.github.com/users/F02934/following{/other_user}",
"gists_url": "https://api.github.com/users/F02934/gists{/gist_id}",
"starred_url": "https://api.github.com/users/F02934/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/F02934/subscriptions",
"organizations_url": "https://api.github.com/users/F02934/orgs",
"repos_url": "https://api.github.com/users/F02934/repos",
"events_url": "https://api.github.com/users/F02934/events{/privacy}",
"received_events_url": "https://api.github.com/users/F02934/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
    "@F02934 Hi, I remade the translation because the previous PR had a problem with the commit history. You can check whether everything is good with the translation and with the toctree",
"Hi @F02934! Sorry for the late reply.\r\n\r\nNo worries! You simply have to edit the _toctree file, removing the empty sections, and adding a title, your _toctree file should look like this:\r\n\r\n``` yaml\r\n- sections:\r\n - local: index\r\n title: 🤗 Transformers\r\n - local: quicktour\r\n title: Tour rapido\r\n - local: installation\r\n title: Installazione\r\n title: Iniziare\r\n- sections:\r\n - local: pipeline_tutorial\r\n title: Pipeline per l'inferenza\r\n - local: autoclass_tutorial\r\n title: Carica istanze pre-allenate con AutoClass\r\n title: Esercitazione\r\n- sections:\r\n - local: troubleshooting\r\n title: Risoluzione dei problemi\r\n title: Guide pratiche\r\n```\r\n\r\nI forgot to add a section and to translate the last title!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
    "Hi @F02934 could you translate *How-to Guides* --> Guide pratiche like in the @mfumanelli example?\r\n\r\n@sgugger @omarespejel "
] | 1,655
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
Translation of Troubleshooting.mdx (English) into Italian and toctree update (#17459)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfumanelli
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17847/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17847",
"html_url": "https://github.com/huggingface/transformers/pull/17847",
"diff_url": "https://github.com/huggingface/transformers/pull/17847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17847.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17846
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17846/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17846/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17846/events
|
https://github.com/huggingface/transformers/pull/17846
| 1,282,497,472
|
PR_kwDOCUB6oc46P4CA
| 17,846
|
Update modeling_cvt.py type hints
|
{
"login": "F02934",
"id": 56677617,
"node_id": "MDQ6VXNlcjU2Njc3NjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/56677617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/F02934",
"html_url": "https://github.com/F02934",
"followers_url": "https://api.github.com/users/F02934/followers",
"following_url": "https://api.github.com/users/F02934/following{/other_user}",
"gists_url": "https://api.github.com/users/F02934/gists{/gist_id}",
"starred_url": "https://api.github.com/users/F02934/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/F02934/subscriptions",
"organizations_url": "https://api.github.com/users/F02934/orgs",
"repos_url": "https://api.github.com/users/F02934/repos",
"events_url": "https://api.github.com/users/F02934/events{/privacy}",
"received_events_url": "https://api.github.com/users/F02934/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
As shown in the Colab notebook, I added the missing type hints for `CvtForImageClassification` and `CvtModel`.
# What does this PR do?
Add missing type hints for CvT (PyTorch) (#16059), following [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing)
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17846/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17846",
"html_url": "https://github.com/huggingface/transformers/pull/17846",
"diff_url": "https://github.com/huggingface/transformers/pull/17846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17846.patch",
"merged_at": 1655996917000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17845
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17845/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17845/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17845/events
|
https://github.com/huggingface/transformers/pull/17845
| 1,282,433,797
|
PR_kwDOCUB6oc46PqEs
| 17,845
|
add MobileNetV2 model
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17845). All of your documentation changes will be reflected on that endpoint.",
"Good to merge @hollance ?",
    "@sgugger It seems to fail some tests, but they don't look like they're from my changes. If you're OK with that test failing, then I think this is ready to be merged.",
"Failure is a flaky test, unrelated to this PR."
] | 1,655
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds MobileNetV2 to the Transformers library.
This includes an image classification head and a basic DeepLabV3+ semantic segmentation head.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17845/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17845/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17845",
"html_url": "https://github.com/huggingface/transformers/pull/17845",
"diff_url": "https://github.com/huggingface/transformers/pull/17845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17845.patch",
"merged_at": 1668405610000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17844
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17844/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17844/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17844/events
|
https://github.com/huggingface/transformers/pull/17844
| 1,282,433,411
|
PR_kwDOCUB6oc46Pp_L
| 17,844
|
Update run_mlm.py
|
{
"login": "Muhtasham",
"id": 20128202,
"node_id": "MDQ6VXNlcjIwMTI4MjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muhtasham",
"html_url": "https://github.com/Muhtasham",
"followers_url": "https://api.github.com/users/Muhtasham/followers",
"following_url": "https://api.github.com/users/Muhtasham/following{/other_user}",
"gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions",
"organizations_url": "https://api.github.com/users/Muhtasham/orgs",
"repos_url": "https://api.github.com/users/Muhtasham/repos",
"events_url": "https://api.github.com/users/Muhtasham/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muhtasham/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17844). All of your documentation changes will be reflected on that endpoint.",
    "Your PR now touches 518 files, which is probably not what you intended. Make sure to use the specific versions of `black` we do by doing `pip install -e . [quality]` in the repo.",
    "> Your PR now touches 518 files, which is not what you intended probably. Make sure to use the specific versions of \`black\` we do by doing \`pip install -e . [quality]\` in the repo.\r\n\r\nSorry, I accidentally requested a review.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,660
| 1,660
|
NONE
| null |
Made the comments consistent with `run_glue.py`.
# What does this PR do?
Improves the comments to make them consistent with `run_glue.py`.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17844/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17844",
"html_url": "https://github.com/huggingface/transformers/pull/17844",
"diff_url": "https://github.com/huggingface/transformers/pull/17844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17844.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17843
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17843/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17843/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17843/events
|
https://github.com/huggingface/transformers/pull/17843
| 1,282,425,039
|
PR_kwDOCUB6oc46PoJe
| 17,843
|
[Flax] Add remat (gradient checkpointing)
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
    "Is there a downside to adding it to all layers?\r\n\r\nIn my case I used it only on transformers blocks (attention + feed forward).",
"> Is there an inconvenient in adding it to all layers?\r\n\r\nBy wrapping `FlaxBertLayer` in a `remat` operation, each Bert layer (attention, intermediate FF, final FF + optional cross-attention layers) has `remat` applied to it:\r\nhttps://github.com/huggingface/transformers/blob/ea8150a5f932ab28efbe2e7a31fee1ca77c289a5/src/transformers/models/bert/modeling_flax_bert.py#L555\r\nWe then use this remat'd layer to construct the Transformer block (layers collection):\r\nhttps://github.com/huggingface/transformers/blob/ea8150a5f932ab28efbe2e7a31fee1ca77c289a5/src/transformers/models/bert/modeling_flax_bert.py#L559-L562\r\nMeaning that each component of the Bert layer is checkpointed, and that **all** Bert layers in the Transformer block (layers collection) are checkpointed.\r\n\r\nWould you like to see `remat` on the embeddings and pooler layers too? Imagine this wouldn't make a huge difference to performance at train time vs just checkpointing the entire Transformer block? ",
"> Would you like to see `remat` on the embeddings and pooler layers too? Imagine this wouldn't make a huge difference to performance at train time vs just checkpointing the entire Transformer block?\r\n\r\nNo actually I thought it was on all layers but the way you did is great!",
"Cool! Once the tests are green, happy to merge it here :-)"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds gradient checkpointing in Flax (_c.f._ #17399). The API currently takes the form of a method:
```python
from transformers import BertConfig, FlaxBertModel
model = FlaxBertModel(BertConfig())
model.enable_gradient_checkpointing()
```
Note: checkpointing has currently only been implemented for FlaxBert. Implementing for all Flax models is a TODO.
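For readers unfamiliar with remat, the underlying idea can be sketched in plain Python (an illustrative toy, not the Flax/JAX implementation): during the forward pass, store only the input of every segment instead of every activation, then recompute a segment's activations from its stored input when the backward pass needs them.

```python
# Illustrative toy of gradient checkpointing / remat (plain Python, not the
# Flax/JAX implementation): trade compute for memory by caching only one
# input per segment and recomputing intermediate activations on demand.
def forward_with_checkpoints(x, layers, segment_size=2):
    """Run `layers` on x, storing one (index, input) pair per segment."""
    checkpoints = []
    for i, layer in enumerate(layers):
        if i % segment_size == 0:
            checkpoints.append((i, x))
        x = layer(x)
    return x, checkpoints

def recompute_segment(start, seg_input, layers, segment_size=2):
    """Recompute one segment's activations from its stored input."""
    activations = []
    x = seg_input
    for layer in layers[start:start + segment_size]:
        x = layer(x)
        activations.append(x)
    return activations

layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3, lambda v: v * v]
out, ckpts = forward_with_checkpoints(0, layers)
print(out, ckpts)                       # 1 [(0, 0), (2, 2)]
print(recompute_segment(2, 2, layers))  # [-1, 1]
```

In the real API, `jax.checkpoint`/`nn.remat` wraps a module so that its activations are rematerialized during the backward pass rather than stored.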
TODO:
- [x] Add checkpointing to `init`
- [x] Add checkpointing to `from_pretrained`
- [x] Add model tests for FlaxBert in `test_modeling_flax_bert`
- [x] Decide on API: checkpointing with a kwarg (`gradient_checkpointing=True`) or a method (`model.gradient_checkpointing_enable()`)?
- [x] Add API functionality for remat policies (c.f. https://github.com/google/jax/blob/636345fd67758c19c5345bee2301df34b6f1c540/jax/_src/ad_checkpoint.py#L44)
- [ ] Copy checkpointing logic to all Flax models
- [ ] Move model tests to `test_modeling_flax_common`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @borisdayma
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17843/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17843",
"html_url": "https://github.com/huggingface/transformers/pull/17843",
"diff_url": "https://github.com/huggingface/transformers/pull/17843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17843.patch",
"merged_at": 1656696834000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17842
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17842/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17842/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17842/events
|
https://github.com/huggingface/transformers/pull/17842
| 1,282,421,369
|
PR_kwDOCUB6oc46PnV2
| 17,842
|
Fix FlaxBigBirdEmbeddings
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I will merge this PR today."
] | 1,655
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
Currently, `FlaxBigBirdEmbeddings` applies layer norm before dropout, while `BigBirdEmbeddings` and Google's original `BigBird`
apply dropout first. This PR fixes this inconsistency.
Flax
(layernorm --> dropout)
https://github.com/huggingface/transformers/blob/6f29029b05df221c0c37fd2e87aeadc9cb6ce5d7/src/transformers/models/big_bird/modeling_flax_big_bird.py#L232-L233
PyTorch
(dropout immediately after embedding)
https://github.com/huggingface/transformers/blob/6f29029b05df221c0c37fd2e87aeadc9cb6ce5d7/src/transformers/models/big_bird/modeling_big_bird.py#L311-L312
Google
(dropout immediately after embedding)
https://github.com/google-research/bigbird/blob/5f2a5aa7fbab23e32e0e0b41c5f0192f0c023e05/bigbird/core/utils.py#L565-L566
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17842/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17842",
"html_url": "https://github.com/huggingface/transformers/pull/17842",
"diff_url": "https://github.com/huggingface/transformers/pull/17842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17842.patch",
"merged_at": 1656686762000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17841
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17841/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17841/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17841/events
|
https://github.com/huggingface/transformers/pull/17841
| 1,282,416,650
|
PR_kwDOCUB6oc46PmTv
| 17,841
|
Fix broken test for models with batchnorm
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Related to #17427, cc @amyeroberts .",
"_The documentation is not available anymore as the PR was closed or merged._",
"🧠 ",
"Give @amyeroberts the brain emoji for that one - she identified the whole problem, I just fixed the test!"
] | 1,655
| 1,656
| 1,655
|
MEMBER
| null |
One of the Keras tests assumed that fitting a model for one iteration with a learning rate of zero would not change any weights. This is not true for `BatchNorm`, which updates its running means and variances regardless! As a result, the model after the iteration had slightly different outputs, which caused the test to be very flaky. We now reinitialize the model after the single training epoch to make sure this doesn't happen.
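The mechanism can be sketched without Keras. A toy NumPy illustration of the running-statistics update (the `momentum` value is made up for the example; this is not the actual test code):

```python
import numpy as np

momentum = 0.99          # illustrative value
running_mean = np.zeros(3)  # freshly initialized layer

batch = np.array([[1.0, 2.0, 3.0],
                  [3.0, 4.0, 5.0]])
batch_mean = batch.mean(axis=0)  # [2., 3., 4.]

# One "training step" with learning rate 0: no trainable weight moves,
# but the running mean is still updated from the batch statistics.
running_mean = momentum * running_mean + (1 - momentum) * batch_mean
print(running_mean)  # [0.02 0.03 0.04] -- no longer all zeros
```

This is why a single `fit()` call at learning rate zero still changes the outputs of a model containing batch norm in inference mode.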
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17841/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17841/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17841",
"html_url": "https://github.com/huggingface/transformers/pull/17841",
"diff_url": "https://github.com/huggingface/transformers/pull/17841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17841.patch",
"merged_at": 1655996394000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17840
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17840/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17840/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17840/events
|
https://github.com/huggingface/transformers/pull/17840
| 1,282,237,396
|
PR_kwDOCUB6oc46O-53
| 17,840
|
Fix an error message in `BigBird`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
Fix an error message in `BigBird` model file, so we can see the actual/correct difference.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17840/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17840",
"html_url": "https://github.com/huggingface/transformers/pull/17840",
"diff_url": "https://github.com/huggingface/transformers/pull/17840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17840.patch",
"merged_at": 1655988233000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17839
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17839/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17839/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17839/events
|
https://github.com/huggingface/transformers/pull/17839
| 1,282,224,530
|
PR_kwDOCUB6oc46O8CL
| 17,839
|
CLI: handle multimodal inputs
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,656
| 1,656
|
MEMBER
| null |
# What does this PR do?
Adds support for multimodal inputs (for models like CLIP), and adds the special input case for Wav2Vec2 (different audio input name).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17839/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17839",
"html_url": "https://github.com/huggingface/transformers/pull/17839",
"diff_url": "https://github.com/huggingface/transformers/pull/17839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17839.patch",
"merged_at": 1656170231000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17838
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17838/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17838/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17838/events
|
https://github.com/huggingface/transformers/issues/17838
| 1,282,189,702
|
I_kwDOCUB6oc5MbKmG
| 17,838
|
Get different weights from model.get_input_embeddings()
|
{
"login": "heya5",
"id": 27731754,
"node_id": "MDQ6VXNlcjI3NzMxNzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/27731754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heya5",
"html_url": "https://github.com/heya5",
"followers_url": "https://api.github.com/users/heya5/followers",
"following_url": "https://api.github.com/users/heya5/following{/other_user}",
"gists_url": "https://api.github.com/users/heya5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heya5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heya5/subscriptions",
"organizations_url": "https://api.github.com/users/heya5/orgs",
"repos_url": "https://api.github.com/users/heya5/repos",
"events_url": "https://api.github.com/users/heya5/events{/privacy}",
"received_events_url": "https://api.github.com/users/heya5/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Sorry, I made a mistake. I got answer from the documents.\r\n\r\n> config (:class:`~transformers.GPT2Config`): Model configuration class with all the parameters of the model.\r\n Initializing with a config file does not load the weights associated with the model, only the\r\n configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model\r\n weights.\r\n\r\nI need use `GPT2LMHeadModel.from_pretrained(\"gpt2\")` rather than `GPT2LMHeadModel(config=config)`"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
### System Info
```shell
@patil-suraj, @patrickvonplaten
- `transformers` version: 4.10.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.3
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import GPT2Config, GPT2LMHeadModel
config = GPT2Config.from_pretrained("gpt2")
model = GPT2LMHeadModel(config=config)
# print (model1.get_input_embeddings().weight.shape)
arr = model.get_input_embeddings().weight.detach().numpy()
print (arr)
print (arr.shape)
```
<img width="573" alt="image" src="https://user-images.githubusercontent.com/27731754/175279258-f0551493-2c42-4dbd-a17f-257d8298ae7c.png">
### Expected behavior
```shell
I think `model.get_input_embeddings()` should give the same weights every time it is called
```
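For context, the behavior reported here is expected: constructing from a config draws fresh random weights, while loading a checkpoint is deterministic. A toy analogy (plain Python, not `transformers` code; `init_from_config` is a hypothetical stand-in):

```python
import random

def init_from_config(seed):
    # each construction draws "weights" from a random generator
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

# two independent initializations (different seeds) give different weights
print(init_from_config(1) == init_from_config(2))  # False

# "loading a checkpoint" is reproducible: same source, same weights
print(init_from_config(0) == init_from_config(0))  # True
```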
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17838/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17837
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17837/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17837/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17837/events
|
https://github.com/huggingface/transformers/pull/17837
| 1,282,122,127
|
PR_kwDOCUB6oc46OlUo
| 17,837
|
BLOOM - Fix mask creation by default
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This happens if user decides to run `model(input_ids=some_tensor)`, which originally happened during prototyping (in a notebook).\r\n\r\nIt is indeed a niche case;\r\nHowever, the fact that mask _is_ optional can hint other users to the idea that they can keep it None - and it would be more intuitive if having mask=None would be equivalent to passing all ones, like in GPT2Model for example."
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
- Fix a niche case where the attention mask is not fed to the forward function
cc @justheuristic
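A minimal sketch of the intended default (a hypothetical helper, not the actual BLOOM code): when `attention_mask` is `None`, behave as if every token is visible, i.e. an all-ones mask, as `GPT2Model` does:

```python
import numpy as np

def build_default_attention_mask(input_ids, attention_mask=None):
    # Hypothetical helper: mask=None is treated as "attend everywhere".
    if attention_mask is None:
        attention_mask = np.ones_like(input_ids)
    return attention_mask

ids = np.array([[5, 7, 9, 2]])
print(build_default_attention_mask(ids))           # [[1 1 1 1]]
print(build_default_attention_mask(ids, ids * 0))  # an explicit mask is kept as-is
```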
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17837/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17837",
"html_url": "https://github.com/huggingface/transformers/pull/17837",
"diff_url": "https://github.com/huggingface/transformers/pull/17837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17837.patch",
"merged_at": 1656331698000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17836
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17836/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17836/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17836/events
|
https://github.com/huggingface/transformers/pull/17836
| 1,282,068,518
|
PR_kwDOCUB6oc46OZsj
| 17,836
|
replace `Python-base tokenizer` by `non-fast tokenizer` in error message
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
As one user rightly pointed out in issue #17809, when a user receives the error `"tokens() is not available when using Python-based tokenizers"`, it is not obvious that a "Python-based tokenizer" refers to a tokenizer class without the term `Fast` at the end.
I therefore propose changing the error messages that use this term so they refer to "non-fast" tokenizers instead, which is more easily understood by users.
Fixes #17809
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Would love to have the approval of @sgugger , @LysandreJik , @patrickvonplaten or @patil-suraj :hugs:
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17836/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17836",
"html_url": "https://github.com/huggingface/transformers/pull/17836",
"diff_url": "https://github.com/huggingface/transformers/pull/17836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17836.patch",
"merged_at": 1655987988000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17835
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17835/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17835/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17835/events
|
https://github.com/huggingface/transformers/issues/17835
| 1,281,983,737
|
I_kwDOCUB6oc5MaYT5
| 17,835
|
Batch mismatch in the given course example
|
{
"login": "ahadda5",
"id": 21275079,
"node_id": "MDQ6VXNlcjIxMjc1MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/21275079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadda5",
"html_url": "https://github.com/ahadda5",
"followers_url": "https://api.github.com/users/ahadda5/followers",
"following_url": "https://api.github.com/users/ahadda5/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadda5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadda5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadda5/subscriptions",
"organizations_url": "https://api.github.com/users/ahadda5/orgs",
"repos_url": "https://api.github.com/users/ahadda5/repos",
"events_url": "https://api.github.com/users/ahadda5/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadda5/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"restarted jetbrains and it worked! hmmmmm. \r\ncan i delete this issue. "
] | 1,655
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
torch 1.11.0
cuda 0.13
transformers 4.20.1
linux box , ubuntu 20.04
```
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Go to The example provided. [Course fine-tune ](https://huggingface.co/course/chapter3/3?fw=pt)
2. Open a python file and dump all the cell provided as is. (no changes). UP To the `trainer.train()`
3. error
***** Running training *****
Num examples = 3668
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 1377
0%| | 0/1377 [00:00<?, ?it/s]Traceback (most recent call last):
File "~/.conda/envs/hugg/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-10d9618241c7>", line 55, in <module>
trainer.train()
File " ~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 1413, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File " ~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 1651, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 2345, in training_step
loss = self.compute_loss(model, inputs)
File "~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/trainer.py", line 2377, in compute_loss
outputs = model(**inputs)
File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "~/.conda/envs/hugg/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1775, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 1165, in forward
label_smoothing=self.label_smoothing)
File "~/.conda/envs/hugg/lib/python3.7/site-packages/torch/nn/functional.py", line 2996, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (624) to match target batch_size (8).
### Expected behavior
```shell
Training the model. No errors.
```
### My comments
The local environment matches Colab in terms of package versions. I used Python 3.7, as the provided Colab does, and the machine has one GPU (CUDA 11.3).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17835/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17834
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17834/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17834/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17834/events
|
https://github.com/huggingface/transformers/issues/17834
| 1,281,942,454
|
I_kwDOCUB6oc5MaOO2
| 17,834
|
Issue in wav2vec2ForPretraining
|
{
"login": "annihi1ation",
"id": 40926532,
"node_id": "MDQ6VXNlcjQwOTI2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/40926532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/annihi1ation",
"html_url": "https://github.com/annihi1ation",
"followers_url": "https://api.github.com/users/annihi1ation/followers",
"following_url": "https://api.github.com/users/annihi1ation/following{/other_user}",
"gists_url": "https://api.github.com/users/annihi1ation/gists{/gist_id}",
"starred_url": "https://api.github.com/users/annihi1ation/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annihi1ation/subscriptions",
"organizations_url": "https://api.github.com/users/annihi1ation/orgs",
"repos_url": "https://api.github.com/users/annihi1ation/repos",
"events_url": "https://api.github.com/users/annihi1ation/events{/privacy}",
"received_events_url": "https://api.github.com/users/annihi1ation/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @annihi1ation,\r\n\r\nWhat `transformers` version are you using?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
In the example mentioned in the doc, when I print the loss, it is "None"
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
from datasets import load_dataset
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# compute masked indices
batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)
mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long)
with torch.no_grad():
    outputs = model(input_values, mask_time_indices=mask_time_indices)
# compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)
# show that cosine similarity is much higher than random
cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5
# for contrastive loss training model should be put into train mode
model = model.train()
loss = model(input_values, mask_time_indices=mask_time_indices).loss
### Expected behavior
```shell
Modify the example doc
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17834/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17833
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17833/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17833/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17833/events
|
https://github.com/huggingface/transformers/issues/17833
| 1,281,838,529
|
I_kwDOCUB6oc5MZ03B
| 17,833
|
LayoutLMv3Model output shape is different
|
{
"login": "pocca2048",
"id": 10275397,
"node_id": "MDQ6VXNlcjEwMjc1Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/10275397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pocca2048",
"html_url": "https://github.com/pocca2048",
"followers_url": "https://api.github.com/users/pocca2048/followers",
"following_url": "https://api.github.com/users/pocca2048/following{/other_user}",
"gists_url": "https://api.github.com/users/pocca2048/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pocca2048/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pocca2048/subscriptions",
"organizations_url": "https://api.github.com/users/pocca2048/orgs",
"repos_url": "https://api.github.com/users/pocca2048/repos",
"events_url": "https://api.github.com/users/pocca2048/events{/privacy}",
"received_events_url": "https://api.github.com/users/pocca2048/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThe sequence length of the last hidden states of LayoutLMv3 equals the number of text tokens + image tokens.\r\n\r\nIf you have a text of 208 tokens (as is the case in the code example above), LayoutLMv3 also appends 197 image tokens to it. There are 197 image tokens because `LayoutLMv3Processor` resizes images to 224x224, which, at a patch resolution of 16x16 gives (224/16)**2 = 196 tokens, and one adds one for the CLS token. \r\n\r\nThis is also what is done in the original implementation. \r\n\r\nThe docstrings of the last hidden states can be improved, definitely. Feel free to open a PR regarding this.",
"Oops my bad. I confirmed that both have the same shapes.\r\nI thought two would have different shape because of these:\r\n- transformers\r\nhttps://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py#L1041-L1043\r\n\r\n- original\r\nhttps://github.com/microsoft/unilm/blob/4301ebe1a832b7bcb33be0ab3a460306d467a912/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py#L1070-L1073\r\n```python\r\n sequence_output = outputs[0]\r\n sequence_output = self.dropout(sequence_output)\r\n logits = self.classifier(sequence_output)\r\n```\r\n\r\nI will try to work on updating the documentation. Thank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.1
- Platform: Linux-4.4.0-62-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoProcessor, AutoModelForTokenClassification, AutoModel
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModel.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
word_labels = example["ner_tags"]
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
encoding.input_ids.shape, outputs.last_hidden_state.shape
```
outputs
```
(torch.Size([1, 208]), torch.Size([1, 405, 768]))
```
### Expected behavior
```
(torch.Size([1, 208]), torch.Size([1, 208, 768]))
```
Hi! Thank you very much for contributing the LayoutLMv3 model to Hugging Face.
While using the model, I noticed that it differs from the documented specification in one respect.
https://github.com/huggingface/transformers/blob/v4.20.1/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py#L1043
https://github.com/microsoft/unilm/blob/master/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py#L1070
This Hugging Face implementation produces a different output shape than the original implementation.
In the [documentation](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/layoutlmv3#transformers.LayoutLMv3Model), it says `last_hidden_state` is a `torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, but it is not: the original implementation returns that shape, while the Hugging Face implementation does not.
The returned sequence length also includes the visual patch tokens, not just the text tokens.
Presumably because of that, training on the FUNSD dataset produces different results.
In summary,
- `LayoutLMv3Model` outputs a different shape (sequence length) than the one written in the documentation,
- and that differs from the original implementation.
Thank you.
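For reference, the arithmetic in the report checks out: the extra 197 positions are the visual patch tokens LayoutLMv3 appends after the text tokens (196 image patches plus one CLS-like token, so 208 + 197 = 405). A minimal, framework-free sketch (plain nested lists standing in for tensors; the helper name is hypothetical) of slicing the text-token states back out, assuming the text tokens come first in the sequence:

```python
def text_hidden_states(last_hidden_state, num_text_tokens):
    """Keep only the first num_text_tokens positions of each sequence.

    Assumes the model concatenates [text tokens, visual tokens] along the
    sequence dimension, so the text states occupy the leading positions.
    """
    return [seq[:num_text_tokens] for seq in last_hidden_state]

# Stand-in for outputs.last_hidden_state with shape (1, 405, 768)
batch = [[[0.0] * 768 for _ in range(405)]]
trimmed = text_hidden_states(batch, 208)  # 208 = text length from input_ids
print(len(trimmed[0]), len(trimmed[0][0]))  # 208 768
```

With real model outputs the same idea is a tensor slice over the sequence dimension.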
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17833/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17832
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17832/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17832/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17832/events
|
https://github.com/huggingface/transformers/issues/17832
| 1,281,770,782
|
I_kwDOCUB6oc5MZkUe
| 17,832
|
Multi-modal VisualBERT can be used for classification task?
|
{
"login": "karndeepsingh",
"id": 49562460,
"node_id": "MDQ6VXNlcjQ5NTYyNDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/49562460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karndeepsingh",
"html_url": "https://github.com/karndeepsingh",
"followers_url": "https://api.github.com/users/karndeepsingh/followers",
"following_url": "https://api.github.com/users/karndeepsingh/following{/other_user}",
"gists_url": "https://api.github.com/users/karndeepsingh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karndeepsingh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karndeepsingh/subscriptions",
"organizations_url": "https://api.github.com/users/karndeepsingh/orgs",
"repos_url": "https://api.github.com/users/karndeepsingh/repos",
"events_url": "https://api.github.com/users/karndeepsingh/events{/privacy}",
"received_events_url": "https://api.github.com/users/karndeepsingh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nVisualBERT can probably be used for this, but I'd recommend checking out [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt), which is a very simple extension of ViT (or BERT) for multimodal tasks. The benefit of ViLT over VisualBERT is that one doesn't need to prepare image embeddings, as the ViLT model creates them by itself internally. You can just feed `input_ids` and `pixel_values` to it.\r\n\r\nTo use ViLT for multimodal classification, you can create a class like so:\r\n\r\n```\r\nfrom transformers import ViltPreTrainedModel, ViltModel\r\nfrom transformers.modeling_outputs import SequenceClassifierOutput\r\nfrom torch import nn\r\n\r\nclass MultimodalClassifier(ViltPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.config = config\r\n self.vilt = ViltModel(config)\r\n self.classifier = nn.Linear(config.hidden_size, config.num_labels)\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n pixel_values=None,\r\n pixel_mask=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n image_embeds=None,\r\n labels=None,\r\n output_attentions=None,\r\n output_hidden_states=None,\r\n return_dict=True,\r\n ):\r\n outputs = self.vilt(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n pixel_values=pixel_values,\r\n pixel_mask=pixel_mask,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n image_embeds=image_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n pooler_output = outputs.pooler_output if return_dict else outputs[1]\r\n\r\n logits = self.classifier(pooler_output)\r\n\r\n loss = None\r\n if labels is not None:\r\n loss_fct = nn.CrossEntropyLoss()\r\n loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))\r\n\r\n if not return_dict:\r\n output = (logits,) + outputs[2:]\r\n return ((loss,) + output) if 
loss is not None else output\r\n\r\n return SequenceClassifierOutput(\r\n loss=loss,\r\n logits=logits,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n )\r\n```\r\nCreating this model (with a pre-trained base) can then be done as follows:\r\n\r\n```\r\nmodel = MultimodalClassifier.from_pretrained(\"dandelin/vilt-b32-mlm\", num_labels=10)\r\n```\r\nDoing a forward pass on a batch of image+text pairs can be done as follows:\r\n\r\n```\r\nfrom transformers import ViltProcessor\r\nimport torch\r\nimport requests\r\nfrom PIL import Image\r\n\r\nprocessor = ViltProcessor.from_pretrained(\"dandelin/vilt-b32-mlm\")\r\n\r\nurl = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ntext = \"this is an image of two cats\"\r\n\r\ninputs = processor(image, text, return_tensors=\"pt\")\r\n\r\noutputs = model(**inputs, labels=torch.tensor([1]))\r\n```",
"> Hi,\r\n> \r\n> VisualBERT can probably be used for this, but I'd recommend checking out [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt), which is a very simple extension of ViT (or BERT) for multimodal tasks. The benefit of ViLT over VisualBERT is that one doesn't need to prepare image embeddings, as the ViLT model creates them by itself internally. You can just feed `input_ids` and `pixel_values` to it.\r\n> \r\n> To use ViLT for multimodal classification, you can create a class like so:\r\n> \r\n> ```\r\n> from transformers import ViltPreTrainedModel, ViltModel\r\n> from transformers.modeling_outputs import SequenceClassifierOutput\r\n> from torch import nn\r\n> \r\n> class MultimodalClassifier(ViltPreTrainedModel):\r\n> def __init__(self, config):\r\n> super().__init__(config)\r\n> self.config = config\r\n> self.vilt = ViltModel(config)\r\n> self.classifier = nn.Linear(config.hidden_size, config.num_labels)\r\n> \r\n> def forward(\r\n> self,\r\n> input_ids=None,\r\n> attention_mask=None,\r\n> token_type_ids=None,\r\n> pixel_values=None,\r\n> pixel_mask=None,\r\n> head_mask=None,\r\n> inputs_embeds=None,\r\n> image_embeds=None,\r\n> labels=None,\r\n> output_attentions=None,\r\n> output_hidden_states=None,\r\n> return_dict=True,\r\n> ):\r\n> outputs = self.vilt(\r\n> input_ids,\r\n> attention_mask=attention_mask,\r\n> token_type_ids=token_type_ids,\r\n> pixel_values=pixel_values,\r\n> pixel_mask=pixel_mask,\r\n> head_mask=head_mask,\r\n> inputs_embeds=inputs_embeds,\r\n> image_embeds=image_embeds,\r\n> output_attentions=output_attentions,\r\n> output_hidden_states=output_hidden_states,\r\n> return_dict=return_dict,\r\n> )\r\n> \r\n> pooler_output = outputs.pooler_output if return_dict else outputs[1]\r\n> \r\n> logits = self.classifier(pooler_output)\r\n> \r\n> loss = None\r\n> if labels is not None:\r\n> loss_fct = nn.CrossEntropyLoss()\r\n> loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))\r\n> \r\n> if not 
return_dict:\r\n> output = (logits,) + outputs[2:]\r\n> return ((loss,) + output) if loss is not None else output\r\n> \r\n> return SequenceClassifierOutput(\r\n> loss=loss,\r\n> logits=logits,\r\n> hidden_states=outputs.hidden_states,\r\n> attentions=outputs.attentions,\r\n> )\r\n> ```\r\n> \r\n> Creating this model (with a pre-trained base) can then be done as follows:\r\n> \r\n> ```\r\n> model = MultimodalClassifier.from_pretrained(\"dandelin/vilt-b32-mlm\", num_labels=10)\r\n> ```\r\n> \r\n> Doing a forward pass on a batch of image+text pairs can be done as follows:\r\n> \r\n> ```\r\n> from transformers import ViltProcessor\r\n> import torch\r\n> import requests\r\n> from PIL import Image\r\n> \r\n> processor = ViltProcessor.from_pretrained(\"dandelin/vilt-b32-mlm\")\r\n> \r\n> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\n> image = Image.open(requests.get(url, stream=True).raw)\r\n> text = \"this is an image of two cats\"\r\n> \r\n> inputs = processor(image, text, return_tensors=\"pt\")\r\n> \r\n> outputs = model(**inputs, labels=torch.tensor([1]))\r\n> ```\r\n\r\n@NielsRogge Thanks for your reply. I will definitely look into ViLT for the IMAGE+ TEXT classification tasks. There are following two things I want to highlight if you can suggest me:\r\n1. I have text in the Spanish language, How I can get the Bert model pre-trained in the Spanish language incorporated here in ViLT implementation?\r\n2. While classifying, i just don't want to classify with single label. Classification is more of an Multi-Label. How I can incorporate this multi-label output in ViLT?\r\n\r\nYour suggestion on these two will help me to understand the things properly. \r\nThanks again. Waiting for your reply.",
"> I have text in the Spanish language, How I can get the Bert model pre-trained in the Spanish language incorporated here in ViLT implementation?\r\n\r\nViLT was pre-trained on English text only, unfortunately. In that case, a better alternative might be to just forward the text through a BERT-like model pre-trained on Spanish text (could be multilingual), forward the image through a ViT-like model, and simply concatenate the hidden states of both modalities, which are then fed to a classifier.\r\n\r\n> While classifying, i just don't want to classify with single label. Classification is more of an Multi-Label. How I can incorporate this multi-label output in ViLT?\r\n\r\nMulti-label classification requires to use the `BCEWithLogitsLoss` as seen [here](https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/models/bert/modeling_bert.py#L1592) for example. The labels need to be a tensor of shape (batch_size, num_labels), containing the one-hot encoded labels for a batch.",
"> > I have text in the Spanish language, How I can get the Bert model pre-trained in the Spanish language incorporated here in ViLT implementation?\r\n> \r\n> ViLT was pre-trained on English text only, unfortunately. In that case, a better alternative might be to just forward the text through a BERT-like model pre-trained on Spanish text (could be multilingual), forward the image through a ViT-like model, and simply concatenate the hidden states of both modalities, which are then fed to a classifier.\r\n> \r\n> > While classifying, i just don't want to classify with single label. Classification is more of an Multi-Label. How I can incorporate this multi-label output in ViLT?\r\n> \r\n> Multi-label classification requires to use the `BCEWithLogitsLoss` as seen [here](https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/models/bert/modeling_bert.py#L1592) for example. The labels need to be a tensor of shape (batch_size, num_labels), containing the one-hot encoded labels for a batch.\r\n\r\nThanks @NielsRogge for replying.\r\nCan I pass pre-trained tokenizer (pretrained on spanish language) to ViLT processor ?\r\n\r\n\r\nAlso, which model in ViLT shall i use to extract image features in combination with pretrained spanish bert model?\r\n\r\nplease advise.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
Hi,
I have images and descriptions for products and want to train a multi-modal model using image and text embeddings. I just came across the VisualBERT model and was wondering whether we can use VisualBERT for a classification task taking an image and text as input.
Also, could any other multi-modal algorithm be recommended, apart from VisualBERT, for training an image-and-text classifier?
thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17832/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17831
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17831/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17831/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17831/events
|
https://github.com/huggingface/transformers/issues/17831
| 1,281,245,901
|
I_kwDOCUB6oc5MXkLN
| 17,831
|
DisjunctiveConstraint fails in corner case
|
{
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Interesting edge case! @cwkeam has you encountered this before?",
"Hey @boy2000-007man,\r\n\r\nSorry to reply so late here. I'm a bit hesitant to add so much new code. Could you maybe show a case with input strings and generate and how the current implementation fails? E.g. above I see with some abstract numbers how it fails, but could you maybe also show how the current `generate(...)` method fails for an edge case?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The current **trie-only** implementation fails to handle the second corner case introduced in `Figure 1b` of [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://aclanthology.org/N19-1090/),
where the prefix of one constraint is not a prefix of another constraint but a subsequence of it.
A minimal code snippet to reproduce:
```python3
>>> import transformers
>>> c = transformers.ConstraintListState([transformers.DisjunctiveConstraint(
...     [[1, 2, 3, 4],
...      [2, 3, 5],
...      [3, 6],
...      [7]]
... )])
>>> c.reset([1, 2, 3, 5])
>>> print(c.completed)
False
>>> c.reset([1, 2, 3, 6])
>>> print(c.completed)
False
>>> c.reset([1, 2, 3, 7])
>>> print(c.completed)
False
>>> c.reset([1, 2, 3, 4])
>>> print(c.completed)
True
```
### Expected behavior
```shell
all print statements should output True instead.
The [`AC automaton`](https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm) is the desired algorithm to supersede Trie here.
I can prepare a PR to do the upgrade and fix if necessary.
```
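For context, the difference can be demonstrated outside of `transformers`: a plain trie only recognizes a constraint that begins at the first generated token, while an Aho-Corasick automaton also follows failure (suffix) links, so consuming `1 2 3 5` completes the constraint `[2, 3, 5]`. A minimal pure-Python sketch of the automaton-based check (not the library's implementation):

```python
from collections import deque

def build_automaton(patterns):
    # Aho-Corasick: trie nodes plus failure links pointing to the longest
    # proper suffix that is also a trie prefix; "done" marks a match.
    nodes = [{"next": {}, "fail": 0, "done": False}]
    for pattern in patterns:
        cur = 0
        for tok in pattern:
            if tok not in nodes[cur]["next"]:
                nodes[cur]["next"][tok] = len(nodes)
                nodes.append({"next": {}, "fail": 0, "done": False})
            cur = nodes[cur]["next"][tok]
        nodes[cur]["done"] = True
    queue = deque(nodes[0]["next"].values())  # depth-1 nodes keep fail = 0
    while queue:
        cur = queue.popleft()
        for tok, child in nodes[cur]["next"].items():
            f = nodes[cur]["fail"]
            while f and tok not in nodes[f]["next"]:
                f = nodes[f]["fail"]
            nodes[child]["fail"] = nodes[f]["next"].get(tok, 0)
            nodes[child]["done"] |= nodes[nodes[child]["fail"]]["done"]
            queue.append(child)
    return nodes

def completed(nodes, tokens):
    # Returns True as soon as any pattern occurs as a substring of tokens.
    state = 0
    for tok in tokens:
        while state and tok not in nodes[state]["next"]:
            state = nodes[state]["fail"]
        state = nodes[state]["next"].get(tok, 0)
        if nodes[state]["done"]:
            return True
    return False

auto = build_automaton([[1, 2, 3, 4], [2, 3, 5], [3, 6], [7]])
for seq in ([1, 2, 3, 5], [1, 2, 3, 6], [1, 2, 3, 7], [1, 2, 3, 4]):
    print(seq, completed(auto, seq))  # all four print True
```

This matches the expected behavior above: every one of the four sequences completes some constraint once suffix links are taken into account.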
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17831/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17831/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17830
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17830/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17830/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17830/events
|
https://github.com/huggingface/transformers/issues/17830
| 1,281,204,728
|
I_kwDOCUB6oc5MXaH4
| 17,830
|
hf_BigBird failing on torchdynamo
|
{
"login": "shingjan",
"id": 11846349,
"node_id": "MDQ6VXNlcjExODQ2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11846349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shingjan",
"html_url": "https://github.com/shingjan",
"followers_url": "https://api.github.com/users/shingjan/followers",
"following_url": "https://api.github.com/users/shingjan/following{/other_user}",
"gists_url": "https://api.github.com/users/shingjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shingjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shingjan/subscriptions",
"organizations_url": "https://api.github.com/users/shingjan/orgs",
"repos_url": "https://api.github.com/users/shingjan/repos",
"events_url": "https://api.github.com/users/shingjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shingjan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @shingjan \r\n\r\nThe error message is very strange as it says `they are params: (1, 12) vs. indices: (1, 12)`, so it should be identical.\r\nSo far I am not able to reproduce as the installation of `benchmark` and `torchdynamo` gives several errors.\r\n\r\nCould you try to identity the input that is passed to the model in `modeling_big_bird.py` when `torchbench.py` is running that causes the issue?\r\n\r\nAlso, you might try to check with the latest stable `transformers` + `PyTorch` version. Thanks!\r\n",
"Hi @ydshieh Thanks for your prompt reply! I am using pytorch nightly, version `1.13.0.dev20220609`, as it is a requirement for `torchdynamo` and `torchbench` so I can't really fall back to pytorch stable like 1.11.0. This error seems odd to me as well since those two indices looks identical. \r\nThis is the setup I have for repro: https://github.com/pytorch/torchdynamo#minimal-developer-setup\r\nI will dig in and see if there is more I can provide you with about a repro.",
"The two values seems identical because there was a bug in the error message, see #17840. You can re-run your code to see what is the actual values.\r\n\r\nIf you can only use pytorch nightly, could you also check what's your `torchvision`, `torchaudio` and `torchtext` version?\r\nAlso, for these (torch), did you install the version with CUDA, or the version with CPU only?\r\n",
"```\r\ntorchtext 0.14.0.dev20220609 py38 pytorch-nightly\r\ntorchvision 0.14.0.dev20220609 py38_cu113 pytorch-nightly\r\ntorchaudio 0.13.0.dev20220609 py38_cu113 pytorch-nightly\r\n```\r\nThe above are the specs. My installation does include cuda 11.3. This model is supposed to be compiled on cpu/llvm only if that helps.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.12.1
- Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyTorch version (GPU?): 1.13.0.dev20220609 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
./torchbench.py --only hf_BigBird --speed-ts
### Expected behavior
```shell
This model should run just fine. But right now I am seeing:
File "/home/xx/anaconda3/envs/torchdynamo/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 607, in bigbird_block_sparse_attention
gathered_key = self.torch_gather_b2(blocked_key_matrix, rand_attn)
File "/home/xx/anaconda3/envs/torchdynamo/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 969, in torch_gather_b2
raise ValueError(
File "/home/xx/anaconda3/envs/torchdynamo/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 964, in torch_gather_b2
@staticmethod
ValueError: Make sure that the first two dimensions of params and indices are identical, but they are params: (1, 12) vs. indices: (1, 12)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17830/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17829
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17829/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17829/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17829/events
|
https://github.com/huggingface/transformers/issues/17829
| 1,280,866,085
|
I_kwDOCUB6oc5MWHcl
| 17,829
|
RNG states in checkpoint corrupted
|
{
"login": "jglaser",
"id": 1899768,
"node_id": "MDQ6VXNlcjE4OTk3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1899768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jglaser",
"html_url": "https://github.com/jglaser",
"followers_url": "https://api.github.com/users/jglaser/followers",
"following_url": "https://api.github.com/users/jglaser/following{/other_user}",
"gists_url": "https://api.github.com/users/jglaser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jglaser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jglaser/subscriptions",
"organizations_url": "https://api.github.com/users/jglaser/orgs",
"repos_url": "https://api.github.com/users/jglaser/repos",
"events_url": "https://api.github.com/users/jglaser/events{/privacy}",
"received_events_url": "https://api.github.com/users/jglaser/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
### System Info
```shell
transformers-cli env
WARNING:tensorflow:From /autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2022-06-22 16:37:33.424936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /device:GPU:0 with 14042 MB memory: -> device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0035:03:00.0, compute capability: 7.0
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.19.2
- Platform: Linux-4.18.0-193.46.1.el8_2.ppc64le-ppc64le-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In distributed training, I cannot restart from a checkpoint due to a corrupted RNG state archive.
```
File "/gpfs/alpine/bif136/world-shared/contact_pred_pair_update/train/../train.py", line 447, in <module>
main()
File "/gpfs/alpine/bif136/world-shared/contact_pred_pair_update/train/../train.py", line 434, in main
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/trainer.py", line 1317, in train
return inner_training_loop(
File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/trainer.py", line 1525, in _inner_training_loop
self._load_rng_state(resume_from_checkpoint)
File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/transformers/trainer.py", line 1826, in _load_rng_state
checkpoint_rng_state = torch.load(rng_file)
File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/torch/serialization.py", line 600, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/autofs/nccs-svm1_proj/bif136/summit-env/lib/python3.9/site-packages/torch/serialization.py", line 242, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: invalid header or archive is corrupted
```
I am training with 96 ranks (6 local ranks/node), and it looks like the `rng_state_1.pth` zip files are named by local rank but written by **every rank**. This would explain a write conflict on a shared filesystem (GPFS in this case) when multiple ranks write to the same path, resulting in corrupted data. The RNG files should either
```
- only be created by the first node
- or created and be named differently by all `torch.distributed` ranks
```
This is the problematic line
https://github.com/huggingface/transformers/blob/df8e6804c004903753d3e635d85f32694e3d2c39/src/transformers/trainer.py#L2074
### Expected behavior
```shell
No write conflict, restarting from checkpoint works as advertised
For now, removing the `rng_state_?.pth` files manually from the checkpoint seems to be the only solution
```
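A quick illustration of the collision: with 96 ranks at 6 local ranks per node, naming the file after the local rank yields only 6 distinct filenames shared by all 96 processes, whereas naming it after the global rank (e.g. `torch.distributed.get_rank()`) yields 96. Pure-Python sketch (the helper is hypothetical, not Trainer code):

```python
def rng_filenames(world_size, local_world_size, by_global_rank):
    # Simulate the filename each process would write its RNG state to.
    names = []
    for global_rank in range(world_size):
        local_rank = global_rank % local_world_size
        idx = global_rank if by_global_rank else local_rank
        names.append(f"rng_state_{idx}.pth")
    return names

by_local = rng_filenames(96, 6, by_global_rank=False)
by_global = rng_filenames(96, 6, by_global_rank=True)
print(len(set(by_local)), len(set(by_global)))  # 6 96
```

With local-rank naming, 16 processes race to write each of the 6 paths on a shared filesystem, which is consistent with the corrupted-archive error above.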
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17829/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17828
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17828/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17828/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17828/events
|
https://github.com/huggingface/transformers/pull/17828
| 1,280,801,930
|
PR_kwDOCUB6oc46J_3a
| 17,828
|
Italian/model sharing
|
{
"login": "mfumanelli",
"id": 53374883,
"node_id": "MDQ6VXNlcjUzMzc0ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53374883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfumanelli",
"html_url": "https://github.com/mfumanelli",
"followers_url": "https://api.github.com/users/mfumanelli/followers",
"following_url": "https://api.github.com/users/mfumanelli/following{/other_user}",
"gists_url": "https://api.github.com/users/mfumanelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfumanelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfumanelli/subscriptions",
"organizations_url": "https://api.github.com/users/mfumanelli/orgs",
"repos_url": "https://api.github.com/users/mfumanelli/repos",
"events_url": "https://api.github.com/users/mfumanelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfumanelli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Italian translation of model_sharing.mdx
See issue: [17459](https://github.com/huggingface/transformers/issues/17459)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@omarespejel
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17828/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17828",
"html_url": "https://github.com/huggingface/transformers/pull/17828",
"diff_url": "https://github.com/huggingface/transformers/pull/17828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17828.patch",
"merged_at": 1658405274000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17827
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17827/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17827/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17827/events
|
https://github.com/huggingface/transformers/pull/17827
| 1,280,641,167
|
PR_kwDOCUB6oc46JcDE
| 17,827
|
Update type hints modeling_yoso.py
|
{
"login": "F02934",
"id": 56677617,
"node_id": "MDQ6VXNlcjU2Njc3NjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/56677617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/F02934",
"html_url": "https://github.com/F02934",
"followers_url": "https://api.github.com/users/F02934/followers",
"following_url": "https://api.github.com/users/F02934/following{/other_user}",
"gists_url": "https://api.github.com/users/F02934/gists{/gist_id}",
"starred_url": "https://api.github.com/users/F02934/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/F02934/subscriptions",
"organizations_url": "https://api.github.com/users/F02934/orgs",
"repos_url": "https://api.github.com/users/F02934/repos",
"events_url": "https://api.github.com/users/F02934/events{/privacy}",
"received_events_url": "https://api.github.com/users/F02934/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"HI @Rocketknight1 this should be it, you can check if everything is good. If there's some problem I will resolve it tonight!",
"_The documentation is not available anymore as the PR was closed or merged._",
"Nope, this is perfect. Thanks for the PR, and sorry for the confusion with the last one!"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Type hints for modeling YOSO (PyTorch)
#16059
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17827/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17827",
"html_url": "https://github.com/huggingface/transformers/pull/17827",
"diff_url": "https://github.com/huggingface/transformers/pull/17827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17827.patch",
"merged_at": 1655984249000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17826
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17826/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17826/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17826/events
|
https://github.com/huggingface/transformers/pull/17826
| 1,280,629,645
|
PR_kwDOCUB6oc46JZa5
| 17,826
|
Add Jukebox model (replaces #16875)
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Replaces (#16875) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.",
"Okay, the `1b lyrics` and `5b lyrics` match the original code. Just need refactoring to have better variable names and wrap the sampling kwargs for easy use. ",
"@ArthurZucker let me know when this is ready for the final review",
"@ArthurZucker do you want to let me know once you want to have a final review? Let's try to not let it hang around for too long",
"Apart from the `kwargs` I think it is done! @patrickvonplaten feel free to review",
"Slow tests are now passing, the only issue left to attend is the memory. The slow tests need a lot of RAM, and running inference with the model should also automatically send the unused `Priors` and `VQVAE` to the `cpu`. \r\n\r\nThe documentation and models will be ready soon.",
"As it was previously requested, you can now instantiate `JukeboxVQVAE` and `JukeboxPrior` individually. This is convenient if people only want to use the VQVAE or generate form juste the top level prior. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17826). All of your documentation changes will be reflected on that endpoint."
] | 1,655
| 1,668
| 1,668
|
COLLABORATOR
| null |
This is a draft pull request.
# What does this PR do?
This PR will progressively add the [Jukebox](https://openai.com/blog/jukebox/) model to the hub.
It is linked to [#16870](https://github.com/huggingface/transformers/issues/16870).
# Currently planned steps (WIP)
- [x] Create template files with `transformers-cli add-new-model-like`
- [x] `src/transformers/tokenization_jukebox.py`
- [x] `src/transformers/test_tokenization_jukebox.py`
- [x] `src/transformers/configuration_jukebox.py`
- [x] `src/transformers/modeling_jukebox.py`
- [x] `src/transformers/configuration_jukebox.py`
- [x] `docs/source/model_doc/jukebox.rst`
- [ ] `src/transformers/tokenization_jukebox_fast.py` (will most probably use WordLevel tokenizer). Also requires to implement a converter function `class JukeboxConverter(Converter):`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17826/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17826/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17826",
"html_url": "https://github.com/huggingface/transformers/pull/17826",
"diff_url": "https://github.com/huggingface/transformers/pull/17826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17826.patch",
"merged_at": 1668110728000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17825
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17825/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17825/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17825/events
|
https://github.com/huggingface/transformers/pull/17825
| 1,280,604,477
|
PR_kwDOCUB6oc46JTrk
| 17,825
|
[Closed - code changes not shown on GH for unknown reason] replace `Python-base tokenizer` by `non-fast tokenizer` in error message
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17825). All of your documentation changes will be reflected on that endpoint."
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
As one user rightly pointed out in issue #17809, when a user receives this error it is not obvious that a Python-based tokenizer refers to a tokenizer class without the term Fast at the end.
I therefore propose to change the error messages that use this term so that they instead refer to the term fast, which is more easily understood by users.
Fixes #17809
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17825/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17825",
"html_url": "https://github.com/huggingface/transformers/pull/17825",
"diff_url": "https://github.com/huggingface/transformers/pull/17825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17825.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17824
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17824/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17824/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17824/events
|
https://github.com/huggingface/transformers/pull/17824
| 1,280,585,309
|
PR_kwDOCUB6oc46JPW9
| 17,824
|
fix type of None special tokens in not verbose mode (duplicate of #17797)
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate. Closing in favor of #17797"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17796
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17824/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17824",
"html_url": "https://github.com/huggingface/transformers/pull/17824",
"diff_url": "https://github.com/huggingface/transformers/pull/17824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17824.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17823
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17823/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17823/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17823/events
|
https://github.com/huggingface/transformers/pull/17823
| 1,280,391,074
|
PR_kwDOCUB6oc46Ikkf
| 17,823
|
BLOOM minor changes on tokenizer
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"We may merge this at the same time as https://github.com/huggingface/transformers/pull/17837 to make it a patch release of minor fixes",
"Let's not add a new attribute and change the common tests for one model only. You can override the test in the subclass of the main model tester, leaving the common tests as they are."
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
- Attempts to fix minor issues with the BloomTokenizer
- Remove unused args
- Added a new attribute on `TokenizerTesterMixin` for models that are not agnostic to sequence length (typically models that use ALiBi positional embeddings)
Still need to discuss whether it is worth forcing the padding side to the left
cc @SaulLu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17823/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17823",
"html_url": "https://github.com/huggingface/transformers/pull/17823",
"diff_url": "https://github.com/huggingface/transformers/pull/17823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17823.patch",
"merged_at": 1655992632000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17822
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17822/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17822/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17822/events
|
https://github.com/huggingface/transformers/pull/17822
| 1,280,110,139
|
PR_kwDOCUB6oc46HnJp
| 17,822
|
Use higher value for hidden_size in Flax BigBird test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patil-suraj If you have one minute to review this one, I can merge it today 😄 🙏 but not urgent"
] | 1,655
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
#17658 changed `hidden_size` from `32` to `4` in `FlaxBigBirdModelTester`, which caused the PT/Flax difference to increase by ~50x. This PR changes it back (but keeps the other changes untouched). We can therefore use `1e-5` instead of `5e-5`.
(`hidden_size=4` with `num_attention_heads=2` is likely to introduce some edge cases in random init.)
The testing time (on a GCP CPU VM) is 66 vs. 64 seconds (with `-n 1`) and 46 vs. 44 seconds (with `-n 2`), so this change doesn't make the test slower.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17822/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17822",
"html_url": "https://github.com/huggingface/transformers/pull/17822",
"diff_url": "https://github.com/huggingface/transformers/pull/17822.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17822.patch",
"merged_at": 1656091891000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17821
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17821/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17821/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17821/events
|
https://github.com/huggingface/transformers/pull/17821
| 1,280,054,119
|
PR_kwDOCUB6oc46Ha-R
| 17,821
|
Add VideoMAE
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@NielsRogge do you have any ETA on this feature? I am developing a video classification fine-tuning framework, would love to use this model if it gets merged into main!\r\n\r\nCurrently only video model is PerceiverIO, right?",
"There seems to remain an issue with the docs:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/commands/doc_builder_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/commands/build.py\", line 96, in build_command\r\n build_doc(\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py\", line 405, in build_doc\r\n sphinx_refs = check_toc_integrity(doc_folder, output_dir)\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py\", line 460, in check_toc_integrity\r\n raise RuntimeError(\r\nRuntimeError: The following files are not present in the table of contents:\r\n- model_doc/videomae\r\nAdd them to ../transformers/docs/source/en/_toctree.yml.\r\n```",
"@LysandreJik yes I was aware of that, should be fixed now.\r\n\r\nDon't merge already please, I'm transferring checkpoints and updating the conversion script.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds [VideoMAE](https://github.com/MCG-NJU/VideoMAE), which extends [ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae) to videos.
The only difference between VideoMAE and ViT is that you need to replace `nn.Conv2d` by `nn.Conv3d` in the patch embedding class. 😂
To do:
- [ ] Decide on a name for `VideoMAEFeatureExtractor` (should we keep it, or rename to `VideoMAEProcessor`, `VideoMAEPreprocessor`?)
- [ ] Decide on the input format for video models; currently I've chosen `pixel_values` of shape (batch_size, num_frames, num_channels, height, width). The original implementation uses (B, C, T, H, W)
- [ ] Doc examples + tests
- [x] Incorporate changes of #17731
- [ ] Make VideoMAEFeatureExtractor robust with return_tensors="np" by default, better tests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17821/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17821",
"html_url": "https://github.com/huggingface/transformers/pull/17821",
"diff_url": "https://github.com/huggingface/transformers/pull/17821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17821.patch",
"merged_at": 1659628975000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17820
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17820/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17820/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17820/events
|
https://github.com/huggingface/transformers/issues/17820
| 1,279,930,860
|
I_kwDOCUB6oc5MSjHs
| 17,820
|
How to use LayoutLMv3 for Document Layout Detection task?
|
{
"login": "matthew-wei",
"id": 23068766,
"node_id": "MDQ6VXNlcjIzMDY4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/23068766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthew-wei",
"html_url": "https://github.com/matthew-wei",
"followers_url": "https://api.github.com/users/matthew-wei/followers",
"following_url": "https://api.github.com/users/matthew-wei/following{/other_user}",
"gists_url": "https://api.github.com/users/matthew-wei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthew-wei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthew-wei/subscriptions",
"organizations_url": "https://api.github.com/users/matthew-wei/orgs",
"repos_url": "https://api.github.com/users/matthew-wei/repos",
"events_url": "https://api.github.com/users/matthew-wei/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthew-wei/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nIf you read a bit more closely: https://github.com/microsoft/unilm/tree/master/layoutlmv3#document-layout-analysis-on-publaynet\r\n\r\nYou'll see they provide a guide regarding fine-tuning LayoutLMv3 on PubLayNet. The Mask R-CNN framework is leveraged. This framework currently is not available in Huggingface Transformers, so you'll need to use the unilm repo for that.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
transformers = 4.20.1
Models: layoutlmv3
How can LayoutLMv3 be used for Document Layout Detection, for example as in microsoft unilm (https://github.com/microsoft/unilm/tree/master/layoutlmv3)?
I cannot find any Document Layout Detection task information in the examples or modeling code.
The typical dataset for Document Layout Detection is called PubLayNet (https://github.com/ibm-aur-nlp/PubLayNet).
example :https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3
modeling: https://github.com/huggingface/transformers/tree/main/src/transformers/models/layoutlmv3
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
How can LayoutLMv3 be used for Document Layout Detection, for example as in microsoft unilm (https://github.com/microsoft/unilm/tree/master/layoutlmv3)?
I cannot find any Document Layout Detection task information in the examples or modeling code.
The typical dataset for Document Layout Detection is called PubLayNet (https://github.com/ibm-aur-nlp/PubLayNet).
### Expected behavior
```shell
I would like to know how to use the LayoutLMv3 model in Transformers for Document Layout Detection.
I would really appreciate it if you could give me some example code.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17820/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17819
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17819/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17819/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17819/events
|
https://github.com/huggingface/transformers/issues/17819
| 1,279,735,876
|
I_kwDOCUB6oc5MRzhE
| 17,819
|
Cannot import name 'load_offloaded_weights' from 'accelerate.utils'
|
{
"login": "99991",
"id": 18725165,
"node_id": "MDQ6VXNlcjE4NzI1MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18725165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/99991",
"html_url": "https://github.com/99991",
"followers_url": "https://api.github.com/users/99991/followers",
"following_url": "https://api.github.com/users/99991/following{/other_user}",
"gists_url": "https://api.github.com/users/99991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/99991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/99991/subscriptions",
"organizations_url": "https://api.github.com/users/99991/orgs",
"repos_url": "https://api.github.com/users/99991/repos",
"events_url": "https://api.github.com/users/99991/events{/privacy}",
"received_events_url": "https://api.github.com/users/99991/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"The solution is to upgrade the `accelerate` library. I had version `0.8.0` and upgraded to `0.10.0`.\r\n\r\n```\r\npip install --upgrade accelerate\r\n```\r\n\r\nVersion `0.10.0` has the missing function `load_offloaded_weights`:\r\n\r\nhttps://github.com/huggingface/accelerate/commit/8b8c5345cd84ba96cca810b677601204e06853ba#diff-331ffa5527e400ee60607a11481feb0197abd8492c000255c337d0bf4312c0c0R43"
] | 1,655
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
>>> import transformers
>>> transformers.__version__
'4.20.1'
>>> import accelerate
>>> accelerate.__version__
'0.8.0'
>>> import sys
>>> sys.platform
'linux'
>>> sys.version
'3.8.10 (default, Mar 15 2022, 12:22:08) \n[GCC 9.4.0]'
>>> import os
>>> os.system("lsb_release -a")
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.4 LTS
Release: 20.04
Codename: focal
```
### Who can help?
@99991
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Execute the example from "Getting started" https://github.com/UKPLab/sentence-transformers#getting-started
2. Observe crash.
Error message:
```
Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback):
cannot import name 'load_offloaded_weights' from 'accelerate.utils' (/home/username/.local/lib/python3.8/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
File "/home/username/Desktop/computing_embeddings.py", line 2, in main
model = SentenceTransformer('all-MiniLM-L6-v2')
```
### Expected behavior
It should not crash.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17819/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17818
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17818/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17818/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17818/events
|
https://github.com/huggingface/transformers/pull/17818
| 1,279,681,820
|
PR_kwDOCUB6oc46GJ9-
| 17,818
|
Clean modeling utils, linked to #17760 and #17713
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
The recently introduced `TF` and `FLAX` sharding scripts #17760 and #17713 both use `convert_file_size_to_int` and `get_checkpoint_shard_files`, which were moved to `transformers.utils.hub`. This PR removes the local definitions of these two functions and imports them from `transformers.utils.hub` instead.
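For context on what one of these shared helpers does: `convert_file_size_to_int` turns a human-readable shard-size string (like the `max_shard_size` argument to `save_pretrained`) into a number of bytes. The sketch below is an illustrative approximation, not the exact `transformers.utils.hub` implementation:

```python
def convert_file_size_to_int(size):
    """Convert a size like "5GB" or "300MiB" to an integer number of bytes.

    Illustrative sketch of the helper imported from `transformers.utils.hub`;
    the real implementation may differ in detail.
    """
    if isinstance(size, int):
        return size
    upper = size.upper()
    # Check binary units (GiB/MiB/KiB) before decimal ones, since a plain
    # endswith("GB") check would also match "GIB".
    units = (("GIB", 2**30), ("MIB", 2**20), ("KIB", 2**10),
             ("GB", 10**9), ("MB", 10**6), ("KB", 10**3))
    for suffix, factor in units:
        if upper.endswith(suffix):
            return int(size[: -len(suffix)]) * factor
    raise ValueError("size must end in GiB/MiB/KiB or GB/MB/KB")


print(convert_file_size_to_int("5GB"))    # 5000000000
print(convert_file_size_to_int("300MiB"))  # 314572800
```

Centralizing this helper in `transformers.utils.hub` lets the PyTorch, TF, and Flax sharding code paths share one definition instead of three copies.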
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17818/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17818/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17818",
"html_url": "https://github.com/huggingface/transformers/pull/17818",
"diff_url": "https://github.com/huggingface/transformers/pull/17818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17818.patch",
"merged_at": 1655900763000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17817
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17817/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17817/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17817/events
|
https://github.com/huggingface/transformers/pull/17817
| 1,279,524,143
|
PR_kwDOCUB6oc46FnLf
| 17,817
|
Bump numpy from 1.21.0 to 1.22.0 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
Bumps [numpy](https://github.com/numpy/numpy) from 1.21.0 to 1.22.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p>
<blockquote>
<h2>v1.22.0</h2>
<h1>NumPy 1.22.0 Release Notes</h1>
<p>NumPy 1.22.0 is a big release featuring the work of 153 contributors
spread over 609 pull requests. There have been many improvements,
highlights are:</p>
<ul>
<li>Annotations of the main namespace are essentially complete. Upstream
is a moving target, so there will likely be further improvements,
but the major work is done. This is probably the most user visible
enhancement in this release.</li>
<li>A preliminary version of the proposed Array-API is provided. This is
a step in creating a standard collection of functions that can be
used across application such as CuPy and JAX.</li>
<li>NumPy now has a DLPack backend. DLPack provides a common interchange
format for array (tensor) data.</li>
<li>New methods for <code>quantile</code>, <code>percentile</code>, and related functions. The
new methods provide a complete set of the methods commonly found in
the literature.</li>
<li>A new configurable allocator for use by downstream projects.</li>
</ul>
<p>These are in addition to the ongoing work to provide SIMD support for
commonly used functions, improvements to F2PY, and better documentation.</p>
<p>The Python versions supported in this release are 3.8-3.10, Python 3.7
has been dropped. Note that 32 bit wheels are only provided for Python
3.8 and 3.9 on Windows, all other wheels are 64 bits on account of
Ubuntu, Fedora, and other Linux distributions dropping 32 bit support.
All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix
the occasional problems encountered by folks using truly huge arrays.</p>
<h2>Expired deprecations</h2>
<h3>Deprecated numeric style dtype strings have been removed</h3>
<p>Using the strings <code>"Bytes0"</code>, <code>"Datetime64"</code>, <code>"Str0"</code>, <code>"Uint32"</code>,
and <code>"Uint64"</code> as a dtype will now raise a <code>TypeError</code>.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19539">gh-19539</a>)</p>
<h3>Expired deprecations for <code>loads</code>, <code>ndfromtxt</code>, and <code>mafromtxt</code> in npyio</h3>
<p><code>numpy.loads</code> was deprecated in v1.15, with the recommendation that
users use <code>pickle.loads</code> instead. <code>ndfromtxt</code> and <code>mafromtxt</code> were both
deprecated in v1.17 - users should use <code>numpy.genfromtxt</code> instead with
the appropriate value for the <code>usemask</code> parameter.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19615">gh-19615</a>)</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/numpy/numpy/commit/4adc87dff15a247e417d50f10cc4def8e1c17a03"><code>4adc87d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20685">#20685</a> from charris/prepare-for-1.22.0-release</li>
<li><a href="https://github.com/numpy/numpy/commit/fd66547557f57c430d41be2fc0764f74a62e8ccf"><code>fd66547</code></a> REL: Prepare for the NumPy 1.22.0 release.</li>
<li><a href="https://github.com/numpy/numpy/commit/125304b035effcd82e366e601b102e7347eaa9ba"><code>125304b</code></a> wip</li>
<li><a href="https://github.com/numpy/numpy/commit/c283859128b1a4b57014581570a23ed7950a24ea"><code>c283859</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20682">#20682</a> from charris/backport-20416</li>
<li><a href="https://github.com/numpy/numpy/commit/5399c03d4a069fe81a1616be0184c9749d7271ee"><code>5399c03</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20681">#20681</a> from charris/backport-20954</li>
<li><a href="https://github.com/numpy/numpy/commit/f9c45f8ebf31340b1a5a0371bfca25afcfc4794e"><code>f9c45f8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20680">#20680</a> from charris/backport-20663</li>
<li><a href="https://github.com/numpy/numpy/commit/794b36f7e1bf2a8c42774ab0db86a74bd32f674b"><code>794b36f</code></a> Update armccompiler.py</li>
<li><a href="https://github.com/numpy/numpy/commit/d93b14e3d7abaa1d837825e51671f817788e120f"><code>d93b14e</code></a> Update test_public_api.py</li>
<li><a href="https://github.com/numpy/numpy/commit/7662c0789cc6a70d5ad4d950ee2e95f3afef7df6"><code>7662c07</code></a> Update <strong>init</strong>.py</li>
<li><a href="https://github.com/numpy/numpy/commit/311ab52488a7d096ac3bc4c2de0fdae17ecd13ef"><code>311ab52</code></a> Update armccompiler.py</li>
<li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.21.0...v1.22.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17817/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17817",
"html_url": "https://github.com/huggingface/transformers/pull/17817",
"diff_url": "https://github.com/huggingface/transformers/pull/17817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17817.patch",
"merged_at": 1655904580000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17816
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17816/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17816/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17816/events
|
https://github.com/huggingface/transformers/pull/17816
| 1,279,523,829
|
PR_kwDOCUB6oc46FnG8
| 17,816
|
Bump numpy from 1.21.0 to 1.22.0 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
Bumps [numpy](https://github.com/numpy/numpy) from 1.21.0 to 1.22.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p>
<blockquote>
<h2>v1.22.0</h2>
<h1>NumPy 1.22.0 Release Notes</h1>
<p>NumPy 1.22.0 is a big release featuring the work of 153 contributors
spread over 609 pull requests. There have been many improvements,
highlights are:</p>
<ul>
<li>Annotations of the main namespace are essentially complete. Upstream
is a moving target, so there will likely be further improvements,
but the major work is done. This is probably the most user visible
enhancement in this release.</li>
<li>A preliminary version of the proposed Array-API is provided. This is
a step in creating a standard collection of functions that can be
used across application such as CuPy and JAX.</li>
<li>NumPy now has a DLPack backend. DLPack provides a common interchange
format for array (tensor) data.</li>
<li>New methods for <code>quantile</code>, <code>percentile</code>, and related functions. The
new methods provide a complete set of the methods commonly found in
the literature.</li>
<li>A new configurable allocator for use by downstream projects.</li>
</ul>
<p>These are in addition to the ongoing work to provide SIMD support for
commonly used functions, improvements to F2PY, and better documentation.</p>
<p>The Python versions supported in this release are 3.8-3.10, Python 3.7
has been dropped. Note that 32 bit wheels are only provided for Python
3.8 and 3.9 on Windows, all other wheels are 64 bits on account of
Ubuntu, Fedora, and other Linux distributions dropping 32 bit support.
All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix
the occasional problems encountered by folks using truly huge arrays.</p>
<h2>Expired deprecations</h2>
<h3>Deprecated numeric style dtype strings have been removed</h3>
<p>Using the strings <code>"Bytes0"</code>, <code>"Datetime64"</code>, <code>"Str0"</code>, <code>"Uint32"</code>,
and <code>"Uint64"</code> as a dtype will now raise a <code>TypeError</code>.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19539">gh-19539</a>)</p>
<h3>Expired deprecations for <code>loads</code>, <code>ndfromtxt</code>, and <code>mafromtxt</code> in npyio</h3>
<p><code>numpy.loads</code> was deprecated in v1.15, with the recommendation that
users use <code>pickle.loads</code> instead. <code>ndfromtxt</code> and <code>mafromtxt</code> were both
deprecated in v1.17 - users should use <code>numpy.genfromtxt</code> instead with
the appropriate value for the <code>usemask</code> parameter.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/19615">gh-19615</a>)</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/numpy/numpy/commit/4adc87dff15a247e417d50f10cc4def8e1c17a03"><code>4adc87d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20685">#20685</a> from charris/prepare-for-1.22.0-release</li>
<li><a href="https://github.com/numpy/numpy/commit/fd66547557f57c430d41be2fc0764f74a62e8ccf"><code>fd66547</code></a> REL: Prepare for the NumPy 1.22.0 release.</li>
<li><a href="https://github.com/numpy/numpy/commit/125304b035effcd82e366e601b102e7347eaa9ba"><code>125304b</code></a> wip</li>
<li><a href="https://github.com/numpy/numpy/commit/c283859128b1a4b57014581570a23ed7950a24ea"><code>c283859</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20682">#20682</a> from charris/backport-20416</li>
<li><a href="https://github.com/numpy/numpy/commit/5399c03d4a069fe81a1616be0184c9749d7271ee"><code>5399c03</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20681">#20681</a> from charris/backport-20954</li>
<li><a href="https://github.com/numpy/numpy/commit/f9c45f8ebf31340b1a5a0371bfca25afcfc4794e"><code>f9c45f8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/20680">#20680</a> from charris/backport-20663</li>
<li><a href="https://github.com/numpy/numpy/commit/794b36f7e1bf2a8c42774ab0db86a74bd32f674b"><code>794b36f</code></a> Update armccompiler.py</li>
<li><a href="https://github.com/numpy/numpy/commit/d93b14e3d7abaa1d837825e51671f817788e120f"><code>d93b14e</code></a> Update test_public_api.py</li>
<li><a href="https://github.com/numpy/numpy/commit/7662c0789cc6a70d5ad4d950ee2e95f3afef7df6"><code>7662c07</code></a> Update <strong>init</strong>.py</li>
<li><a href="https://github.com/numpy/numpy/commit/311ab52488a7d096ac3bc4c2de0fdae17ecd13ef"><code>311ab52</code></a> Update armccompiler.py</li>
<li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.21.0...v1.22.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17816/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17816",
"html_url": "https://github.com/huggingface/transformers/pull/17816",
"diff_url": "https://github.com/huggingface/transformers/pull/17816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17816.patch",
"merged_at": 1655904568000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17815
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17815/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17815/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17815/events
|
https://github.com/huggingface/transformers/pull/17815
| 1,279,518,029
|
PR_kwDOCUB6oc46Flz-
| 17,815
|
Improve encoder decoder model docs
|
{
"login": "Threepointone4",
"id": 22583613,
"node_id": "MDQ6VXNlcjIyNTgzNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22583613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Threepointone4",
"html_url": "https://github.com/Threepointone4",
"followers_url": "https://api.github.com/users/Threepointone4/followers",
"following_url": "https://api.github.com/users/Threepointone4/following{/other_user}",
"gists_url": "https://api.github.com/users/Threepointone4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Threepointone4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Threepointone4/subscriptions",
"organizations_url": "https://api.github.com/users/Threepointone4/orgs",
"repos_url": "https://api.github.com/users/Threepointone4/repos",
"events_url": "https://api.github.com/users/Threepointone4/events{/privacy}",
"received_events_url": "https://api.github.com/users/Threepointone4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Agree with the reviews of @ydshieh and @NielsRogge ! @Threepointone4 do you want to apply them ? Think we can merge after :-)",
"Great job @Threepointone4 ! Merging :-) "
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR improves the documentation of encoder decoder model.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Issues [link](https://github.com/huggingface/transformers/issues/16135)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17815/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17815",
"html_url": "https://github.com/huggingface/transformers/pull/17815",
"diff_url": "https://github.com/huggingface/transformers/pull/17815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17815.patch",
"merged_at": 1656074899000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17814
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17814/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17814/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17814/events
|
https://github.com/huggingface/transformers/pull/17814
| 1,279,496,444
|
PR_kwDOCUB6oc46Fg_1
| 17,814
|
Fix Constrained beam search duplication and weird output issue
|
{
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@cwkeam - do you have an idea here?",
"Good catch @boy2000-007man !! @patrickvonplaten I just read through the issue and saw the code changes and they're all right-on. Thanks for finding these problems!",
"Thank you for this fix! I found this pull request while I was searching to figure out why constrained beam search was churning out such repetitive results on 4.20.1. Installing the current repo of transformers fixed this immediately!"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
- prevent duplicates between *(topk) generic beam search best model next tokens* and *(advance) constraints forcing the next token*
- ensure unfulfilled hypotheses advance with the correct beam score instead of the wrong token score
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#17812
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17814/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17814/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17814",
"html_url": "https://github.com/huggingface/transformers/pull/17814",
"diff_url": "https://github.com/huggingface/transformers/pull/17814.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17814.patch",
"merged_at": 1656075368000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17813
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17813/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17813/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17813/events
|
https://github.com/huggingface/transformers/issues/17813
| 1,279,492,687
|
I_kwDOCUB6oc5MQ4JP
| 17,813
|
push_to_hub returns "OSError: error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408"
|
{
"login": "MicPie",
"id": 36303596,
"node_id": "MDQ6VXNlcjM2MzAzNTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/36303596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MicPie",
"html_url": "https://github.com/MicPie",
"followers_url": "https://api.github.com/users/MicPie/followers",
"following_url": "https://api.github.com/users/MicPie/following{/other_user}",
"gists_url": "https://api.github.com/users/MicPie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MicPie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MicPie/subscriptions",
"organizations_url": "https://api.github.com/users/MicPie/orgs",
"repos_url": "https://api.github.com/users/MicPie/repos",
"events_url": "https://api.github.com/users/MicPie/events{/privacy}",
"received_events_url": "https://api.github.com/users/MicPie/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Maybe there is also a git command line workaround?",
"Yes, can you try with a `git push` command line to see if it works that way?",
"Hi @julien-c,\r\nthank you for the fast feedback, I just tried it two times, at the beginning it looks like it works but then gets stuck again:\r\n```\r\n> git push\r\nEnumerating objects: 413419, done.\r\nCounting objects: 100% (413419/413419), done.\r\nDelta compression using up to 6 threads\r\nCompressing objects: 100% (410375/410375), done.\r\nerror: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408\r\nfatal: the remote end hung up unexpectedly\r\nWriting objects: 100% (413417/413417), 561.79 MiB | 1.75 MiB/s, done.\r\nTotal 413417 (delta 3098), reused 413302 (delta 3042)\r\nfatal: the remote end hung up unexpectedly\r\nEverything up-to-date\r\n\r\n> git push\r\nEnumerating objects: 413419, done.\r\nCounting objects: 100% (413419/413419), done.\r\nDelta compression using up to 6 threads\r\nCompressing objects: 100% (410375/410375), done.\r\nerror: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408\r\nfatal: the remote end hung up unexpectedly3 MiB | 239.00 KiB/s\r\nWriting objects: 100% (413417/413417), 561.79 MiB | 1.81 MiB/s, done.\r\nTotal 413417 (delta 3098), reused 413302 (delta 3042)\r\nfatal: the remote end hung up unexpectedly\r\nEverything up-to-date\r\n```\r\nThe total size of the data directory is around 5GB.",
"@julien-c It seems that I run all the time into the issue from above. I guess the best workaround would be to go for a single jsonl files setup, or what do you think?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, it worked fine with me when i connected to internet via my mobile hotspot .\r\ni don't know what was the reason but this was the solution",
"I also had the same problem, switching to a Wi-Fi network worked for me.",
"Switching to a Wi-Fi network also worked for me.",
"I also had the same problem, switching to a Wi-Fi network worked for me.\r\nbut i don't know y if any one know the answer please tell me",
"I also had the same problem, switching to a Wi-Fi network also worked for me.\r\nIf anybody know why please post it.",
"error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408\r\nI tried all possible method but it still fails. \r\n",
"It didn't worked for me",
"switching wifi does not work for me.",
"So for those of you still suffering from this issue and for those in the future, here is a detailed log of what I have tried:\r\nentering \r\ngit config http.postBuffer <number of bytes, default = 1 MiB> \r\ninto the terminal can work, but its a one-time fix and should be avoided. This is a good option if you've included large files, like zip files, into your commit history and dont want to fix that issue. Remember to set it back later. In my case, it just lead to another issue.\r\n\r\nIt could also be the time out feature when pushing/pulling. This could be the case if you are working with large files. Here you'd want to edit the http.lowSpeedLimit, http.lowSpeedTime config values. Documentation for this and http.postBuffer can be found here: https://git-scm.com/docs/git-config\r\n\r\nChanging wiFi can work for some people, but depending on the root cause of the issue this wont work. If you sometimes get this error and sometimes don't regardless of what you are pushing/pulling, then this is most likely the solution for you.\r\n\r\nI was finally able to fix my issue. First, I changed http.postBuffer to be 50MiB. This then showed me that lfs files were still being pushed despite being tracked by lfs. I had to do git lfs migrate import --everything --verbose --include=\"<file extension>\" to fix it for me. I hope this helps!\r\n\r\n",
"> I also had the same problem, switching to a Wi-Fi network also worked for me. If anybody know why please post it.\r\n\r\nYes,for me also it had the same issue.\r\nThanks.",
"Increase the buffer size by typing \r\n\r\ngit config http.postBuffer 9999999999999999\r\n\r\nand use the SSH link instead of HTTP as SSH is more stable\r\n\r\ngit remote set-url origin **git@github.com:username/repository.git**",
"> Increase the buffer size by typing\r\n> \r\n> git config http.postBuffer 9999999999999999\r\n> \r\n> and use the SSH link instead of HTTP as SSH is more stable\r\n> \r\n> git remote set-url origin **[git@github.com](mailto:git@github.com):username/repository.git**\r\n\r\nThank you. It worked for me.\r\n9999999999999999 was too big that it caused an error in my case.\r\n\r\nThe solution was for me to type :\r\n**git config http.postBuffer 99999999**",
"-> set to show hidden files in your pc and get access of .git folder\r\n-> delete .git folder\r\n-> init again, make sure you have README.md file\r\n-> now commit and push\r\n\r\nthis steps will definitely work ",
"switching from wi-fi to cable wroked for me.",
"thanks. the issue was wii-fi ",
"> So for those of you still suffering from this issue and for those in the future, here is a detailed log of what I have tried: entering git config http.postBuffer <number of bytes, default = 1 MiB> into the terminal can work, but its a one-time fix and should be avoided. This is a good option if you've included large files, like zip files, into your commit history and dont want to fix that issue. Remember to set it back later. In my case, it just lead to another issue.\r\n> \r\n> It could also be the time out feature when pushing/pulling. This could be the case if you are working with large files. Here you'd want to edit the http.lowSpeedLimit, http.lowSpeedTime config values. Documentation for this and http.postBuffer can be found here: https://git-scm.com/docs/git-config\r\n> \r\n> Changing wiFi can work for some people, but depending on the root cause of the issue this wont work. If you sometimes get this error and sometimes don't regardless of what you are pushing/pulling, then this is most likely the solution for you.\r\n> \r\n> I was finally able to fix my issue. First, I changed http.postBuffer to be 50MiB. This then showed me that lfs files were still being pushed despite being tracked by lfs. I had to do git lfs migrate import --everything --verbose --include=\"\" to fix it for me. I hope this helps!\r\n\r\ndo git lfs migrate import work for me "
] | 1,655
| 1,697
| 1,659
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the docs at https://huggingface.co/course/chapter5/5#uploading-the-dataset-to-the-hugging-face-hub.
In my case I try to upload 413299 smaller jsonl files that are separate tasks that get joined depending on the data subset in the dataset setup later (which is why I would like to keep them separated).
This happens after some time when I run `repo.push_to_hub()` but nothing shows up on the dataset site on the HF hub (currently private until everything is finished):
```
Several commits (5) will be pushed upstream.
The progress bars may be unreliable.
error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
File ~/miniconda3/envs/lmproj2/lib/python3.8/site-packages/huggingface_hub/repository.py:1201, in Repository.git_push(self, upstream, blocking, auto_lfs_prune)
1200 if return_code:
-> 1201 raise subprocess.CalledProcessError(
1202 return_code, process.args, output=stdout, stderr=stderr
1203 )
1205 except subprocess.CalledProcessError as exc:
CalledProcessError: Command '['git', 'push', '--set-upstream', 'origin', 'main']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
Input In [100], in <cell line: 1>()
----> 1 repo.push_to_hub()
File ~/miniconda3/envs/lmproj2/lib/python3.8/site-packages/huggingface_hub/repository.py:1475, in Repository.push_to_hub(self, commit_message, blocking, clean_ok, auto_lfs_prune)
1473 self.git_add(auto_lfs_track=True)
1474 self.git_commit(commit_message)
-> 1475 return self.git_push(
1476 upstream=f"origin {self.current_branch}",
1477 blocking=blocking,
1478 auto_lfs_prune=auto_lfs_prune,
1479 )
File ~/miniconda3/envs/lmproj2/lib/python3.8/site-packages/huggingface_hub/repository.py:1206, in Repository.git_push(self, upstream, blocking, auto_lfs_prune)
1201 raise subprocess.CalledProcessError(
1202 return_code, process.args, output=stdout, stderr=stderr
1203 )
1205 except subprocess.CalledProcessError as exc:
-> 1206 raise EnvironmentError(exc.stderr)
1208 if not blocking:
1210 def status_method():
OSError: error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
```
### Expected behavior
```shell
Data is completely uploaded and shows up on the dataset site on the HF hub.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17813/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17812
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17812/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17812/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17812/events
|
https://github.com/huggingface/transformers/issues/17812
| 1,279,348,479
|
I_kwDOCUB6oc5MQU7_
| 17,812
|
Constrained Beam Search outputs duplication and weird results
|
{
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. duplication case: the outputs should not contain the same sequence.
```python3
>>> from transformers import GPT2LMHeadModel, GPT2Tokenizer
>>> model = GPT2LMHeadModel.from_pretrained("gpt2")
>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> force_word = "are"
>>> force_words_ids = [
>>> tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids,
>>> ]
>>> starting_text = ["The soldiers"]
>>> input_ids = tokenizer(starting_text, return_tensors="pt").input_ids
>>> outputs = model.generate(
>>> input_ids,
>>> max_new_tokens=3,
>>> force_words_ids=force_words_ids,
>>> num_beams=10,
>>> num_return_sequences=10,
>>> no_repeat_ngram_size=1,
>>> remove_invalid_values=True,
>>> )
>>> outputs = outputs[:, input_ids.shape[-1]:]
>>> print("Output:\n" + 100 * '-')
>>> for s in tokenizer.batch_decode(outputs, skip_special_tokens=True):
>>> print(s)
>>> import collections
>>> print(collections.Counter(map(tuple, outputs.tolist())).most_common(1))
Output:
----------------------------------------------------------------------------------------------------
, who are
, who were
who are in
, who had
, who are
who are fighting
who are still
who are killed
who are not
who are on
[((11, 508, 389), 2)]
```
2. weird case: the output looks weird, repeat progressing constraints with unreasonable tokens
```python3
>>> from transformers import GPT2LMHeadModel, GPT2Tokenizer
>>> model = GPT2LMHeadModel.from_pretrained("gpt2")
>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>>> force_word = "were not allowed"
>>> force_words_ids = [
>>> tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids,
>>> ]
>>> starting_text = ["The soldiers"]
>>> input_ids = tokenizer(starting_text, return_tensors="pt").input_ids
>>> outputs = model.generate(
>>> input_ids,
>>> force_words_ids=force_words_ids,
>>> num_beams=10,
>>> num_return_sequences=10,
>>> no_repeat_ngram_size=1,
>>> remove_invalid_values=True,
>>> )
>>> print("Output:\n" + 100 * '-')
>>> for s in tokenizer.batch_decode(outputs, skip_special_tokens=True):
>>> print(s)
Output:
----------------------------------------------------------------------------------------------------
The soldiers, who were wearing were not in were not allowed to leave the barracks. The commander of
The soldiers, who were wearing were not in were not allowed to leave the barracks. The military said
The soldiers, who were wearing were not in were not allowed to enter the building. The military said
The soldiers, who were wearing were not in were not allowed to enter the building. The police said
The soldiers, who were wearing were not in were not allowed to leave the barracks. The commander said
The soldiers, who were wearing were not in were not allowed to enter the building. The police had
The soldiers, who were wearing were not in were not allowed to leave the barracks. The army said
The soldiers, who were wearing were not in were not allowed to enter the building. The police then
The soldiers, who were wearing were not in were not allowed to leave the barracks. The commander told
The soldiers, who were wearing were not in were not allowed to enter the building. The police and
```
### Expected behavior
```shell
1. I believe the bug is due to the insufficient initialization of [`track_new["new_seqs"]`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L658).
It appears if the top-k hypothesis also advances the constraints.
Then this hypothesis may appear multiple times in the beam as the example output.
The fix is one line in-place.
Updated to `track_new = {"new_seqs": full_hypotheses.tolist(), "new_states": [], "new_indices": [], "new_tokens": [], "new_scores": []}`
2. I believe the bug is due to the incorrect value assignment of [`scores_for_all_vocab`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_utils.py#L3223),
which stores the next **token scores**.
`scores_for_all_vocab` is first passed to [`constrained_beam_scorer.process()`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_utils.py#L3262), and later passed to [`constrained_beam_scorer.step_sentence_constraint()`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L596) as `vocab_scores`.
Within its scope, `vocab_scores` is sliced to [`this_batch_token_scores`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L654).
`this_batch_token_scores` is finally added to [`track_new["new_scores"]`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L686).
However, the derived [`new_scores`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L727) is concated with [`sent_beam_scores`](https://github.com/huggingface/transformers/blob/3b00b623b7cad9e1b7c71c97fff24a0286b37045/src/transformers/generation_beam_search.py#L731),
revealing that it should represent **beam scores**.
The **token scores** are larger than the expected **beam scores** because past token scores are ignored.
So the unfulfilled hypothesis will advance with an unexpected higher score, and dominate the beam as the example output.
The fix is also straightforward in-place.
# scores_for_all_vocab = next_token_scores_processed.clone()
next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)
scores_for_all_vocab = next_token_scores.clone()
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17812/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17811
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17811/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17811/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17811/events
|
https://github.com/huggingface/transformers/pull/17811
| 1,279,187,562
|
PR_kwDOCUB6oc46Eb8W
| 17,811
|
Fix GPT-NeoX-20B past handling, attention computation
|
{
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There are a few equivalence tests failing with the PR, if you can dive into it. Let us know if you need any help!",
"I've run the tests locally and they pass, so I can't seem to reproduce the test errors. Can someone else give them a try?",
"The tests pass on GPU but not on CPU on my side. So doing\r\n```\r\nCUDA_VISIBLE_DEVICES=\"\" pytest tests/models/gpt_neox/test_modeling_gpt_neox.py\r\n```\r\nreproduces the failure.",
"Thanks again! Nice to be able to use GPT-Neo-X in float16 for generations :-)"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
* Fixes GPT-NeoX-20B handling of the past object to correctly be used in .generate
* Swaps attention computation for one more similar in the original training code, to hopefully avoid NaNs
* Updates docstring, removes unnecessary dropout configs in config object
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/issues/17632
https://github.com/huggingface/transformers/issues/17452 (Hopefully)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17811/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17811",
"html_url": "https://github.com/huggingface/transformers/pull/17811",
"diff_url": "https://github.com/huggingface/transformers/pull/17811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17811.patch",
"merged_at": 1656593260000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17810
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17810/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17810/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17810/events
|
https://github.com/huggingface/transformers/pull/17810
| 1,279,155,714
|
PR_kwDOCUB6oc46EUt5
| 17,810
|
Offload fixes
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes a few bugs in the current offload to disk implementation via Accelerate.
1. The `offload_folder` is not created if it doesn't exist, leading to cryptic errors about missing files.
2. When the model is a task model and the checkpoint one of the base model (like for OPT), there are two issues arising:
- if `offload_state_dict=True`, the weights should be reloaded in `model_to_load` from the temporary offload
- all the weights offloaded to disk are missing the `base_model_cls` prefix since they were offloaded as weights of `model_to_load` and not of `model`.
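The first fix amounts to creating the offload directory before anything is written to it. A minimal sketch of the idea (the actual change lives inside `from_pretrained`; the helper name below is hypothetical):

```python
import os

def ensure_offload_folder(offload_folder):
    """Create the offload directory if it doesn't exist, so later writes
    of offloaded weight files don't fail with missing-file errors."""
    if offload_folder is not None:
        os.makedirs(offload_folder, exist_ok=True)
    return offload_folder
```

Using `exist_ok=True` keeps the call idempotent, so it is safe to run whether or not the folder already exists.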
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17810/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17810",
"html_url": "https://github.com/huggingface/transformers/pull/17810",
"diff_url": "https://github.com/huggingface/transformers/pull/17810.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17810.patch",
"merged_at": 1655914988000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17809
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17809/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17809/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17809/events
|
https://github.com/huggingface/transformers/issues/17809
| 1,279,056,538
|
I_kwDOCUB6oc5MPNqa
| 17,809
|
AutoTokenizer vs. BertTokenizer
|
{
"login": "macleginn",
"id": 4831042,
"node_id": "MDQ6VXNlcjQ4MzEwNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4831042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/macleginn",
"html_url": "https://github.com/macleginn",
"followers_url": "https://api.github.com/users/macleginn/followers",
"following_url": "https://api.github.com/users/macleginn/following{/other_user}",
"gists_url": "https://api.github.com/users/macleginn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/macleginn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macleginn/subscriptions",
"organizations_url": "https://api.github.com/users/macleginn/orgs",
"repos_url": "https://api.github.com/users/macleginn/repos",
"events_url": "https://api.github.com/users/macleginn/events{/privacy}",
"received_events_url": "https://api.github.com/users/macleginn/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThe `AutoTokenizer` defaults to a fast, Rust-based tokenizer. Hence, when typing `AutoTokenizer.from_pretrained(\"bert-base-uncased\")`, it will instantiate a `BertTokenizerFast` behind the scenes. Fast tokenizers support `word_ids`.\r\n\r\nHere you're comparing it to a `BertTokenizer`, which is a slow, Python-based tokenizer.\r\n\r\nSo the behaviour is expected, and the error message pretty self-explanatory if you ask me.",
"The docs for AutoTokenizer say, \r\n\r\n> The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`. <...> \r\n> \r\n> bert — [BertTokenizer](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/bert#transformers.BertTokenizer) or [BertTokenizerFast](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/bert#transformers.BertTokenizerFast) (BERT model). \r\n\r\nI do not pass a config, so I would assume that AutoTokenizer would instantiate `BertTokenizer`, which goes first in the list of options. Moreover, the docs for `BertTokenizer` and `BertTokenizerFast` do not mention that they are Python and Rust based respectively, so the user cannot really figure this out.",
"Hi @macleginn ,\r\n\r\nThanks for letting us know that this behavior isn't intuitive for you! \r\n\r\nRegarding the fact that `AutoTokenizer.from_pretrained` loads a fast tokenizer by default, we have in [the documentation](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/auto#transformers.AutoTokenizer.from_pretrained) a line for the `use_fast` argument that you can change in the `from_pretrained` method. As indicated in the documentation, this argument is set to `True`:\r\n> use_fast (bool, optional, defaults to True) — Whether or not to try to load the fast version of the tokenizer.\r\n\r\nDo you think we should do something differently to make it clearer?\r\n\r\nRegarding the error message that you're getting, do you think it would have been clearer to have: \r\n> ValueError: word_ids() is not available when using non-fast tokenizers (e.g. `XxxTokenizerFast`)",
"Hi @SaulLu,\r\n\r\n> Regarding the error message that you're getting, do you think it would have been clearer to have:\r\n>> ValueError: word_ids() is not available when using non-fast tokenizers (e.g. XxxTokenizerFast)\r\n\r\nYes, sure. Given this message, I would realise, first, that I need to use `BertTokenzerFast` if I want `word_id`s, and second, that this is what `AutoTokenizer` most likely resolved to.\r\n\r\n> Do you think we should do something differently to make it clearer?\r\n\r\nPerhaps mention this in the preamble to the model list? Something along the lines of \r\n\r\n> Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.\r\n>\r\n> The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it’s missing, by falling back to using pattern matching on `pretrained_model_name_or_path`. The fast version of the tokenizer will be selected by default when available (see the `use_fast` parameter above).\r\n\r\nBut if you assume that the user should familiarise themselves with the params, it's okay as it is, as long as the error message points to something that can be found in the docs.",
"Hi, \r\nIt seems the `AutoTokenizer` class has a problem with the character-based model _google/canine-s_. However, I set `use_fast` to True, I got this value error `word_ids() is not available when using non-fast tokenizers`.",
"Hi,\r\n\r\nCANINE is a bit of a special model, it doesn't have a fast implementation since it's character based (Rust implementations are only for these fancy tokenization algorithms like WordPiece, BPE etc). I'd recommend to just use `CanineTokenizer`",
"Hello, using `CanineTokenizer `doesn't solve the problem... It doesn't have a \"Fast\" version with `word_ids()` implemented"
] | 1,655
| 1,681
| 1,655
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.1
- Platform: Linux-5.17.4-200.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.7
- Huggingface_hub version: 0.1.0
- PyTorch version (GPU?): 1.9.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
With transformers-4.20.1 and tokenizers-0.12.1, I get the following behaviour:
```python
In [1]: from transformers import AutoTokenizer, BertTokenizer
In [2]: auto_tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased')
In [3]: auto_tokens = auto_tokenizer('This is a sentence.'.split(), is_split_into_words=True)
In [4]: auto_tokens.word_ids()
Out[4]: [None, 0, 1, 2, 3, 3, None]
In [7]: bert_tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
In [9]: bert_tokens = bert_tokenizer('This is a sentence.'.split(), is_split_into_words=True)
In [10]: bert_tokens.word_ids()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-d69d0750fb87> in <module>
----> 1 bert_tokens.word_ids()
/mount/arbeitsdaten33/projekte/tcl/Users/nikolady/embedalign/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in word_ids(self, batch_index)
350 """
351 if not self._encodings:
--> 352 raise ValueError("word_ids() is not available when using Python-based tokenizers")
353 return self._encodings[batch_index].word_ids
354
ValueError: word_ids() is not available when using Python-based tokenizers
```
Regardless of whether this is expected or not, this is unintuitive and confusing. E.g., am I even getting correct tokenisation by using a more general tokeniser class?
@SaulLu @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See above.
### Expected behavior
```shell
Word ids from BertTokenizer or a more informative error message.
```
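For reference, `word_ids()` maps each produced token back to the index of the input word it came from, with `None` for special tokens. A framework-free sketch of that mapping for a pre-split input (a toy stand-in for what the fast tokenizer computes, not the tokenizers implementation):

```python
def toy_word_ids(words, subword_splits):
    """Build a word_ids-style list: None for the special tokens at the
    edges, otherwise the index of the source word. `subword_splits` maps
    a word to the subword tokens it is split into (toy WordPiece)."""
    ids = [None]  # leading [CLS]
    for i, word in enumerate(words):
        ids.extend([i] * len(subword_splits.get(word, [word])))
    ids.append(None)  # trailing [SEP]
    return ids
```

With `"sentence."` split into two subwords, this reproduces the `[None, 0, 1, 2, 3, 3, None]` output shown above; slow Python tokenizers don't track this alignment, hence the error.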
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17809/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17808
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17808/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17808/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17808/events
|
https://github.com/huggingface/transformers/issues/17808
| 1,279,017,546
|
I_kwDOCUB6oc5MPEJK
| 17,808
|
Unable to Load the Pre-Trained Model using Spark-Submit
|
{
"login": "paresmi",
"id": 90341577,
"node_id": "MDQ6VXNlcjkwMzQxNTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/90341577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paresmi",
"html_url": "https://github.com/paresmi",
"followers_url": "https://api.github.com/users/paresmi/followers",
"following_url": "https://api.github.com/users/paresmi/following{/other_user}",
"gists_url": "https://api.github.com/users/paresmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paresmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paresmi/subscriptions",
"organizations_url": "https://api.github.com/users/paresmi/orgs",
"repos_url": "https://api.github.com/users/paresmi/repos",
"events_url": "https://api.github.com/users/paresmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/paresmi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
Hi Team,
I am triggering spark-submit and passing the pre-trained model with `--archives`; however, Spark is not able to locate the model when executing `BertTokenizer.from_pretrained(BERT_MODEL_NAME)`.
Please advise how I can get Spark to point to the BERT model; I tried placing it in HDFS as well as a local Linux path, but got the same error.
`22/06/20 17:09:23 INFO e: Traceback (most recent call last):
File "etl.py", line 509, in run
self.ingest()
File "etl.py", line 315, in ingest
BERT_MODEL_NAME="score/bert-base-cased/",
File "/data-12/hadoop/yarn/local/usercache/svc/appcache/application_1641/container_e397_000001/e_process-0.0.1-py3.7.egg/classification/pipelinescore.py", line 30, in __init__
self.tokenizer = BertTokenizer.from_pretrained(BERT_MODEL_NAME,cache_dir="/cache")
File "/data-12/hadoop/yarn/local/usercache/svc/appcache/application_1641/container_e397_00001/dep.tar.gz/transformers/tokenization_utils_base.py", line 1773, in from_pretrained
f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
OSError: Can't load tokenizer for 'classification/score/bert-base-cased/'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'classification/score/bert-base-cased/' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer.
`
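When debugging this kind of error, it can help to verify on the executor that the resolved path actually contains the tokenizer files before calling `from_pretrained`. A hedged sketch (the file names are the usual BERT tokenizer artifacts; your archive layout may differ):

```python
import os

def check_tokenizer_dir(path, required=("vocab.txt", "tokenizer_config.json")):
    """Return the absolute path if all expected tokenizer files are
    present, otherwise raise with a listing of what is actually there."""
    abs_path = os.path.abspath(path)
    missing = [f for f in required if not os.path.isfile(os.path.join(abs_path, f))]
    if missing:
        found = os.listdir(abs_path) if os.path.isdir(abs_path) else []
        raise FileNotFoundError(f"{abs_path} is missing {missing}; found {found}")
    return abs_path
```

Passing the returned absolute path to `from_pretrained` also avoids relative-path ambiguity between the driver's and the executors' working directories.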
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17808/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17807
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17807/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17807/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17807/events
|
https://github.com/huggingface/transformers/pull/17807
| 1,279,006,805
|
PR_kwDOCUB6oc46Dyt4
| 17,807
|
Add Spanish translation of custom_models.mdx
|
{
"login": "donelianc",
"id": 7807897,
"node_id": "MDQ6VXNlcjc4MDc4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donelianc",
"html_url": "https://github.com/donelianc",
"followers_url": "https://api.github.com/users/donelianc/followers",
"following_url": "https://api.github.com/users/donelianc/following{/other_user}",
"gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donelianc/subscriptions",
"organizations_url": "https://api.github.com/users/donelianc/orgs",
"repos_url": "https://api.github.com/users/donelianc/repos",
"events_url": "https://api.github.com/users/donelianc/events{/privacy}",
"received_events_url": "https://api.github.com/users/donelianc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@omarespejel could you take a look at this PR Please?",
"Hola @donelianc! Thank you very much for your translation. Sorry for the delay in replying; it won't happen again.\r\n\r\nI made some small comments to be applied.\r\n\r\nThank you!",
"@omarespejel I committed the suggested changes :) If you don't mind, can you assign me the translation of [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx)? I'll be happy to help with one more doc."
] | 1,655
| 1,659
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add the Spanish translation for `custom_models.mdx` as part of the #15947 issue.
Changes include the Spanish version of the original document and the updated `_toctree.yml` file.
Tried to follow the [guideline](https://github.com/huggingface/transformers/blob/26a6a426087582c48593f8be980603951a7bcddd/CONTRIBUTING.md#start-contributing-pull-requests) to generate the Markdown files and check them before submitting this PR but the command `doc-builder build transformers docs/source/ --build_dir ~/tmp/test-build` fails since the `_toctree.yml` file is no longer in `./docs/source/` (but in `./docs/source/en/`).
First contribution to the 🤗 Transformers project!
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Task assignment [here](https://github.com/huggingface/transformers/issues/15947#issuecomment-1161854258).
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] ~~Did you write any new necessary tests?~~
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Tagging @omarespejel, @osanseviero, or @sgugger to review or assign reviewers :)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17807/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17807",
"html_url": "https://github.com/huggingface/transformers/pull/17807",
"diff_url": "https://github.com/huggingface/transformers/pull/17807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17807.patch",
"merged_at": 1658844638000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17806
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17806/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17806/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17806/events
|
https://github.com/huggingface/transformers/pull/17806
| 1,278,926,305
|
PR_kwDOCUB6oc46Dgsf
| 17,806
|
Add TF DeiT implementation
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Please also incorporate the updates of #17731 ",
"> Please also incorporate the updates of https://github.com/huggingface/transformers/pull/17731\r\n\r\n@NielsRogge Will do! I originally added, but it resulted in lots of changes because of all the `Copied From` statements, so will wait until your PR is merged and those updates finalised. ",
"@sgugger could you maybe give it a quick review as it's vision? (otherwise happy to do it if you're busy)"
] | 1,655
| 1,657
| 1,657
|
COLLABORATOR
| null |
# What does this PR do?
Adds the TF implementation of DeiT
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17806/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17806",
"html_url": "https://github.com/huggingface/transformers/pull/17806",
"diff_url": "https://github.com/huggingface/transformers/pull/17806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17806.patch",
"merged_at": 1657731848000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17805
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17805/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17805/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17805/events
|
https://github.com/huggingface/transformers/pull/17805
| 1,278,711,224
|
PR_kwDOCUB6oc46Cx0U
| 17,805
|
Add logits_processor parameter, used by `generate`, to `Seq2SeqTrainer` methods `evaluate` and `predict`
|
{
"login": "eranhirs",
"id": 3372820,
"node_id": "MDQ6VXNlcjMzNzI4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3372820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eranhirs",
"html_url": "https://github.com/eranhirs",
"followers_url": "https://api.github.com/users/eranhirs/followers",
"following_url": "https://api.github.com/users/eranhirs/following{/other_user}",
"gists_url": "https://api.github.com/users/eranhirs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eranhirs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eranhirs/subscriptions",
"organizations_url": "https://api.github.com/users/eranhirs/orgs",
"repos_url": "https://api.github.com/users/eranhirs/repos",
"events_url": "https://api.github.com/users/eranhirs/events{/privacy}",
"received_events_url": "https://api.github.com/users/eranhirs/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger What are your thoughts about adding to the `test_finetune_bert2bert` a call to `trainer.evaluate()` and `trainer.predict(eval_dataset)`?",
"Sure, we can add this. But is it to test any of this functionality? If not, it should go in its own PR.",
"It somewhat tests this functionality by calling the methods I changed, but not directly. I can add it later in its own PR.",
"Merging this one then, thanks again for your contribution!"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Following the discussion in #17748, this PR adds the `logits_processor` param to Seq2SeqTrainer `predict` and `evaluate` methods, for easy extensibility.
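As context for what the new parameter threads through: `generate` applies each processor in a `LogitsProcessorList` to the scores at every decoding step. A framework-free sketch of that composition pattern (illustrative only, not the transformers implementation):

```python
class BanTokenProcessor:
    """Toy logits processor: bans a given token id by setting its
    score to -inf, mimicking how a processor edits the scores that
    generate produces at each decoding step."""
    def __init__(self, banned_id):
        self.banned_id = banned_id

    def __call__(self, input_ids, scores):
        scores = list(scores)  # don't mutate the caller's scores
        scores[self.banned_id] = float("-inf")
        return scores

def apply_processors(processors, input_ids, scores):
    """Apply each processor in order, like LogitsProcessorList.__call__."""
    for proc in processors:
        scores = proc(input_ids, scores)
    return scores
```

Exposing `logits_processor` on `evaluate`/`predict` simply lets callers pass such a list down to the `generate` call those methods make internally.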
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17805/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17805",
"html_url": "https://github.com/huggingface/transformers/pull/17805",
"diff_url": "https://github.com/huggingface/transformers/pull/17805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17805.patch",
"merged_at": 1655899900000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17804
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17804/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17804/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17804/events
|
https://github.com/huggingface/transformers/issues/17804
| 1,278,710,213
|
I_kwDOCUB6oc5MN5HF
| 17,804
|
`seed_generator` would cause TensorFlow GPU memory growth immediately
|
{
"login": "Atakey",
"id": 29856062,
"node_id": "MDQ6VXNlcjI5ODU2MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/29856062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Atakey",
"html_url": "https://github.com/Atakey",
"followers_url": "https://api.github.com/users/Atakey/followers",
"following_url": "https://api.github.com/users/Atakey/following{/other_user}",
"gists_url": "https://api.github.com/users/Atakey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Atakey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Atakey/subscriptions",
"organizations_url": "https://api.github.com/users/Atakey/orgs",
"repos_url": "https://api.github.com/users/Atakey/repos",
"events_url": "https://api.github.com/users/Atakey/events{/privacy}",
"received_events_url": "https://api.github.com/users/Atakey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @Atakey 👋 Great finding! Would you be able to open a PR? The suggestion you gave looks great!"
] | 1,655
| 1,657
| 1,657
|
NONE
| null |
https://github.com/huggingface/transformers/blob/7cced021fa8ddc59f0f77384300760d34545394e/src/transformers/generation_tf_utils.py#L349
In transformers version >=4.18.0, importing anything from `generation_tf_utils.py` causes GPU memory to grow immediately unless the env var `TF_FORCE_GPU_ALLOW_GROWTH=true` is set before importing modules from this file.
I have located the problem at this line.
Making the `seed_generator` attribute lazily loaded would solve this.
```python
# ...
class TFGenerationMixin:
"""
A class containing all of the functions supporting generation, to be used as a mixin in [`TFPreTrainedModel`].
"""
# seed_generator = tf.random.Generator.from_non_deterministic_state()
_seed_generator = None
@property
def seed_generator(self):
if self._seed_generator is None:
self._seed_generator = tf.random.Generator.from_non_deterministic_state()
return self._seed_generator
# ...
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17804/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17803
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17803/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17803/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17803/events
|
https://github.com/huggingface/transformers/pull/17803
| 1,278,671,649
|
PR_kwDOCUB6oc46Cpcd
| 17,803
|
Fix test for BF16 detection
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
When `no_cuda=True`, we shouldn't try to detect the support for BF16 GPU.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17803/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17803/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17803",
"html_url": "https://github.com/huggingface/transformers/pull/17803",
"diff_url": "https://github.com/huggingface/transformers/pull/17803.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17803.patch",
"merged_at": 1655829075000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17802
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17802/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17802/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17802/events
|
https://github.com/huggingface/transformers/pull/17802
| 1,278,650,591
|
PR_kwDOCUB6oc46Ck9t
| 17,802
|
Properly check for a TPU device in is_torch_tpu_available
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I have the same problem in vertex AI. It is not solved even with `pip install git+https://github.com/huggingface/transformers` \r\nThe weird thing is that I don't even use a TPU!\r\n\r\nI made it work by `!pip uninstall torch-xla` (at least on vertex, I don't know about other env.)\r\n\r\nthanks :)"
] | 1,655
| 1,656
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR improves the check in `is_torch_tpu_available` for whether a TPU is actually present in the environment, by probing `xm.tpu_device()`. This raises a `RuntimeError` if a TPU device is not found.
Mimics solution in https://github.com/huggingface/accelerate/pull/456
Fixes # (issue)
https://github.com/huggingface/transformers/issues/17752
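The probe-and-catch pattern described above can be sketched in isolation. This is an illustrative stand-in, not the actual transformers implementation: the helper names below are hypothetical, and in the real check the probe is `xm.tpu_device()` from `torch_xla`.

```python
# Illustrative sketch of the detection pattern this PR describes: probe for a
# device and treat ImportError/RuntimeError as "not available". Names here are
# hypothetical stand-ins for the real transformers/torch_xla code.
def device_available(probe):
    try:
        probe()  # e.g. xm.tpu_device() in the real implementation
        return True
    except (ImportError, RuntimeError):
        return False

def fake_tpu_probe():
    # Stands in for xm.tpu_device() on a machine without a TPU.
    raise RuntimeError("no TPU device found")

print(device_available(fake_tpu_probe))  # False
```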
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17802/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17802",
"html_url": "https://github.com/huggingface/transformers/pull/17802",
"diff_url": "https://github.com/huggingface/transformers/pull/17802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17802.patch",
"merged_at": 1655833195000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17801
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17801/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17801/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17801/events
|
https://github.com/huggingface/transformers/pull/17801
| 1,278,586,104
|
PR_kwDOCUB6oc46CXOF
| 17,801
|
TF: generate without `tf.TensorArray`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @ydshieh -- this PR fixes the XLNet generate error we have been seeing :)",
"Cool!",
"@Rocketknight1 no differences in terms of execution speed 👍 \r\n\r\n`GPT2` `sample` on a 3090, average of 10 runs (excluding compilation time)\r\n- Eager: 884 ms -> 888 ms\r\n- XLA: 29.2 ms -> 29.3 ms\r\n- JAX: 19.7 ms"
] | 1,655
| 1,655
| 1,655
|
MEMBER
| null |
# What does this PR do?
Some models, like XLNet, need more than just the previous token when `past` is used. This PR solves this problem with the help of some refactoring -- we no longer use `TensorArray`, instead we scatter updates into a fixed-size tensor. This refactor simplifies `generate`, especially `beam_search`, which may prove to be helpful in enabling XLA.
Slow tests have been run for the usual generate models (gpt2, t5, rag, speech_to_text, encoder_decoder, vision_encoder_decoder, bart).
### Why was this refactor needed?
As can be read in [this issue](https://github.com/tensorflow/tensorflow/issues/56272), `TensorArray` is meant to be used as a write-once array; anything else falls into unexpected-behavior territory -- in other words, our use was dangerous. The original solution to the XLNet problem was to read all existing tokens from the `TensorArray`, using the same logic as in this PR, but it failed with XLA, and the behavior depended on what was written into the variable on its first write. Since we use fixed-size tensors, a normal tensor works just fine, and with simpler code (assuming the reader is familiar with how scatter works :D ).
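The scatter-into-a-fixed-buffer idea can be sketched without TensorFlow. The sketch below is a pure-Python stand-in for scattering updates into a fixed-shape tensor (analogous to `tf.tensor_scatter_nd_update`); the function names and toy step function are illustrative, not the actual `generate` internals.

```python
# Hedged sketch: instead of appending tokens to a write-once TensorArray,
# preallocate a fixed-size buffer and write each generated token into its slot.
PAD = 0

def generate_fixed_buffer(prompt, step_fn, max_length):
    # Preallocate the full output up front (analogous to a fixed-shape tensor).
    buffer = list(prompt) + [PAD] * (max_length - len(prompt))
    for pos in range(len(prompt), max_length):
        # step_fn can see *all* tokens written so far, which is what models
        # like XLNet need -- not just the previous token.
        next_token = step_fn(buffer[:pos])
        buffer[pos] = next_token  # "scatter" the update into a fixed slot
    return buffer

# Toy step function: next token is the sum of seen tokens, capped at 9.
out = generate_fixed_buffer([1, 2], lambda seen: min(sum(seen), 9), max_length=5)
print(out)  # [1, 2, 3, 6, 9]
```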
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17801/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17801",
"html_url": "https://github.com/huggingface/transformers/pull/17801",
"diff_url": "https://github.com/huggingface/transformers/pull/17801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17801.patch",
"merged_at": 1655983689000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17800
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17800/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17800/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17800/events
|
https://github.com/huggingface/transformers/pull/17800
| 1,278,580,006
|
PR_kwDOCUB6oc46CV6U
| 17,800
|
Fix forward reference imports in DeBERTa configs
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"FYI @JingyaHuang @michaelbenayoun I'll take a look at the slow test tomorrow :)"
] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
#17617 introduced some imports that will create cyclical references, just for type hinting. This PR makes them only imported in a type checking block.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17800/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17800",
"html_url": "https://github.com/huggingface/transformers/pull/17800",
"diff_url": "https://github.com/huggingface/transformers/pull/17800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17800.patch",
"merged_at": 1655824867000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17799
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17799/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17799/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17799/events
|
https://github.com/huggingface/transformers/pull/17799
| 1,278,538,666
|
PR_kwDOCUB6oc46CNDf
| 17,799
|
add MobileNetV1 model
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Note: The checkpoints still point to my own account, but should be changed to `google` once the changes are approved.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17799). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17799). All of your documentation changes will be reflected on that endpoint.",
"I've rebased it so that it can merged. Perhaps @sgugger or @NielsRogge could merge it?"
] | 1,655
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the MobileNet V1 model to the library.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17799/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17799",
"html_url": "https://github.com/huggingface/transformers/pull/17799",
"diff_url": "https://github.com/huggingface/transformers/pull/17799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17799.patch",
"merged_at": 1669044088000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17798
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17798/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17798/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17798/events
|
https://github.com/huggingface/transformers/pull/17798
| 1,278,492,619
|
PR_kwDOCUB6oc46CDCH
| 17,798
|
Update CodeParrot readme to include training in Megatron
|
{
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
This PR updates the README to explain how to train CodeParrot with [Megatron](https://github.com/NVIDIA/Megatron-LM), and redirects model and dataset imports to [CodeParrot organization](https://huggingface.co/codeparrot).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17798/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17798",
"html_url": "https://github.com/huggingface/transformers/pull/17798",
"diff_url": "https://github.com/huggingface/transformers/pull/17798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17798.patch",
"merged_at": 1658915948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17797
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17797/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17797/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17797/events
|
https://github.com/huggingface/transformers/pull/17797
| 1,278,418,090
|
PR_kwDOCUB6oc46By0m
| 17,797
|
Fix properties of unset special tokens in non verbose mode
|
{
"login": "guillaumekln",
"id": 4805513,
"node_id": "MDQ6VXNlcjQ4MDU1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4805513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaumekln",
"html_url": "https://github.com/guillaumekln",
"followers_url": "https://api.github.com/users/guillaumekln/followers",
"following_url": "https://api.github.com/users/guillaumekln/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaumekln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaumekln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaumekln/subscriptions",
"organizations_url": "https://api.github.com/users/guillaumekln/orgs",
"repos_url": "https://api.github.com/users/guillaumekln/repos",
"events_url": "https://api.github.com/users/guillaumekln/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaumekln/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #17796.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@n1t0, @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17797/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17797",
"html_url": "https://github.com/huggingface/transformers/pull/17797",
"diff_url": "https://github.com/huggingface/transformers/pull/17797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17797.patch",
"merged_at": 1655988014000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17796
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17796/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17796/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17796/events
|
https://github.com/huggingface/transformers/issues/17796
| 1,278,361,309
|
I_kwDOCUB6oc5MMj7d
| 17,796
|
Properties of unset special tokens return the string 'None' in non verbose Tokenizers
|
{
"login": "guillaumekln",
"id": 4805513,
"node_id": "MDQ6VXNlcjQ4MDU1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4805513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaumekln",
"html_url": "https://github.com/guillaumekln",
"followers_url": "https://api.github.com/users/guillaumekln/followers",
"following_url": "https://api.github.com/users/guillaumekln/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaumekln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaumekln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaumekln/subscriptions",
"organizations_url": "https://api.github.com/users/guillaumekln/orgs",
"repos_url": "https://api.github.com/users/guillaumekln/repos",
"events_url": "https://api.github.com/users/guillaumekln/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaumekln/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @guillaumekln ,\r\n\r\nThanks a lot for letting us know about this issue! ~~I'm planning to fix it in the PR #17824~~ EDIT: sorry, I haven't seen your PR. Closing mine in favor of yours :hugs: ",
"Thanks for the update! I also suggested a fix here #17797. Feel free to close this PR if needed.",
"Let's keep yours! "
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27
- Python version: 3.8.0
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For example, `MarianTokenizer` does not set `bos_token`. The corresponding property returns the string `'None'`:
```python
>>> import transformers
>>> tokenizer = transformers.MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de", verbose=False)
>>> tokenizer.bos_token
'None'
```
### Expected behavior
```shell
The property should return None, not the string 'None'.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17796/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17795
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17795/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17795/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17795/events
|
https://github.com/huggingface/transformers/issues/17795
| 1,278,320,280
|
I_kwDOCUB6oc5MMZ6Y
| 17,795
|
Big Model Inference: OOM with simple forward pass
|
{
"login": "ggbetz",
"id": 3662782,
"node_id": "MDQ6VXNlcjM2NjI3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggbetz",
"html_url": "https://github.com/ggbetz",
"followers_url": "https://api.github.com/users/ggbetz/followers",
"following_url": "https://api.github.com/users/ggbetz/following{/other_user}",
"gists_url": "https://api.github.com/users/ggbetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggbetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggbetz/subscriptions",
"organizations_url": "https://api.github.com/users/ggbetz/orgs",
"repos_url": "https://api.github.com/users/ggbetz/repos",
"events_url": "https://api.github.com/users/ggbetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggbetz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"You need to put your forward pass inside a `torch.no_grad()` context manager, otherwise you will get memory used for the activations saved for the backward pass (which is not supported anyway for big model inference), which is why you get OOM.",
"Thanks for the quick reply, @sgugger! With the context manager I get another error (which seems to relate to the fact the model resides on two GPUs?):\r\n\r\n```python\r\n# inputs and labels are on cuda:0\r\n>>> inputs\r\n{'input_ids': tensor([[16107, 10, 2405, 68, 497, 8, 6401, 5, 276, 9945,\r\n 751, 165, 1588, 581, 1386, 658, 5, 1]],\r\n device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],\r\n device='cuda:0')}\r\n>>> labels\r\n{'input_ids': tensor([[ 276, 9945, 1513, 165, 1588, 581, 1386, 658, 5, 1]],\r\n device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}\r\n\r\n# forward:\r\n>>> with torch.no_grad():\r\n... output = model(**inputs, labels=labels['input_ids'])\r\n... \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 2, in <module>\r\n File \"/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py\", line 148, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py\", line 1671, in forward\r\n loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))\r\n File \"/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/loss.py\", line 1163, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n File \"/home/###-py3.8/lib/python3.8/site-packages/torch/nn/functional.py\", line 2996, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, 
cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_nll_loss_forward)\r\n```",
"Indeed, your labels will need to be on the same device as the last layer of the model if you want to compute the loss inside the model (here device 1).",
"Thanks so much, that makes sense -- and works:\r\n\r\n```python\r\n>>> labels = labels.to(1)\r\n>>> with torch.no_grad():\r\n... output = model(**inputs, labels=labels['input_ids'])\r\n>>> output[0].tolist()\r\n0.9372970461845398\r\n```\r\n\r\n(As far as I'm concerned, this is solved.)"
] | 1,655
| 1,656
| 1,655
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.0
- Platform: Linux-5.11.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:01:00.0 Off | N/A |
| 30% 50C P8 13W / 250W | 5MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:02:00.0 Off | N/A |
| 29% 42C P8 27W / 250W | 5MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1237 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1237 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Loading `T03B` for big model inference:
```python
$ python3
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
```
2. The model is distributed over two GPUs:
```python
>>> model.hf_device_map
{'shared': 0, 'decoder': 0, 'encoder.embed_tokens': 0, 'encoder.block.0': 0, 'encoder.block.1': 0, 'encoder.block.2': 0, 'encoder.block.3': 0, 'encoder.block.4': 0, 'encoder.block.5': 0, 'encoder.block.6': 0, 'encoder.block.7': 0, 'encoder.block.8': 0, 'encoder.block.9': 0, 'encoder.block.10': 0, 'encoder.block.11': 0, 'encoder.block.12': 0, 'encoder.block.13': 0, 'encoder.block.14': 0, 'encoder.block.15': 0, 'encoder.block.16': 0, 'encoder.block.17': 0, 'encoder.block.18': 0, 'encoder.block.19': 1, 'encoder.block.20': 1, 'encoder.block.21': 1, 'encoder.block.22': 1, 'encoder.block.23': 1, 'encoder.final_layer_norm': 1, 'encoder.dropout': 1, 'lm_head': 1}
```
3. Generation works fine:
```python
>>> inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt")
>>> inputs = inputs.to(0)
>>> tokenizer.decode(model.generate(inputs['input_ids'])[0].tolist())
'<pad> Paris St-Germain beat Barcelona 1-0 in their Champions League Group B match on Tuesday.</s>'
```
4. But a simple forward pass throws an OOM error:
```python
>>> inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt")
>>> inputs = inputs.to(0)
>>> labels = tokenizer("PSG lost its match against Barca.", return_tensors="pt")
>>> labels = labels.to(0)
>>> output = model(**inputs, labels=labels['input_ids'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1601, in forward
encoder_outputs = self.encoder(
File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1033, in forward
layer_outputs = layer_module(
File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 716, in forward
hidden_states = self.layer[-1](hidden_states)
File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 326, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 305, in forward
hidden_gelu = self.act(self.wi_0(hidden_states))
File "/home/###-py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/###-py3.8/lib/python3.8/site-packages/transformers/activations.py", line 34, in forward
return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 10.76 GiB total capacity; 9.72 GiB already allocated; 3.69 MiB free; 9.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Expected behavior
```shell
There should be no OOM error, especially since generation works fine.
```
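The resolution in the comments is to wrap the forward pass in `torch.no_grad()`: with autograd enabled, every layer's activation is retained for the backward pass, which is what exhausts memory here even though generation (which runs in inference mode) succeeds. A framework-free toy sketch of that difference, purely illustrative:

```python
def forward(layers, x, keep_activations):
    # With autograd enabled, each layer's output is retained for backward;
    # under torch.no_grad() only the running value survives each step.
    saved = []
    for layer in layers:
        x = layer(x)
        if keep_activations:
            saved.append(x)
    return x, len(saved)

layers = [lambda v: v + 1 for _ in range(24)]  # 24 toy "encoder blocks"
_, n = forward(layers, 0, keep_activations=True)
print(n)  # 24 activations held -> memory grows with depth
_, n = forward(layers, 0, keep_activations=False)
print(n)  # 0 -> constant memory, as in inference under no_grad
```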
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17795/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17794
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17794/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17794/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17794/events
|
https://github.com/huggingface/transformers/issues/17794
| 1,278,299,271
|
I_kwDOCUB6oc5MMUyH
| 17,794
|
Token indices sequence length is longer than the specified maximum sequence length for this model (821 > 512). Running this sequence through the model will result in indexing errors
|
{
"login": "Roshni3499",
"id": 72002381,
"node_id": "MDQ6VXNlcjcyMDAyMzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/72002381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Roshni3499",
"html_url": "https://github.com/Roshni3499",
"followers_url": "https://api.github.com/users/Roshni3499/followers",
"following_url": "https://api.github.com/users/Roshni3499/following{/other_user}",
"gists_url": "https://api.github.com/users/Roshni3499/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Roshni3499/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Roshni3499/subscriptions",
"organizations_url": "https://api.github.com/users/Roshni3499/orgs",
"repos_url": "https://api.github.com/users/Roshni3499/repos",
"events_url": "https://api.github.com/users/Roshni3499/events{/privacy}",
"received_events_url": "https://api.github.com/users/Roshni3499/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThis question is better suited for our [forum](https://discuss.huggingface.co/), as we'd like to keep Github issues for bugs/feature requests.\r\n\r\nThis one is not a bug, you just need to truncate the inputs as they are longer than what the model expects:\r\n\r\n````\r\nencoding = tokenizer(text, truncation=True, return_tensors=\"pt\")\r\n````\r\n\r\nThanks!",
"Hey @NielsRogge shouldn't ideally in such a case there should be an error triggered or even if the warning is triggered the ideal nature of the tokenization process be to, forcefully truncate the input, as this could cause errors on a production grade system if left unchecked.",
"No we don't truncate by default, that would break the design of the library quite a bit. Users need to specify the truncation or padding behaviour themselves, as it can happen from the right or the left for instance, the max length can differ, etc.\r\n\r\nSee here for a guide on [\"everything you always wanted to know about tokenization\"](https://huggingface.co/docs/transformers/v4.15.0/preprocessing#everything-you-always-wanted-to-know-about-padding-and-truncation). ",
"Okay sure @NielsRogge, I understand now.\r\n"
] | 1,655
| 1,663
| 1,655
|
NONE
| null |
I am trying to generate a Boolean question using the T5 transformer, using [this](https://github.com/ramsrigouthamg/generate_boolean_questions_using_T5_transformer/blob/master/t5_inference.py) script as a reference.
```
def topkp_decoding (inp_ids,attn_mask):
topkp_output = model.generate(input_ids=inp_ids,
attention_mask=attn_mask,
max_length=256,
do_sample=True,
top_k=40,
top_p=0.80,
num_return_sequences=(len(z)*2),
no_repeat_ngram_size=2,
early_stopping=True
)
Questions = [tokenizer.decode(out, skip_special_tokens=True,clean_up_tokenization_spaces=True) for out in topkp_output]
return [Question.strip().capitalize() for Question in Questions]
```
```
start = time.time()
passage =wo
truefalse ="no" + "yes"
#text = "truefalse: %s passage: %s </s>" % (passage, truefalse)
max_len = 256
encoding = tokenizer.encode_plus(text, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
#print ("Context: ",passage)
global output
output = beam_search_decoding(input_ids,attention_masks)
#print ("\nBeam decoding [Most accurate questions] ::\n")
#for out in output:
#print(out)
global outputs
outputs = topkp_decoding(input_ids,attention_masks)
#print ("\nTopKP decoding [Not very accurate but more variety in questions] ::\n")
#for out in outputs:
#print (out)
end = time.time()
#print ("\nTime elapsed ", end-start)
#print ("\n")
```
But I am getting the error **Token indices sequence length is longer than the specified maximum sequence length for this model (821 > 512). Running this sequence through the model will result in indexing errors** while running this code in Jupyter. How can I solve this issue?
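As suggested in the first reply, passing `truncation=True` to the tokenizer resolves this. Conceptually, truncation just clips the encoded id sequence to the model's limit; a hypothetical helper shown for illustration:

```python
def truncate_ids(token_ids, max_length=512):
    # What tokenizer(..., truncation=True, max_length=512) does to the ids:
    # keep at most max_length tokens so no position exceeds the model's limit.
    return token_ids[:max_length]

ids = list(range(821))  # stands in for the 821-token encoded passage
print(len(truncate_ids(ids)))  # 512
```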
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17794/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17793
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17793/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17793/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17793/events
|
https://github.com/huggingface/transformers/issues/17793
| 1,278,245,525
|
I_kwDOCUB6oc5MMHqV
| 17,793
|
IndexError with Reformer Model when padding the sequence
|
{
"login": "RobinGeibel",
"id": 51984028,
"node_id": "MDQ6VXNlcjUxOTg0MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/51984028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobinGeibel",
"html_url": "https://github.com/RobinGeibel",
"followers_url": "https://api.github.com/users/RobinGeibel/followers",
"following_url": "https://api.github.com/users/RobinGeibel/following{/other_user}",
"gists_url": "https://api.github.com/users/RobinGeibel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobinGeibel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobinGeibel/subscriptions",
"organizations_url": "https://api.github.com/users/RobinGeibel/orgs",
"repos_url": "https://api.github.com/users/RobinGeibel/repos",
"events_url": "https://api.github.com/users/RobinGeibel/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobinGeibel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If I add the Eos token as padding token it works. In any case, the tokenizer tells me to add a padding token. I do believe that the documentation says that one should exit however.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
Hi,
I am trying to use Reformer for text classification, but I am getting the following RuntimeError. I tried both using a custom head and Huggingface's Reformer for sequence classification.
The code runs fine when I swap Reformer for BigBird. I am using the google/reformer-crime-and-punishment checkpoint and have padded the text to max_length.
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-27-5c75e8f9ad52>](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in <module>()
9 for batch in training_loader:
10 batch = {k: v.to(config['device']) for k, v in batch.items()}
---> 11 outputs = model(**batch)
12 loss = criterion(outputs, batch['labels'])
13 loss.backward()
8 frames
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, position_ids, attention_mask, head_mask, inputs_embeds, num_hashes, labels, output_hidden_states, output_attentions, return_dict)
2554 output_hidden_states=output_hidden_states,
2555 output_attentions=output_attentions,
-> 2556 return_dict=return_dict,
2557 )
2558
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, num_hashes, past_buckets_states, use_cache, output_hidden_states, output_attentions, return_dict)
2102 position_ids=position_ids,
2103 inputs_embeds=inputs_embeds,
-> 2104 start_idx_pos_encodings=start_idx_pos_encodings,
2105 )
2106
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, position_ids, inputs_embeds, start_idx_pos_encodings)
253
254 if inputs_embeds is None:
--> 255 inputs_embeds = self.word_embeddings(input_ids)
256
257 if position_ids.shape[-1] > self.max_position_embeddings:
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input)
158 return F.embedding(
159 input, self.weight, self.padding_idx, self.max_norm,
--> 160 self.norm_type, self.scale_grad_by_freq, self.sparse)
161
162 def extra_repr(self) -> str:
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2181 # remove once script supports set_grad_enabled
2182 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2183 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2184
2185
RuntimeError: CUDA error: device-side assert triggered
```
I have tried the following example on CPU and I think padding is the problem:
This works:
```python
from transformers import ReformerTokenizer, ReformerModel
import torch
tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute"], return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
This doesn't:
```python
from transformers import ReformerTokenizer, ReformerModel
import torch
tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute"], return_tensors="pt", padding='max_length', max_length=524288)
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
Error:
```
IndexError Traceback (most recent call last)
[<ipython-input-68-d9b515284501>](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in <module>()
7
8 inputs = tokenizer(["Hello, my dog is cute", "Hello, my dog is cute"], return_tensors="pt", padding='max_length', max_length=524288)
----> 9 outputs = model(**inputs)
10
11 last_hidden_states = outputs.last_hidden_state
6 frames
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, num_hashes, past_buckets_states, use_cache, output_hidden_states, output_attentions, return_dict)
2102 position_ids=position_ids,
2103 inputs_embeds=inputs_embeds,
-> 2104 start_idx_pos_encodings=start_idx_pos_encodings,
2105 )
2106
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input_ids, position_ids, inputs_embeds, start_idx_pos_encodings)
253
254 if inputs_embeds is None:
--> 255 inputs_embeds = self.word_embeddings(input_ids)
256
257 if position_ids.shape[-1] > self.max_position_embeddings:
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in forward(self, input)
158 return F.embedding(
159 input, self.weight, self.padding_idx, self.max_norm,
--> 160 self.norm_type, self.scale_grad_by_freq, self.sparse)
161
162 def extra_repr(self) -> str:
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://a55g9zay06c-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220616-060045-RC00_455344893#) in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2181 # remove once script supports set_grad_enabled
2182 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2183 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2184
2185
IndexError: index out of range in self
```
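A likely cause (an assumption, not confirmed in this thread): `add_special_tokens({'pad_token': '[PAD]'})` assigns the new token an id equal to the old vocabulary size, and without a matching `model.resize_token_embeddings(len(tokenizer))` that id falls outside the embedding table. A plain-Python sketch of the failing lookup:

```python
def embedding_lookup(table, token_id):
    # Mirrors the bounds check behind torch.nn.functional.embedding:
    # ids must be < len(table), otherwise "index out of range in self".
    if not 0 <= token_id < len(table):
        raise IndexError("index out of range in self")
    return table[token_id]

vocab_size = 100                       # toy vocabulary
table = [[0.0] * 8 for _ in range(vocab_size)]
pad_id = vocab_size                    # a token added after training gets the next id
try:
    embedding_lookup(table, pad_id)
except IndexError as err:
    print(err)  # index out of range in self
```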
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17793/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17792
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17792/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17792/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17792/events
|
https://github.com/huggingface/transformers/issues/17792
| 1,278,243,279
|
I_kwDOCUB6oc5MMHHP
| 17,792
|
__init__() got an unexpected keyword argument '_name_or_path'
|
{
"login": "rajnishrajput12",
"id": 43354631,
"node_id": "MDQ6VXNlcjQzMzU0NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/43354631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajnishrajput12",
"html_url": "https://github.com/rajnishrajput12",
"followers_url": "https://api.github.com/users/rajnishrajput12/followers",
"following_url": "https://api.github.com/users/rajnishrajput12/following{/other_user}",
"gists_url": "https://api.github.com/users/rajnishrajput12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajnishrajput12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajnishrajput12/subscriptions",
"organizations_url": "https://api.github.com/users/rajnishrajput12/orgs",
"repos_url": "https://api.github.com/users/rajnishrajput12/repos",
"events_url": "https://api.github.com/users/rajnishrajput12/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajnishrajput12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @rajnishrajput12 \r\n\r\nIt might be better to ask on the [community tab](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens/discussions) of the model on the Hub. Otherwise, on [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Close as this is a question for the library `sentence_transformer`"
] | 1,655
| 1,658
| 1,658
|
NONE
| null |
### System Info
```shell
sentence_transformer version 2.2.0
windows
python version- 3.9.12
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying the following code :-
1. It gave me the error: no file "C:\\Users\\ra*****\\Downloads\\bert-base-nli-mean-tokens\\1_Pooling\\config.json".
Then I created the folder 1_Pooling and kept the downloaded JSON there, but then it gave me the error:
2. __init__() got an unexpected keyword argument '_name_or_path'
NOTE: I downloaded all the files from https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens/tree/main
from sentence_transformers import SentenceTransformer
embedder = SentenceTransformer(r'C:\Users\raj****\Downloads\bert-base-nli-mean-tokens')
corpus = ['A man is eating food.',
'A man is eating a piece of bread.',
'The girl is carrying a baby.',
'A man is riding a horse.',
'A woman is playing violin.',
'Two men pushed carts through the woods.',
'A man is riding a white horse on an enclosed ground.',
'A monkey is playing drums.',
'A cheetah is running behind its prey.']
corpus_embeddings = embedder.encode(corpus)
sentence_transformer version 2.2.0
### Expected behavior
```shell
we should be able to load the sentence_transformer from local
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17792/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17791
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17791/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17791/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17791/events
|
https://github.com/huggingface/transformers/pull/17791
| 1,278,207,641
|
PR_kwDOCUB6oc46BFj6
| 17,791
|
Add link to Albumentations notebook
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a link to the image classification with Albumentations notebook (present in our Notebooks repository).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17791/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17791",
"html_url": "https://github.com/huggingface/transformers/pull/17791",
"diff_url": "https://github.com/huggingface/transformers/pull/17791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17791.patch",
"merged_at": 1655815988000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17790
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17790/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17790/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17790/events
|
https://github.com/huggingface/transformers/issues/17790
| 1,278,198,203
|
I_kwDOCUB6oc5ML8G7
| 17,790
|
OPT-350m cannot be loaded from local files generated using the save_pretrained method
|
{
"login": "greg2451",
"id": 51173502,
"node_id": "MDQ6VXNlcjUxMTczNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/51173502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greg2451",
"html_url": "https://github.com/greg2451",
"followers_url": "https://api.github.com/users/greg2451/followers",
"following_url": "https://api.github.com/users/greg2451/following{/other_user}",
"gists_url": "https://api.github.com/users/greg2451/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greg2451/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greg2451/subscriptions",
"organizations_url": "https://api.github.com/users/greg2451/orgs",
"repos_url": "https://api.github.com/users/greg2451/repos",
"events_url": "https://api.github.com/users/greg2451/events{/privacy}",
"received_events_url": "https://api.github.com/users/greg2451/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"From what I understand, this is related to a known issue that is unique to opt-350m amongst the opt models, because the `hidden_size` (512) is different from the `word_embed_proj_dim` (1024).\r\n\r\nWhen I load the opt model using the hub and do `opt350m.lm_head.weight.shape` I have `torch.Size([50272, 512])`. \r\n\r\nWhen I manually load the weight files saved by the `save_pretrained` method, and go to the lm_head weight, it is also `torch.Size([50272, 512])`\r\n\r\nBut for some reason, as the lm_head is a `Linear(in_features=1024, out_features=50272, bias=False)`, there is a problem loading the weights in the model.\r\n\r\nI have tried to reproduce with other models (namely `opt-125m` and `opt-1.3b`), but it works well with them.",
"Hmm sorry I did not see it was fixed in last release. Duplicate of #17389. Closing",
"Just to clarify, I just ran the command with Transformers >= 4.20.1 and it works as expected. Are there still any problems?",
"Yes @patrickvonplaten, now it works as expected, just had to bump my transformers version"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.19.3
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
```
### Who can help?
@younesbelkada @patrickvonplaten @Lys
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load opt-350m model from the hub with `AutoModelForCausalLM.from_pretrained('facebook/opt-350m')`
2. Save the model using `model.save_pretrained('save_dir/')`
3. Try loading back the model with `AutoModelForCausalLM.from_pretrained('save_dir/')`
Full code:
```python
from transformers import AutoModelForCausalLM
opt350m = AutoModelForCausalLM.from_pretrained('facebook/opt-350m')
opt350m.save_pretrained("local_save_dir/")
loaded_opt = AutoModelForCausalLM.from_pretrained('local_save_dir/')
```
### Expected behavior
```shell
A RuntimeError will be raised when loading the model from save_dir/
Logs:
Traceback (most recent call last):
File "/Users/gregoireretourne/opt/miniconda3/envs/health/lib/python3.9/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 5, in <module>
File "lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "lib/python3.9/site-packages/transformers/modeling_utils.py", line 2059, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "lib/python3.9/site-packages/transformers/modeling_utils.py", line 2251, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for OPTForCausalLM:
size mismatch for lm_head.weight: copying a param with shape torch.Size([50272, 512]) from checkpoint, the shape in current model is torch.Size([50272, 1024]).
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17790/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17789
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17789/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17789/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17789/events
|
https://github.com/huggingface/transformers/issues/17789
| 1,278,194,261
|
I_kwDOCUB6oc5ML7JV
| 17,789
|
assertEqual of non-frozen parameters in test_resume_training_with_frozen_params
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Please use the forums for questions like this as we keep issues for bugs and feature requests only.\r\n\r\nb and b1 should be equal because they are the results of the same trainings. One started from scratch and one resumed from an intermediate checkpoint. If that test fails, you have a problem in reproducibility on HPUs.",
"Sure, sorry for the inconvenience!\r\n\r\nYep I thought `checkpoint-5` comes from a completely different run, closing this as this issue does not come from the test definition."
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0
- Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0a0+gita4c10ee (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
- Specific hardware: Habana HPU
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour:
1. Setup an AWS DL1 instance
2. Clone [optimum-habana](https://github.com/huggingface/optimum-habana)
3. Install the package with `pip install optimum-habana/[tests]`
4. Uncomment `test_resume_training_with_frozen_params` in `optimum-habana/tests/test_trainer.py`
5. Run `pytest tests/test_trainer.py -k "frozen"`
### Expected behavior
`test_resume_training_with_frozen_params` in `tests/trainer/test_trainer.py` should not assert if `b` and `b1` are equal [here](https://github.com/huggingface/transformers/blob/eb16be415a74328e5ab62e050330a43054f6bd11/tests/trainer/test_trainer.py#L1429). While `a` is frozen, `b` is not so why should `b` and `b1` be equal?
I understand I use very specific hardware and that the test passes on a GPU, but I actually think it should not pass on the latter and would like to understand why it does :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17789/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17788
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17788/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17788/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17788/events
|
https://github.com/huggingface/transformers/pull/17788
| 1,278,164,095
|
PR_kwDOCUB6oc46A8Y7
| 17,788
|
Fix artifact path for cuda extension test in push CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
Fix artifact path for cuda extension test in push CI
### More details
In #17335, `working-directory` was updated (a fix) for the (single-GPU) CUDA extension test, but the artifact path was not updated, which led to a strange Slack report.
```bash
- name: Run all non-slow selected tests on GPU
working-directory: /workspace/transformers
...
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17788/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17788",
"html_url": "https://github.com/huggingface/transformers/pull/17788",
"diff_url": "https://github.com/huggingface/transformers/pull/17788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17788.patch",
"merged_at": 1655980283000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17787
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17787/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17787/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17787/events
|
https://github.com/huggingface/transformers/pull/17787
| 1,278,156,182
|
PR_kwDOCUB6oc46A6tH
| 17,787
|
Add MVP model
|
{
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @StevenTang1998, \r\n\r\nCool to see a new model here! Do you need help with the failing CI? ",
"Thanks for your concern. And I really met two issues.\r\n1. I want to add two tokens in the tokenizer. I tried two ways, but they both didn't pass the test.\r\n- add them to the `additional_special_tokens` during tokenizer init.\r\n- use `tokenizer.unique_no_split_tokens` to add them.\r\n\r\n2. Another issue is my model didn't pass the test `MvpModelTest.test_beam_sample_generate`, I am a little confused why my model passed other generation tests, while failed in this one. And I didn't understand the motivation of this test, so I don't have some ideas to fix it.\r\n\r\nCould you offer some instructions about these issues? Thanks very much!",
"Hi, @patrickvonplaten. I have fixed the tokenizer issue.\r\n\r\nThe model issue is\r\n```\r\nE AssertionError: Lists differ: [[2, 84, 28, 28], [2, 51, 35, 35], [2, 51, 28, 28], [2, 51, 28, 51]] != [[2, 0, 0, 12], [2, 0, 29, 29], [2, 0, 0, 12], [2, 0, 0, 12]]\r\nE \r\nE First differing element 0:\r\nE [2, 84, 28, 28]\r\nE [2, 0, 0, 12]\r\nE \r\nE - [[2, 84, 28, 28], [2, 51, 35, 35], [2, 51, 28, 28], [2, 51, 28, 51]]\r\nE + [[2, 0, 0, 12], [2, 0, 29, 29], [2, 0, 0, 12], [2, 0, 0, 12]]\r\n```\r\n\r\n- As for the model issue, I found the reason:\r\nMy model inherits from BART, and I set `forced_bos_token_id` explicitly.\r\nDuring the generation test (for example in [`_beam_sample_generate`](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L423)), the test [`generate`](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L440) option will reprocesss the `logits_processor` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L1253). So the `logits_processor` has `ForcedBOSTokenLogitsProcessor`. Whereas, the test [`beam_sample`](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L474) option only adds `InfNanRemoveLogitsProcessor` [here](https://github.com/huggingface/transformers/blob/main/tests/generation/test_generation_utils.py#L470).\r\nAccording to the error result, the first test option generates the `bos_token` (0) in the second position, where the second test option samples the second token.\r\n\r\n- Moreover, after I upload files, the failing CI contains errors not related to us.\r\n\r\nCould you offer some help to solve them?\r\n\r\n",
"Hi @patrickvonplaten, remove specifying `forced_bos_token_id` in the my Config can solve the issue.\r\n\r\nHowever, I still met some issues not related to my model. Could you offer some help?",
"Hey @StevenTang1998,\r\n\r\nYes indeed the failing tests are not your fault -> could you try to do the same as explained here: https://github.com/huggingface/transformers/pull/17784#issuecomment-1162446039",
"Hi @patrickvonplaten, thanks for your help! Now I passed all the test, so what is the next step?",
"Thanks for your comments, I will update it soon.",
"> 3.) I'd be slighly in favor of renaming all classes to MVPTokenizer and MVPModel because MVP is an acronym the actual name of the paper (but ok for me to leave here as well what do you think @sgugger ?)\r\n\r\nActually disagree rather strongly here ;-) BERT is an acronym and we still use Bert for the model classes. IMO it was a mistake to use all capital GPT, so would prefer keeping Mvp as is!",
"Hi @patrickvonplaten, thanks very much for your comments.\r\n- We have uploaded our paper in [here](https://github.com/RUCAIBox/MVP/blob/main/paper.pdf), it will be published on ArXiv at Mon, 27 Jun 2022 00:00:00 GMT. We will update the paper url once it announced.\r\n- We have added `# Copied from ...` statements everywhere where applicable.\r\n- Our model can be fine-tuned for sequence classification and question answering (we conducted experiments in our paper), so we reserve them.\r\n- According to the comment of @sgugger, maybe our model name do not need to be changed?\r\n- We have merged conflicts.\r\n",
"Hi @patrickvonplaten, we have update the arxiv paper link.",
"@sgugger Thanks for your comments! We will fix them following your advice.",
"@patrickvonplaten @sgugger Thanks for your valuable advice! I have made changes, please review them at your convenience.",
"Can you just explain why there can't be any Copied from statements for `MvpAttention`, `MvpEncoderLayer` and `MvpDecoderLayer`? The first twos don't have any answer on my comment and the last one as a cryptic \"same\". Thanks :-)",
"Hi @sgugger, I'm sorry, maybe I didn't press the Comment button yesterday.\r\nWe didn't add `Copied from` because we add prompts in these three modules. We are a little unclear about the `Copied from` mechanism: should we add it when the code is exactly the same, or when it is almost the same?",
"Thanks for your explanation! That was the last thing standing, so merging this new model. Thanks a lot for your contribution!",
"Thanks a lot for your patient comments and guidance!"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add MVP models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17787/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17787",
"html_url": "https://github.com/huggingface/transformers/pull/17787",
"diff_url": "https://github.com/huggingface/transformers/pull/17787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17787.patch",
"merged_at": 1656509455000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17786
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17786/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17786/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17786/events
|
https://github.com/huggingface/transformers/pull/17786
| 1,278,123,549
|
PR_kwDOCUB6oc46Az1B
| 17,786
|
add doctests for DETR
|
{
"login": "qherreros",
"id": 7406885,
"node_id": "MDQ6VXNlcjc0MDY4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7406885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qherreros",
"html_url": "https://github.com/qherreros",
"followers_url": "https://api.github.com/users/qherreros/followers",
"following_url": "https://api.github.com/users/qherreros/following{/other_user}",
"gists_url": "https://api.github.com/users/qherreros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qherreros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qherreros/subscriptions",
"organizations_url": "https://api.github.com/users/qherreros/orgs",
"repos_url": "https://api.github.com/users/qherreros/repos",
"events_url": "https://api.github.com/users/qherreros/events{/privacy}",
"received_events_url": "https://api.github.com/users/qherreros/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Good to merge for me!"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Enable doctests for DETR
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17786/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17786",
"html_url": "https://github.com/huggingface/transformers/pull/17786",
"diff_url": "https://github.com/huggingface/transformers/pull/17786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17786.patch",
"merged_at": 1655983575000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17785
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17785/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17785/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17785/events
|
https://github.com/huggingface/transformers/pull/17785
| 1,277,460,975
|
PR_kwDOCUB6oc45-iYN
| 17,785
|
Add final_layer_norm to OPT model
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for the fix! I think that we'll have to change the generation tests a bit for the other models as well ",
"Great find @thomasw21 - thanks a lot for fixing it!\r\n\r\nThink the checkpoints were then also incorrectly loaded inside the metaseq codebase - could you maybe double check that the following script gives identical results between fairseq and transformers: https://huggingface.co/patrickvonplaten/opt_metaseq_125m -> The logits should match there (maybe an incorrect configuration in the metaseq model?)\r\n\r\nAlso could you please update the slow model tests?",
"@thomasw21 I can update the tests and check the outputs if you want ",
"@patrickvonplaten from what I understood logits comparison equality test were only done in 350m? @younesbelkada \r\nI actually manually converted `restored.pt` from https://huggingface.co/patrickvonplaten/opt_metaseq_125m using the updated conversion script.\r\n\r\n@ArthurZucker if you have the bandwidth, I'd appreciate it! Thanks!",
"@patrickvonplaten Yep I've looked at the changes with your comment, feel free to merge those : D",
"When releasing the patch can we merge at the same time #17437 ? The problem of NaNs for batched generation still persists with this fix, but is resolved with #17437 ",
"BTW @patrickvonplaten do you have the expected values for the slow test? ",
"> BTW @patrickvonplaten do you have the expected values for the slow test?\r\n\r\nCorrected the tests as well now",
"Good job @thomasw21 !"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #17653, #17545
OPT models have a final_layer_norm: https://github.com/facebookresearch/metaseq/blob/e0c4f6b0e4c523906ad8d561f727e3f2ac3a8e73/metaseq/models/transformer.py#L466-L477
So we update the HF models + conversion script to take into account that missing layer norm.
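As a rough illustration of what the previously dropped layer does, here is a minimal layer-norm sketch (unscaled, no learned weight/bias — the real OPT `final_layer_norm` is an `nn.LayerNorm` with learned parameters, applied to the decoder output before the LM head):

```python
import math

def layer_norm(x, eps=1e-5):
    """Minimal layer norm over the last dimension (no learned scale/shift)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

# hypothetical last hidden state of one token, before the LM head projection
hidden = [2.0, 4.0, 6.0, 8.0]
normalized = layer_norm(hidden)
print([round(v, 3) for v in normalized])  # [-1.342, -0.447, 0.447, 1.342]
```

Without this normalization the logits (and hence perplexity) come out wrong, which is what the fix addresses.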
Test on OPT-125m (`restored.pt` file from `patrickvonplaten/opt_metaseq_125m`):
```
>>> model_path="fixed_opt_125m"
>>> prompt="Hello my name is"
>>> log_probs_with_ppl(model_path, prompt)
Input torch.Size([1, 5])
Logits torch.Size([1, 5, 50272])
torch.return_types.max(
values=tensor([[0.2398, 0.2326, 0.3332, 0.9363, 0.0097]], grad_fn=<MaxBackward0>),
indices=tensor([[ 100, 6, 766, 16, 1236]]))
argmax probility: [[0.23982257 0.23258895 0.33315504 0.9362957 0.00967377]]
argmax log probability: [[-1.4278558 -1.4584825 -1.0991473 -0.06582398 -4.6383367 ]]
argmax tokens: I, name is j
cross entropy loss: 4.051314830780029
ppl: 57.47297286987305
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17785/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17785/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17785",
"html_url": "https://github.com/huggingface/transformers/pull/17785",
"diff_url": "https://github.com/huggingface/transformers/pull/17785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17785.patch",
"merged_at": 1655835996000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17784
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17784/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17784/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17784/events
|
https://github.com/huggingface/transformers/pull/17784
| 1,277,416,861
|
PR_kwDOCUB6oc45-Yte
| 17,784
|
Flax t5 Encoder
|
{
"login": "crystina-z",
"id": 31640436,
"node_id": "MDQ6VXNlcjMxNjQwNDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/31640436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crystina-z",
"html_url": "https://github.com/crystina-z",
"followers_url": "https://api.github.com/users/crystina-z/followers",
"following_url": "https://api.github.com/users/crystina-z/following{/other_user}",
"gists_url": "https://api.github.com/users/crystina-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crystina-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crystina-z/subscriptions",
"organizations_url": "https://api.github.com/users/crystina-z/orgs",
"repos_url": "https://api.github.com/users/crystina-z/repos",
"events_url": "https://api.github.com/users/crystina-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/crystina-z/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @crystina-z, \r\n\r\nThe PR looks very nice already - think we're close to merging it! Great job so far :-) A FlaxT5Encoder model class is very useful (also to build Diffusion Pipelines like Imagen cc @borisdayma @patil-suraj ) \r\n\r\nSome tests are currently failing because the branch of the PR is not up to date with Transformers' main branch I think. \r\n\r\nCould you try merging the `main` branch into your PR, e.g.:\r\n\r\n```\r\ngit pull upstream main\r\n```\r\n\r\n(if you called the remote `upstream`) ",
"Let me know if you need help with anything, otherwise I think we can do a final review once the Circle CI is green :-)",
"Hi @patrickvonplaten , thanks for comment! Lemme try merge it with main first then.\r\n\r\nThere are indeed some tests failure I'm not sure about the reason now, like `jax.errors.ConcretizationTypeError` that happens when running `tests/models/t5/test_modeling_flax_t5.py` in my local environment. might need some help later if merging with main not solves it and I can't still find the reason later :P",
"Hi @patil-suraj, @patrickvonplaten. I do have two questions regarding to the failing CI tests - \r\n1. under the `run_tests_tf`, it shows `FAILED tests/models/mobilebert/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings\r\n` [here](https://app.circleci.com/pipelines/github/huggingface/transformers/42780/workflows/5d5e1e0f-0561-4bd2-a3ac-7dbc520a8d73/jobs/492835?invite=true#step-111-4402) without specific failing reason. however, when running the test locally, the test seems passed, tho with some warning cases. Wonder if you have any idea why it's happening?\r\n```\r\n$ pytest tests/models/mobilebert/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings \r\n \r\n=============================================================== test session starts =============================================================== \r\nplatform linux -- Python 3.7.13, pytest-7.1.2, pluggy-1.0.0 rootdir: /scratch/czhang/src/task-mrtydi/mrtydi-ood/transformers, configfile: setup.cfg \r\nplugins: mock-3.6.1, typeguard-2.12.1, xdist-2.5.0, hypothesis-6.47.3, dash-2.5.1, timeout-2.1.0, forked-1.4.0 \r\ncollected 1 item \r\n \r\ntests/models/mobilebert/test_modeling_tf_mobilebert.py . 
[100%] \r\n \r\n================================================================ warnings summary ================================================================= \r\n../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/flatbuffers/compat.py:19 \r\n /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses \r\n import imp \r\n ../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:36 /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead. \r\n 'nearest': pil_image.NEAREST, \r\n \r\n../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:37 \r\n /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead. \r\n 'bilinear': pil_image.BILINEAR, \r\n \r\n../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:38 \r\n /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead. 
\r\n 'bicubic': pil_image.BICUBIC, \r\n \r\n../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:39 \r\n /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead. \r\n 'hamming': pil_image.HAMMING, \r\n \r\n../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:40 \r\n /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead. \r\n 'box': pil_image.BOX, \r\n \r\n../../../../../../home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:41 \r\n /home/czhang/miniconda3/envs/transformers/lib/python3.7/site-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. \r\n 'lanczos': pil_image.LANCZOS, \r\n \r\nsrc/transformers/modeling_tf_utils.py:575 \r\n /scratch/czhang/src/task-mrtydi/mrtydi-ood/transformers/src/transformers/modeling_tf_utils.py:575: DeprecationWarning: invalid escape sequence \\d \r\n bit_search = re.search(\"[^\\d](\\d+)$\", dtype.name) \r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html \r\n==================================================== 1 passed, 8 warnings in 265.15s (0:04:25) ==================================================== \r\n```\r\n\r\n2. under the `check_repository_consistency`, it says public documentation is needed for `FlaxMT5EncoderModel ` and `FlaxT5EncoderModel` [here](https://github.com/huggingface/transformers/runs/7014080791?check_suite_focus=true#step:9:50). 
While in the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs#generating-the-documentation), it says we only need to build documentations locally for inspection but not to commit them - \r\n> You only need to generate the documentation to inspect it locally (if you're planning changes and want to check how they look like before committing for instance). You don't have to commit the built documentation.\r\n\r\nI'm a bit confused about how should I deal with the documentation files. and if we'll need to update the documentation, do we just copy the contents under `~/tmp/test-build` to `docs/source/`?\r\n\r\nThanks so much in advance!",
"Hey @crystina-z \r\n\r\n> 1. under the run_tests_tf, it shows FAILED\r\n\r\nYou can ignore the `run_tests_tf` since it's unrelated, rebasing should fix this.\r\n\r\n> I'm a bit confused about how should I deal with the documentation files. and if we'll need to update the documentation, do we just copy the contents under ~/tmp/test-build to docs/source/?\r\n\r\nThis means we need to add `FlaxT5EncoderModel ` and `FlaxMT5EncoderModel` in the [t5.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/t5.mdx) and [mt5.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/mt5.mdx) files respectively.\r\n",
"Hi @patrickvonplaten @patil-suraj All tests passed! Also updated all comments above. lmk if there's anything else to change!",
"That's great! Reviewing now :-)",
"Nice! Thank you all for reviewing as well! Just wonder when could I expect this PR to be merged? @patrickvonplaten @sanchit-gandhi ",
"Thanks for being patient @crystina-z. Let's wait for @patil-suraj to give his final review then we can merge!",
"Awesome sg, thanks!",
"Good to merge I think - @patil-suraj feel free to leave comments if you don't like something, but overall it's good to merge :-) \r\n\r\nGreat job @crystina-z !!!"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
I noticed that there isn't yet a Flax implementation of T5EncoderModel, so I'm taking a stab at adding it. I didn't find an issue related to this.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
In terms of the test, I wrote it under `tests/models/t5/test_modeling_flax_t5.py`, but it seems I'm having trouble running it on my machine so far, although I can import the `FlaxT5EncoderModel` externally and use it successfully. I'm working on that, and just want to send the PR first to know if there's more I should add.
## Who can review?
t5: @patrickvonplaten, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17784/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17784/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17784",
"html_url": "https://github.com/huggingface/transformers/pull/17784",
"diff_url": "https://github.com/huggingface/transformers/pull/17784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17784.patch",
"merged_at": 1656542942000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17783
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17783/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17783/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17783/events
|
https://github.com/huggingface/transformers/pull/17783
| 1,277,414,333
|
PR_kwDOCUB6oc45-YNt
| 17,783
|
Add missing type hints for QDQBertModel
|
{
"login": "willtai",
"id": 20279061,
"node_id": "MDQ6VXNlcjIwMjc5MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20279061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willtai",
"html_url": "https://github.com/willtai",
"followers_url": "https://api.github.com/users/willtai/followers",
"following_url": "https://api.github.com/users/willtai/following{/other_user}",
"gists_url": "https://api.github.com/users/willtai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willtai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willtai/subscriptions",
"organizations_url": "https://api.github.com/users/willtai/orgs",
"repos_url": "https://api.github.com/users/willtai/repos",
"events_url": "https://api.github.com/users/willtai/events{/privacy}",
"received_events_url": "https://api.github.com/users/willtai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, this looks really good! Would you be willing to add type hints to the other model classes in the same file too? (`QDQBertLMHeadModel` and the classes that start with `QDQBertFor...`)",
"@Rocketknight1 I've pushed a change that also adds type hints for `QDQBertLMHeadModel` and the classes that start with `QDQBertFor...`. I've also removed `config` which was unused parameter, but left the other classes untouched, as those seem a bit tricky to type hint for now",
"@Rocketknight1 I am getting the following error for tests\r\n\r\n```\r\nE ImportError: cannot import name 'login' from 'huggingface_hub' (/home/circleci/.local/lib/python3.7/site-packages/huggingface_hub/__init__.py)\r\n```\r\n\r\nIs there a dependency change for huggingface_hub perhaps?\r\n\r\nAlso, I have removed the type hints for the config objects passed as `python utils/check_copies.py` fails due to line above `class QDQBertEmbeddings`\r\n```\r\n# Copied from transformers.models.bert.modeling_bert.BertEmbeddings with Bert -> QDQBert\r\n```",
"@willtai I suspect something like that is the problem, yes. Can you try:\r\n\r\n1) Pulling upstream commits from `transformers` to your repository's `main` branch (you can do this in the Github interface)\r\n2) Pull those changes from Github to your local machine's `main` branch\r\n3) Rebase your PR branch onto `main` locally\r\n4) Force push (`push -f`) your PR branch to Github\r\n\r\nThis should resolve those issues if they're caused by a recent code change ",
"This is perfect, thank you! The rebase looks clean and all tests are passing, so I'm happy to merge it now."
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding missing type hints for QDQBertModel, as referenced in this issue: https://github.com/huggingface/transformers/issues/16059#issuecomment-1160830014.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
EDIT: Anyone feel free to review too 😄
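The style of annotation added in a PR like this looks roughly as follows (a hypothetical, simplified signature for illustration — not the actual `QDQBertModel.forward` API, whose arguments are tensors):

```python
from typing import Optional, Tuple

def forward(
    input_ids: Optional[Tuple[int, ...]] = None,
    attention_mask: Optional[Tuple[int, ...]] = None,
    return_dict: Optional[bool] = None,
) -> Tuple[Optional[Tuple[int, ...]], Optional[bool]]:
    # illustrative body only; the real method runs the model forward pass
    return input_ids, return_dict

print(forward(input_ids=(1, 2, 3)))  # ((1, 2, 3), None)
```

Annotating every argument as `Optional[...] = None` mirrors the convention used across the library's model classes.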
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17783/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17783",
"html_url": "https://github.com/huggingface/transformers/pull/17783",
"diff_url": "https://github.com/huggingface/transformers/pull/17783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17783.patch",
"merged_at": 1655985524000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17782
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17782/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17782/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17782/events
|
https://github.com/huggingface/transformers/issues/17782
| 1,277,089,981
|
I_kwDOCUB6oc5MHti9
| 17,782
|
TF element-wise equals requires tf.equals() instead of ==
|
{
"login": "ekayen",
"id": 25679936,
"node_id": "MDQ6VXNlcjI1Njc5OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/25679936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekayen",
"html_url": "https://github.com/ekayen",
"followers_url": "https://api.github.com/users/ekayen/followers",
"following_url": "https://api.github.com/users/ekayen/following{/other_user}",
"gists_url": "https://api.github.com/users/ekayen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekayen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekayen/subscriptions",
"organizations_url": "https://api.github.com/users/ekayen/orgs",
"repos_url": "https://api.github.com/users/ekayen/repos",
"events_url": "https://api.github.com/users/ekayen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekayen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @ekayen 👋 \r\n\r\n`self.tokenizer.mask_token_id` (on [L56 in fill_mask.py](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/fill_mask.py#L56)) should be a single integer, and `tf.where(tensor == integer)` works well in TF1 and TF2 (see the examples below)\r\n\r\n```python\r\nimport tensorflow as tf\r\na = tf.range(10)\r\nb = tf.where(a == 5)\r\nprint(b)\r\n```\r\n\r\n```python\r\nimport tensorflow.compat.v1 as tf\r\na = tf.range(10)\r\nb = tf.where(a == 5)\r\nprint(b)\r\n```\r\n\r\nFrom the error message, we can read \"... Unhandled input dimensions: 0\". Can you confirm that your tokenizer has `tokenizer.mask_token_id` set? That would be my biggest suspicion :)\r\n\r\nP.S.:\r\n1 - `transformers` does not support TF1 behavior -- it is possible that things break, and we won't be able to provide help in that situation :( I'd highly recommend updating your project to TF2, if you have the resources to do so.\r\n2 - Our bandwidth as maintainers of a large project is very limited. When opening an issue, if you can provide an example that can run on any machine (and that does not depend on local files), the odds of getting a useful response are much higher 🙌 \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
Running in a Google Colab with GPU backend.
```
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running the 'fill-mask' pipeline:
```
import tensorflow.compat.v1 as tf
import tensorflow_datasets
import transformers
from transformers import pipeline, set_seed
# Load tokenizer and model from presaved locations:
tokenizer = transformers.GPT2Tokenizer.from_pretrained('/path/to/pretrained/GPT2/')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = transformers.TFGPT2LMHeadModel.from_pretrained('/path/to/pretrained/GPT2/')
# Create pipeline with this model and tokenizer:
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
print(unmasker("Hello I'm a [MASK] model"))
```
This produces the following error:
`{{function_node __wrapped__Where_device_/job:localhost/replica:0/task:0/device:GPU:0}} WhereOp: Unhandled input dimensions: 0 [Op:Where] `
My best guess is that this is because of line 56 of fill_mask.py:
`masked_index = tf.where(input_ids == self.tokenizer.mask_token_id).numpy()`
TF requires `tf.equal(tensor1, tensor2)` instead of `==`. This bug may occur elsewhere as well -- I haven't checked exhaustively.
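For reference, the lookup on that line amounts to finding the position of the mask token in the input ids. A pure-Python analogue (the ids and mask id below are made up; in TF2 and NumPy, `tensor == scalar` broadcasts element-wise, so `tf.where(input_ids == mask_token_id)` is valid when `mask_token_id` is set):

```python
mask_token_id = 103  # hypothetical mask token id (e.g. BERT's [MASK])
input_ids = [101, 2003, 103, 2023, 102]

# element-wise comparison, then collect the matching indices
masked_index = [i for i, tok in enumerate(input_ids) if tok == mask_token_id]
print(masked_index)  # [2]
```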
### Expected behavior
```shell
A dictionary object with the keys `sequence`, `score`, and `token` is printed.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17782/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17781
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17781/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17781/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17781/events
|
https://github.com/huggingface/transformers/issues/17781
| 1,277,057,862
|
I_kwDOCUB6oc5MHltG
| 17,781
|
KeyError: 'src_texts' in train_distil_marian_enro.sh
|
{
"login": "Bachstelze",
"id": 19904888,
"node_id": "MDQ6VXNlcjE5OTA0ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bachstelze",
"html_url": "https://github.com/Bachstelze",
"followers_url": "https://api.github.com/users/Bachstelze/followers",
"following_url": "https://api.github.com/users/Bachstelze/following{/other_user}",
"gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions",
"organizations_url": "https://api.github.com/users/Bachstelze/orgs",
"repos_url": "https://api.github.com/users/Bachstelze/repos",
"events_url": "https://api.github.com/users/Bachstelze/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bachstelze/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Same issue",
"pip install transformers==4.15.0"
] | 1,655
| 1,675
| 1,659
|
NONE
| null |
### System Info
```shell
0% 0/6 [00:00<?, ?it/s][INFO|trainer_utils.py:686] 2022-06-20 14:53:01,661 >> The following columns in the training set don't have a corresponding argument in `MarianMTModel.forward` and have been ignored: id, src_texts, tgt_texts. If id, src_texts, tgt_texts are not expected by `MarianMTModel.forward`, you can safely ignore this message.
Traceback (most recent call last):
File "transformers/examples/legacy/seq2seq/finetune_trainer.py", line 375, in <module>
main()
File "transformers/examples/legacy/seq2seq/finetune_trainer.py", line 313, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1413, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1625, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 530, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 570, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer_utils.py", line 696, in __call__
return self.data_collator(features)
File "/content/transformers/examples/legacy/seq2seq/utils.py", line 298, in __call__
batch = self._encode(batch)
File "/content/transformers/examples/legacy/seq2seq/utils.py", line 334, in _encode
[x["src_texts"] for x in batch],
File "/content/transformers/examples/legacy/seq2seq/utils.py", line 334, in <listcomp>
[x["src_texts"] for x in batch],
KeyError: 'src_texts'
0% 0/6 [00:00<?, ?it/s]
```
### Who can help?
Model Marian: @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
./examples/legacy/seq2seq/train_distil_marian_enro.sh
### Expected behavior
```shell
fine-tune the marian model
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17781/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17781/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17780
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17780/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17780/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17780/events
|
https://github.com/huggingface/transformers/pull/17780
| 1,277,036,234
|
PR_kwDOCUB6oc459IUM
| 17,780
|
Use 5e-5 For BigBird PT/Flax equivalence tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
Use `5e-5` For BigBird PT/Flax equivalence tests to avoid flaky test failure.
Also change the name `check_outputs` to `check_pt_flax_outputs` (similar to `check_pt_tf_outputs`) and update its logic similar to `check_pt_tf_outputs`.
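For illustration, the kind of tolerance comparison these equivalence tests perform might look like the sketch below — a simplified check over flat float lists; the real `check_pt_flax_outputs` recurses over nested model output structures:

```python
def check_outputs_close(pt_out, flax_out, tol=5e-5):
    # Simplified sketch of a PT/Flax equivalence check: compare flat lists
    # of floats against an absolute tolerance. The real helper recurses
    # through nested output structures and tensors.
    diff = max(abs(p - f) for p, f in zip(pt_out, flax_out))
    assert diff <= tol, f"max diff {diff:.2e} exceeds tolerance {tol:.0e}"

check_outputs_close([0.12345, -0.98765], [0.12347, -0.98763])  # within 5e-5
```

Raising the tolerance from `1e-5` to `5e-5` simply widens the band accepted by this comparison, which is why it removes the flaky failures without changing the models.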
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17780/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17780",
"html_url": "https://github.com/huggingface/transformers/pull/17780",
"diff_url": "https://github.com/huggingface/transformers/pull/17780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17780.patch",
"merged_at": 1655826926000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17779
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17779/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17779/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17779/events
|
https://github.com/huggingface/transformers/pull/17779
| 1,276,674,217
|
PR_kwDOCUB6oc4576BZ
| 17,779
|
Add DPT Flax
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"All the keys match now but the equivalency test does not pass with `1e-5` but `1e-4` instead",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17779). All of your documentation changes will be reflected on that endpoint.",
"Would be great to also incorporate the updates of #17731 ",
"The Flax model finally predicts the correct depths for the cats (left is Flax and right is Pytorch)! \r\n\r\nFor that it appears that the transpose conv does not give the same result as Pytorch's implementation that uses a gradient based operation. I fixed it by creating a custom function based on this PR: https://github.com/google/jax/pull/5772 the PR does not seem to be merged soon. We can probably go for this hack for now until the PR in JAX gets merged \r\n",
"As we discussed, it seems that `align_corners` set to `False` for both models would not require lowering the tolerance in one of the cases, right? \r\n",
"@ArthurZucker exact. I have put a new attribute in the `DPTConfig` and modified a bit the original modeling code but should not break backward compatibility. Now all tests pass with a tolerance of `1e-5` ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Thank you very much @sanchit-gandhi for the very detailed review! I had a second round of refactoring while catching up on Flax projects and would love to have a second round of review (left also some unresolved comments) 💪 Thanks again 🙏 \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
I tried to implement DPT (Dense Prediction with Transformers) in Flax during my free time! 🚀
By the way, it is the first segmentation and depth estimation model implemented in Flax in the library!
Nits/TODOs:
- [x] Figure out how to properly call `BatchNorm` and `Dropout` inside a `Sequential`
- [x] Deal correctly with `Sequential` layers
- [x] Test equivalency tests
- [ ] Write documentation - For now they're just copy/pasted
Questions:
- Why is the loss not implemented in `modeling_dpt.py`? I can probably help with that since I have already implemented the loss for a university project: https://github.com/antocad/FocusOnDepth/blob/master/FOD/Loss.py
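For reference, a common depth-estimation objective is the scale-invariant log loss of Eigen et al. (2014); the plain-Python sketch below illustrates that idea and is not necessarily the exact loss used in the linked repository:

```python
import math

def silog_loss(pred, target, lam=0.5, eps=1e-6):
    # Scale-invariant log loss (Eigen et al., 2014): penalize log-depth
    # differences while discounting a global scale shift via the lam term.
    # A hedged sketch, not the FocusOnDepth implementation.
    d = [math.log(p + eps) - math.log(t + eps) for p, t in zip(pred, target)]
    n = len(d)
    return sum(x * x for x in d) / n - lam * (sum(d) / n) ** 2

print(silog_loss([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))  # → 0.0 for a perfect prediction
```

Note the loss is zero for a perfect prediction and stays non-negative for `lam <= 1`, since the mean of squared log-differences bounds the squared mean.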
cc @NielsRogge @sanchit-gandhi @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17779/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17779/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17779",
"html_url": "https://github.com/huggingface/transformers/pull/17779",
"diff_url": "https://github.com/huggingface/transformers/pull/17779.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17779.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17778
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17778/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17778/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17778/events
|
https://github.com/huggingface/transformers/issues/17778
| 1,276,390,344
|
I_kwDOCUB6oc5MFCvI
| 17,778
|
Dataset Format for training RAG on custom Dataset
|
{
"login": "mdshah930",
"id": 46818925,
"node_id": "MDQ6VXNlcjQ2ODE4OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/46818925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mdshah930",
"html_url": "https://github.com/mdshah930",
"followers_url": "https://api.github.com/users/mdshah930/followers",
"following_url": "https://api.github.com/users/mdshah930/following{/other_user}",
"gists_url": "https://api.github.com/users/mdshah930/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mdshah930/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mdshah930/subscriptions",
"organizations_url": "https://api.github.com/users/mdshah930/orgs",
"repos_url": "https://api.github.com/users/mdshah930/repos",
"events_url": "https://api.github.com/users/mdshah930/events{/privacy}",
"received_events_url": "https://api.github.com/users/mdshah930/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,655
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-4.14.219-164.354.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.9
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce error - No format mentioned for data preparation for training RAG on custom dataset.
[link to script](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/finetune_rag.py)
[broken link to format in which data needs to be prepared](https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag#:~:text=Our%20finetuning%20logic%20is%20based%20on%20scripts%20from%20examples/seq2seq.%20We%20accept%20training%20data%20in%20the%20same%20format%20as%20specified%20there%20%2D%20we%20expect%20a%20directory%20consisting%20of%206%20text%20files%3A)
### Expected behavior
```shell
https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/finetune_rag.py
This is a script to fine tune rag on a custom dataset, however the link mentioned on github (format in which data needs to be prepared if training on custom dataset) is broken. Please let me know in what format do I have to prepare my training data if I want to train RAG on a custom dataset
https://github.com/huggingface/transformers/tree/main/examples/research_projects/rag#:~:text=Our%20finetuning%20logic%20is%20based%20on%20scripts%20from%20examples/seq2seq.%20We%20accept%20training%20data%20in%20the%20same%20format%20as%20specified%20there%20%2D%20we%20expect%20a%20directory%20consisting%20of%206%20text%20files%3A
```
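For what it's worth, the broken link's anchor text says the legacy `examples/seq2seq` format is a directory of six text files. The layout below follows the legacy seq2seq convention (line-aligned `.source`/`.target` pairs per split) and should be treated as an assumption until the README link is fixed:

```shell
# Assumed legacy examples/seq2seq layout: six line-aligned plain-text files.
mkdir -p data_dir
printf 'What is RAG?\n' > data_dir/train.source
printf 'Retrieval-augmented generation.\n' > data_dir/train.target
for split in val test; do
  printf 'example question\n' > "data_dir/$split.source"
  printf 'example answer\n' > "data_dir/$split.target"
done
ls data_dir  # train/val/test .source and .target pairs
```

Each line of a `.source` file is one input example, and the same line number in the matching `.target` file is its reference output.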
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17778/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17777
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17777/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17777/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17777/events
|
https://github.com/huggingface/transformers/pull/17777
| 1,276,139,419
|
PR_kwDOCUB6oc456Jxi
| 17,777
|
For model long-t5-tglobal-x, fix 'float' object cannot be interpreted as an integer
|
{
"login": "bjascob",
"id": 22728060,
"node_id": "MDQ6VXNlcjIyNzI4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/22728060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjascob",
"html_url": "https://github.com/bjascob",
"followers_url": "https://api.github.com/users/bjascob/followers",
"following_url": "https://api.github.com/users/bjascob/following{/other_user}",
"gists_url": "https://api.github.com/users/bjascob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bjascob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bjascob/subscriptions",
"organizations_url": "https://api.github.com/users/bjascob/orgs",
"repos_url": "https://api.github.com/users/bjascob/repos",
"events_url": "https://api.github.com/users/bjascob/events{/privacy}",
"received_events_url": "https://api.github.com/users/bjascob/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patil-suraj could you maybe take a look here? :-)\r\n\r\nAlso cc @stancld in case you're interested and have an idea what the problem could be",
"Interesting. Looks like this is a change in python, not torch. UBT 22.04 uses Python 3.10.4 and this is fully broken for that version.",
"Great looks good to me than as well!"
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
On line 180, `torch.tensor(-1.0, dtype=global_block_ids.dtype)` gives the error __TypeError: 'float' object cannot be interpreted as an integer__ . This is because the dtype here is `int64`. For `dtype=int64`, this needs to simply be `-1`.
This impacts the `long-t5-tglobal-x` model. It does not impact the `long-t5-local-x` version, which does not appear to call this line in the code.
The torch version where I see this is 1.11.0+cu113. I'm not certain whether older or non-GPU versions of torch allowed this, but 1.11.0+cu113 does not.
Note that torch does not complain when casting an int to a float so it should be safe to change this to `-1` even if there are occasions where `global_block_ids.dtype` is a float.
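As a follow-up comment notes, this is a Python-level change rather than a torch one: integer contexts go through the `__index__` protocol, which floats do not implement (the claim that the tensor constructor surfaces this exact path is an assumption about torch internals). A stdlib illustration of the same error message:

```python
import operator

# ints satisfy the __index__ protocol used in integer contexts...
print(operator.index(-1))

# ...but floats do not, which produces the same message the int64 tensor
# constructor surfaces on newer Python versions.
try:
    operator.index(-1.0)
except TypeError as err:
    print(err)  # 'float' object cannot be interpreted as an integer
```

This is why passing the int literal `-1` fixes the construction while remaining safe in float contexts, where ints are silently widened.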
# What does this PR do?
Fixes # (no issue # created).
There is a simple error in the code where torch fails when trying to create a constant int64 tensor using `-1.0` instead of `-1`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
This model is new. I would suggest someone from the original upload team review this. Here are the first 3 in the file history..
@stancld, @PhungVanDuy, @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17777/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17777",
"html_url": "https://github.com/huggingface/transformers/pull/17777",
"diff_url": "https://github.com/huggingface/transformers/pull/17777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17777.patch",
"merged_at": 1655743749000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17776
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17776/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17776/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17776/events
|
https://github.com/huggingface/transformers/pull/17776
| 1,276,087,063
|
PR_kwDOCUB6oc455_8W
| 17,776
|
Nezha Pytorch implementation
|
{
"login": "sijunhe",
"id": 11987277,
"node_id": "MDQ6VXNlcjExOTg3Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11987277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sijunhe",
"html_url": "https://github.com/sijunhe",
"followers_url": "https://api.github.com/users/sijunhe/followers",
"following_url": "https://api.github.com/users/sijunhe/following{/other_user}",
"gists_url": "https://api.github.com/users/sijunhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sijunhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sijunhe/subscriptions",
"organizations_url": "https://api.github.com/users/sijunhe/orgs",
"repos_url": "https://api.github.com/users/sijunhe/repos",
"events_url": "https://api.github.com/users/sijunhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/sijunhe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"ready for review! I'll upload the rest of the pre-trained models later today",
"addressed all the comments from @sgugger and uploaded the two remaining models. Ready for a final round of review.",
"Thanks again for your contribution!"
] | 1,655
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a PyTorch implementation of the NEZHA model to transformers. [NEZHA](https://arxiv.org/abs/1909.00204) was introduced by Huawei Noah's Ark Lab in late 2019 and is widely used in the Chinese NLP community. This implementation is based on the official PyTorch implementation of NEZHA and the current BERT PyTorch implementation. The model checkpoints are also from the [official implementation](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow).
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Since the model is quite similar to bert, maybe @LysandreJik?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17776/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17776/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17776",
"html_url": "https://github.com/huggingface/transformers/pull/17776",
"diff_url": "https://github.com/huggingface/transformers/pull/17776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17776.patch",
"merged_at": 1656002183000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17775
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17775/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17775/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17775/events
|
https://github.com/huggingface/transformers/pull/17775
| 1,276,022,222
|
PR_kwDOCUB6oc4550UW
| 17,775
|
Translation troubleshooting 17459
|
{
"login": "F02934",
"id": 56677617,
"node_id": "MDQ6VXNlcjU2Njc3NjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/56677617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/F02934",
"html_url": "https://github.com/F02934",
"followers_url": "https://api.github.com/users/F02934/followers",
"following_url": "https://api.github.com/users/F02934/following{/other_user}",
"gists_url": "https://api.github.com/users/F02934/gists{/gist_id}",
"starred_url": "https://api.github.com/users/F02934/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/F02934/subscriptions",
"organizations_url": "https://api.github.com/users/F02934/orgs",
"repos_url": "https://api.github.com/users/F02934/repos",
"events_url": "https://api.github.com/users/F02934/events{/privacy}",
"received_events_url": "https://api.github.com/users/F02934/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@mfumanelli Hi, I don't know why it contains commits from previous pull request.",
"I will remake it because I made a mistake with the branch @mfumanelli "
] | 1,655
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
-->
<!-- Remove if not applicable -->
Fixes # (issue) Translation in Italian of Troubleshooting 17459
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfumanelli
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17775/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17775",
"html_url": "https://github.com/huggingface/transformers/pull/17775",
"diff_url": "https://github.com/huggingface/transformers/pull/17775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17775.patch",
"merged_at": null
}
|