| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/17574
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17574/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17574/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17574/events
|
https://github.com/huggingface/transformers/pull/17574
| 1,261,959,192
|
PR_kwDOCUB6oc45LFSF
| 17,574
|
[generate] return past_key_values
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17574). All of your documentation changes will be reflected on that endpoint.",
"We'll just need to fix the failing tests now :-) Think you'll have to overwrite this \"checking\" function in the respective individual test files",
"Hey there, sorry to nag, but any chance of moving this along? Anything I can do to help?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"(@patrickvonplaten @patil-suraj should I take over this PR? :) )",
"If ok for you @gante this would be amazing!",
"Hi, Thank you all for working on this feature! Is this going to be merged into the main branch soon?",
"@shunzh I haven't started working on it and it's hard to give estimates -- hopefully less than a month :)",
"Was this closed because it's now possible to retrieve `past_key_values` or was there another reason?",
"@gilljon it is not closed :)",
"@gante I'm sorry for the confusion! Any idea when it will be merged?",
"hi @gante. Any idea when this will be merged? Interested in using it and building something on top of it. I'll be happy to put on the finishing touches if needed too!",
"Hey! Just a friendly reminder. Any chance to get it merged soon?",
"I would absolutely **love** this feature! This would open up so much for me, because I have prompts like:\r\n\r\n```\r\nprompt = '''\r\nStuff\r\n* <generate X>\r\n* <generate Y>\r\n\r\nStuff\r\nYou said [X], and [Y] previously, now:\r\n* <generate Z>\r\n'''\r\n```\r\n\r\nThis is so expensive without `past_key_values`.\r\n\r\nSo this PR is now Merge-Conflicting, and I tried applying the patch but upon inspection, it's quite severely out of date now.\r\n\r\n**Is there another way to accomplish this?**\r\n\r\nI notice that `model.forward` typically allows to return `past_key_values`. But then I... have to make use of a sampling alg myself? Would this be the best way without needing upstream changes, and if so, how can I chain together `model.forward` and a sampler?\r\n\r\n**EDIT**: IIUC, `generation_utils` is where `model.generate` comes from, so the new place to make these edits is: https://github.com/huggingface/transformers/blob/0b192de1f353b0e04dad4813e02e2c672de077be/src/transformers/generation/utils.py#L1301",
"Is this ticket dead because some other technique exists already for returning and reusing `past_key_values`? This is a killer feature.",
"The following PR is more up to date: https://github.com/huggingface/transformers/pull/25086",
"(deprecated in favor of #25086)",
"Hey folks 👋 \r\n\r\n#25086 was merged.\r\n\r\nIf you install from `main` and add `return_dict_in_generate=True` to `generate`, `past_key_values` will be part of the output, assuming your model is configured with `use_cache=True` (the default).\r\n\r\nYou can then pass `past_key_values` to `generate` to continue generating!",
"I can't get it to work with Intel neural_chat. What version was this on?\r\n"
] | 1,654
| 1,701
| 1,697
|
MEMBER
| null |
# What does this PR do?
Allows returning `past_key_values` from `generate` when `use_cache=True`.
Like other returned values, `past_key_values` is also returned as a `Tuple`, with one element per generated token.
Fixes #17016
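Illustrative of the return structure described above: a minimal pure-Python mock (the string "tensors" and the `latest_cache` helper are stand-ins for illustration, not code from this PR) showing how a per-token `Tuple` of caches could be consumed.

```python
# Hypothetical sketch, not the actual transformers API: the PR describes
# `past_key_values` being returned as a tuple with one element per generated
# token; each element here is itself a per-layer (key, value) tuple, mocked
# with strings in place of tensors.
from typing import Tuple


def latest_cache(past_key_values: Tuple[tuple, ...]) -> tuple:
    """Return the cache entry for the most recently generated token."""
    if not past_key_values:
        raise ValueError("no cached key/values were returned")
    return past_key_values[-1]


# Mock output for 3 generated tokens of a 2-layer model.
mock_pkv = (
    (("k0_l0", "v0_l0"), ("k0_l1", "v0_l1")),
    (("k1_l0", "v1_l0"), ("k1_l1", "v1_l1")),
    (("k2_l0", "v2_l0"), ("k2_l1", "v2_l1")),
)
cache = latest_cache(mock_pkv)  # the cache after the final generated token
```

In practice the last element is the one you would feed back into the model to continue generation without recomputing attention over the prompt.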
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17574/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17574",
"html_url": "https://github.com/huggingface/transformers/pull/17574",
"diff_url": "https://github.com/huggingface/transformers/pull/17574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17574.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17573
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17573/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17573/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17573/events
|
https://github.com/huggingface/transformers/pull/17573
| 1,261,751,105
|
PR_kwDOCUB6oc45KX4a
| 17,573
|
Auto-build Docker images before on-merge if setup.py was changed
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Do not merge for now. The push CI in transformers is somehow tricky, after the recent change in #17369.\r\nI will review tomorrow, but I think some changes have to be made.",
"@muellerzr Thank you for the PR. \r\n\r\nThe most direct approach would be\r\n\r\nIntegrate the check `check-for-setup` and `build-docker-containers` in \r\nhttps://github.com/huggingface/transformers/blob/main/.github/workflows/self-push-caller.yml\r\n(before the job `run_push_ci`)\r\n\r\nOtherwise (if you really want to keep the logic you have), the following block\r\n```\r\n workflow_run:\r\n workflows: [\"Check for dependency modification\"]\r\n branches: [\"main\"]\r\n types: [completed]\r\n```\r\nshould go in `self-push-caller.yml`.\r\n\r\nThe main point is to run the actual push CI tests on another branch (`push-ci`), otherwise there will be more than 256 job results shown in the commit history page.\r\n\r\nI would prefer the most direct approach (the first one).",
"@ydshieh I *believe* I addressed what you wanted, let me know if otherwise 😄 "
] | 1,654
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR introduces a new workflow that checks whether `setup.py` was modified during a pull request merge. If so, it triggers a rebuild of the Docker images before running the `on-merge` tests.
It also changes `self-push` to be run on a `workflow_run`, specifically the new `check-dependencies` job. This new job maintains the same "on-merge" check the previous job had for determining whether and when it should run.
Finally, `build-docker-images` is now also run on a `workflow_call`, so that `check-dependencies` can trigger it.
This is the same as done in Accelerate recently, with the only difference being additional file filters https://github.com/huggingface/accelerate/pull/424
## Why is this needed?
A frustration I've noticed over the last few months in this repo is that the main test runners can fail for an entire day whenever a new dependency is introduced. This PR solves that problem, since the issue stems from the Docker images being out of date.
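The dependency check the workflow performs can be sketched as follows; the helper name and input format below are assumptions for illustration, not code taken from this PR's workflow files.

```python
# Hypothetical sketch of the workflow's check: given the list of files touched
# by a merge commit, decide whether the Docker images need rebuilding before
# the on-merge tests run.
from typing import Iterable


def needs_docker_rebuild(changed_files: Iterable[str]) -> bool:
    """Rebuild images only when the dependency definition (setup.py) changed."""
    return any(path == "setup.py" for path in changed_files)


print(needs_docker_rebuild(["setup.py", "README.md"]))                # True
print(needs_docker_rebuild(["src/transformers/modeling_utils.py"]))   # False
```

In the real CI this decision gates a `workflow_call` to the image-build workflow rather than a Python function call.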
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17573/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17573",
"html_url": "https://github.com/huggingface/transformers/pull/17573",
"diff_url": "https://github.com/huggingface/transformers/pull/17573.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17573.patch",
"merged_at": 1656017493000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17572
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17572/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17572/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17572/events
|
https://github.com/huggingface/transformers/pull/17572
| 1,261,744,070
|
PR_kwDOCUB6oc45KWWf
| 17,572
|
DETR: Add comment regarding backbones
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17572). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,659
| 1,659
|
MEMBER
| null |
Adds an informative message in DETR to mention that the backbone gets initialized for its architecture and not necessarily for its weights.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17572/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17572",
"html_url": "https://github.com/huggingface/transformers/pull/17572",
"diff_url": "https://github.com/huggingface/transformers/pull/17572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17572.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17571
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17571/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17571/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17571/events
|
https://github.com/huggingface/transformers/pull/17571
| 1,261,736,790
|
PR_kwDOCUB6oc45KUyB
| 17,571
|
Add batchnorm running calc weight to porting script
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing as the change was added in this PR: https://github.com/huggingface/transformers/pull/17271\r\n\r\n@sgugger I'll open a follow-up PR to address your comments.",
"Actually, I misread the file where it's modified 😅 \r\nIt's fine for the conversion like this, it's the code in modeling_utils that does this I don't want (like [here](https://github.com/huggingface/transformers/blob/9fc34235fa3329c918d5ba67ce09a0cc8f399c59/src/transformers/modeling_utils.py#L432)). Sorry I didn't pay close enough attention."
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Adds two weight-name mappings (PyTorch -> TensorFlow) necessary for cross-loading weights of batchnorm layers trained with `track_running_stats=True`.
This was necessary for cross-loading weights for the ResNet and RegNet ports. #17536 , #17554
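A minimal sketch of the kind of PyTorch -> TensorFlow name translation involved; the exact mapping strings added by this PR are not shown here, so the dictionary below assumes the standard PyTorch (`running_mean`/`running_var`) and Keras (`moving_mean`/`moving_variance`) batchnorm variable names.

```python
# Illustrative only: translate the buffer names that batchnorm layers track
# when trained with track_running_stats=True from PyTorch naming to the
# Keras/TensorFlow equivalents used during cross-loading.
PT_TO_TF_BATCHNORM = {
    "running_mean": "moving_mean",
    "running_var": "moving_variance",
}


def pt_to_tf_weight_name(pt_name: str) -> str:
    """Map a dotted PyTorch weight name to a slash-separated TF-style name."""
    *scope, leaf = pt_name.split(".")
    leaf = PT_TO_TF_BATCHNORM.get(leaf, leaf)  # non-batchnorm leaves pass through
    return "/".join(scope + [leaf])


print(pt_to_tf_weight_name("encoder.bn1.running_mean"))  # encoder/bn1/moving_mean
```

The real cross-loading logic lives in the porting utilities and handles many more cases; this only shows why the two extra mappings are needed.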
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
NB. I couldn't find tests corresponding to the current weight loading logic.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17571/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17571/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17571",
"html_url": "https://github.com/huggingface/transformers/pull/17571",
"diff_url": "https://github.com/huggingface/transformers/pull/17571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17571.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17570
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17570/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17570/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17570/events
|
https://github.com/huggingface/transformers/pull/17570
| 1,261,561,053
|
PR_kwDOCUB6oc45Juhf
| 17,570
|
enable cpu distribution training using mpirun
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger please review or invite someone else.",
"I opened an issue https://github.com/huggingface/transformers/issues/17581 for it.",
"@yao-matrix"
] | 1,654
| 1,666
| 1,655
|
CONTRIBUTOR
| null |
- Command example: `mpirun -n 2 python3 run_qa.py --no_cuda --xpu_backend ccl xxxx`
- `MASTER_ADDR` and `MASTER_PORT` should be set as environment variables:
  - `export MASTER_ADDR=127.0.0.1`
  - `export MASTER_PORT=29500`

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/17581
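The `MASTER_ADDR`/`MASTER_PORT` environment setup required for the mpirun launch can be sketched in Python; `rendezvous_config` is a hypothetical helper for illustration, with defaults matching the example exports, not code from this PR.

```python
# Illustrative only: collect the rendezvous settings that a distributed CPU
# run launched via mpirun would read from the environment. MASTER_ADDR and
# MASTER_PORT must be exported before launching, as in the example above.
import os


def rendezvous_config() -> dict:
    """Read the master address/port that torch.distributed-style init uses."""
    return {
        "master_addr": os.environ.get("MASTER_ADDR", "127.0.0.1"),
        "master_port": int(os.environ.get("MASTER_PORT", "29500")),
    }


# Mirror the example exports from the PR description.
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
cfg = rendezvous_config()
```

Every rank launched by mpirun sees the same two variables, which is how the processes find each other for the `ccl` backend.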
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17570/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17570/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17570",
"html_url": "https://github.com/huggingface/transformers/pull/17570",
"diff_url": "https://github.com/huggingface/transformers/pull/17570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17570.patch",
"merged_at": 1655141648000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17569
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17569/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17569/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17569/events
|
https://github.com/huggingface/transformers/issues/17569
| 1,261,531,239
|
I_kwDOCUB6oc5LMXBn
| 17,569
|
Microsoft's SpeechT5 for Spoken Language Processing (ASR, TTS, ST...)
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
closed
| false
| null |
[] |
[
"Piecing these modules together should be a fun challenge! Happy to help with integration here :-)",
"Hi @sanchit-gandhi, if compute resources would not be a problem here ( just have a 4GB GPU on my laptop or Google Colab with me here :upside_down_face: ) then I want to help in adding this model as well",
"Hey @ayushtues! Lovely to meet you :-) Compute shouldn't be a problem! We can begin with 'dummy' versions of the model in order to verify that our implementations work, then scale up to the full size ones and share resources. \r\n\r\nAs a starting point, we can start the PR with the 'add-new-model-like' command:\r\nhttps://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the Wav2Vec2 model:\r\nhttps://github.com/huggingface/transformers/tree/main/src/transformers/models/wav2vec2\r\nThis will take care of the Wav2Vec2 Feature Encoder (speech pre-net) and Wav2Vec2 Encoder (speech encoder block) for us! Having automatically created all the files, the next step will be to verify that the feature extractor and speech encoder block match Microsoft's implementation.\r\n\r\nFeel free to ping me on Slack (sanchit[at]huggingface.co) if you have any questions!",
"Hi @sanchit-gandhi, thanks for the reply, great I'll start reading the paper in detail, checking out its repo, and also Huggingface's contribution documentation, and then start working on the PR.\r\n\r\nI am not added on slack, and can't find a public invite link, can you send me an invite if possible?",
"Awesome! Let me know how you get on :-)\r\n\r\nIf you drop me an email at sanchit[at]huggingface.co I can send you over an invite!",
"Hey there @sanchit-gandhi @ayushtues, I’m also interested in contributing, if any assistance is still required. Please do let me know!",
"Hey @mingboiz! Great to have you on-board! I'll invite you over to the Slack channel too!",
"Hey there @sanchit-gandhi @ayushtues, I’m also interested in contributing. @sanchit-gandhi I have sent you a DM on slack. ",
"Hey @mingboiz and @anuragshas, if you guys could drop me an email at sanchit[at]huggingface.co I can send you over email invites to the Slack channel!",
"Great to see so much interest in adding this model! 🔥 Should be a fun collaborative project!",
"Hi @sanchit-gandhi, I have sent you the request for the invite link. I would also love to contribute here.\r\n",
"I'm also open to helping out with this.",
"I created a branch with a model made from wav2vec2.\r\nhttps://github.com/huggingface/transformers/pull/17982",
"Is this still being developed on? I'd be happy to contribute here as well. @sanchit-gandhi ",
"Hey @kasmith11, sorry for the late response! If you drop me an email at sanchit [at] huggingface.co I can add you to the Slack channel for the model addition. There's plenty of opportunity for contribution!",
"Closed via https://github.com/huggingface/transformers/pull/18922"
] | 1,654
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
### Model description
Motivated by the success of [T5](https://arxiv.org/abs/1910.10683) for pre-training NLP models, [SpeechT5](https://arxiv.org/abs/2110.07205) explores a cross-modal framework for learning joint contextual representations for speech and text data via a shared encoder-decoder structure.
The model architecture consists of an encoder-decoder transformer module and six modal-specific pre/post nets. The pre-nets convert the input speech $\mathbf{X}^{s} \in \mathcal{D}^{s}$ or text $\mathbf{X}^{t} \in \mathcal{D}^{t}$ to a unified space of hidden representations. The hidden representations are then fed into the shared encoder-decoder to perform the sequence-to-sequence conversion. Finally, the post-nets generate the output in the speech or text modality, based on the decoder output.
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
The paper was accepted at the ACL 2022 main conference: https://arxiv.org/abs/2110.07205
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code and checkpoints: https://github.com/microsoft/SpeechT5/tree/main/SpeechT5
SpeechT5 combines a transformer encoder-decoder backbone with speech/text specific pre/post-nets. Thus, many of the modules required for the SpeechT5 model are already partially or fully implemented in Transformers.
Model architecture:
1. Transformer encoder block: Wav2Vec2/Hubert-encoder transformer block (Wav2Vec2EncoderLayer) https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L725
2. Transformer decoder block: BERT-decoder transformer block (BertLMHeadModel) https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/bert/modeling_bert.py#L1157
3. Speech-encoder pre-net: convolutional feature extractor of Wav2Vec2 (Wav2Vec2FeatureEncoder) https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L408
4. Speech-decoder pre-net: three fully connected layers with the ReLU activation, fed with the log Mel-filterbank of the speech signal (new, original code: https://github.com/microsoft/SpeechT5/blob/main/SpeechT5/speecht5/models/modules/speech_decoder_prenet.py)
5. Speech-decoder post-net: linear layer fed with the decoder output to predict the log Mel-filterbank $\mathbf{Y}^{f} = (\mathbf{y}^{f}_{1}, \dots, \mathbf{y}^{f}_{N})$, followed by five 1-dimensional convolutional layers to refine the predicted $\mathbf{Y}^{f}$ (new, original code: https://github.com/microsoft/SpeechT5/blob/main/SpeechT5/speecht5/models/modules/speech_decoder_postnet.py)
6. Text pre/post-net: shared embeddings. See https://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/models/bart/modeling_bart.py#L1151
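Of the pieces above, item 4 (the speech-decoder pre-net) is simple enough to sketch directly. The following is a minimal, framework-free illustration of "three fully connected layers with the ReLU activation, fed with the log Mel-filterbank of the speech signal" — all layer sizes (80 Mel bins, 256 hidden units) are assumptions for illustration, not the actual SpeechT5 hyperparameters:

```python
import random

def linear(x, weight, bias):
    # y = W x + b for a single feature vector x (lists of floats)
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(weight, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

def speech_decoder_prenet(frame, layers):
    """Three fully connected layers with ReLU, applied to one
    log Mel-filterbank frame (illustrative dimensions only)."""
    h = frame
    for weight, bias in layers:
        h = relu(linear(h, weight, bias))
    return h

def random_layer(n_in, n_out):
    # random weights just to make the sketch runnable
    return ([[random.uniform(-0.1, 0.1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

# assumed sizes: 80 Mel bins -> 256 -> 256 -> 256 hidden units
random.seed(0)
layers = [random_layer(80, 256), random_layer(256, 256), random_layer(256, 256)]
frame = [random.uniform(-4.0, 4.0) for _ in range(80)]
out = speech_decoder_prenet(frame, layers)
```

In the real implementation these would of course be `torch.nn.Linear` modules; the sketch only shows the data flow of the pre-net.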
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17569/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17569/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17568
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17568/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17568/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17568/events
|
https://github.com/huggingface/transformers/issues/17568
| 1,261,488,981
|
I_kwDOCUB6oc5LMMtV
| 17,568
|
LayoutLMv3 not downloading via official code samples
|
{
"login": "microcoder-py",
"id": 71311548,
"node_id": "MDQ6VXNlcjcxMzExNTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/71311548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/microcoder-py",
"html_url": "https://github.com/microcoder-py",
"followers_url": "https://api.github.com/users/microcoder-py/followers",
"following_url": "https://api.github.com/users/microcoder-py/following{/other_user}",
"gists_url": "https://api.github.com/users/microcoder-py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/microcoder-py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/microcoder-py/subscriptions",
"organizations_url": "https://api.github.com/users/microcoder-py/orgs",
"repos_url": "https://api.github.com/users/microcoder-py/repos",
"events_url": "https://api.github.com/users/microcoder-py/events{/privacy}",
"received_events_url": "https://api.github.com/users/microcoder-py/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I believe LayoutLM-v3 is not in an official release yet, so you'll need to install from source in order to use it for now.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
Used the official code sample for the microsoft/layoutlmv3-base model, but it is not working
Link to code: https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv3#transformers.LayoutLMv3Model
Error:
KeyError Traceback (most recent call last)
<ipython-input-17-2e4d79cdf031> in <module>()
3 import torch
4
----> 5 processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
6 model = AutoModelForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base")
7
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/processing_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
198 )
199 if tokenizer_config_file is not None:
--> 200 with open(tokenizer_config_file, encoding="utf-8") as reader:
201 config_dict = json.load(reader)
202
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
698 " set the option `trust_remote_code=True` to remove this error."
699 )
--> 700 if kwargs.get("revision", None) is None:
701 logger.warning(
702 "Explicitly passing a `revision` is encouraged when loading a configuration with custom code to "
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key)
407 A dictionary that lazily load its values when they are requested.
408 """
--> 409
410 def __init__(self, mapping):
411 self._mapping = mapping
KeyError: 'layoutlmv3'
```
### Who can help?
@Lysan
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/11aIci0c_UuId5BK2-U6QwZgEKE3ran9I?usp=sharing
### Expected behavior
```shell
The model downloads and executes with the same behaviour as described on HF
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17568/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17567
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17567/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17567/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17567/events
|
https://github.com/huggingface/transformers/issues/17567
| 1,261,459,910
|
I_kwDOCUB6oc5LMFnG
| 17,567
|
Permission denied
|
{
"login": "muhammad-ahmed-ghani",
"id": 63394104,
"node_id": "MDQ6VXNlcjYzMzk0MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/63394104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muhammad-ahmed-ghani",
"html_url": "https://github.com/muhammad-ahmed-ghani",
"followers_url": "https://api.github.com/users/muhammad-ahmed-ghani/followers",
"following_url": "https://api.github.com/users/muhammad-ahmed-ghani/following{/other_user}",
"gists_url": "https://api.github.com/users/muhammad-ahmed-ghani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muhammad-ahmed-ghani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muhammad-ahmed-ghani/subscriptions",
"organizations_url": "https://api.github.com/users/muhammad-ahmed-ghani/orgs",
"repos_url": "https://api.github.com/users/muhammad-ahmed-ghani/repos",
"events_url": "https://api.github.com/users/muhammad-ahmed-ghani/events{/privacy}",
"received_events_url": "https://api.github.com/users/muhammad-ahmed-ghani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi, can you also open a Discussion on the model repo, i.e. https://huggingface.co/hkunlp/from_all_T5_large_prefix_spider_with_cell_value2 (and link this issue to/from there)?\r\n\r\nThanks 🙏 "
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
### System Info
```shell
transformers==4.19.2
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("hkunlp/from_all_T5_large_prefix_spider_with_cell_value2")
model = AutoModel.from_pretrained("hkunlp/from_all_T5_large_prefix_spider_with_cell_value2")
### Expected behavior
```shell
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1850, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 128, in __init__
super().__init__(
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 107, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: Permission denied (os error 13)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17567/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17566
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17566/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17566/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17566/events
|
https://github.com/huggingface/transformers/issues/17566
| 1,261,334,851
|
I_kwDOCUB6oc5LLnFD
| 17,566
|
Trouble parallelizing GPT-NeoX
|
{
"login": "StellaAthena",
"id": 15899312,
"node_id": "MDQ6VXNlcjE1ODk5MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StellaAthena",
"html_url": "https://github.com/StellaAthena",
"followers_url": "https://api.github.com/users/StellaAthena/followers",
"following_url": "https://api.github.com/users/StellaAthena/following{/other_user}",
"gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions",
"organizations_url": "https://api.github.com/users/StellaAthena/orgs",
"repos_url": "https://api.github.com/users/StellaAthena/repos",
"events_url": "https://api.github.com/users/StellaAthena/events{/privacy}",
"received_events_url": "https://api.github.com/users/StellaAthena/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Gently pinging @patil-suraj and maybe also @sgugger regarding `accelerate` here - I think `accelerate` should be used here rather than `parallelize` no? \r\n\r\n@LysandreJik @sgugger should we maybe fully deprecate `parallelize()` now?",
"Yes the `parallelize` API will be fully deprecated soon (like this week or the next) so there is no point adding support to new models.\r\n\r\n> I've been having trouble getting accelerate working [with my custom codebase](https://github.com/bigscience-workshop/lm-evaluation-harness/tree/cjlovering/accelerate-2)\r\n\r\nCould you tell us more about the problem you are encountering there maybe?",
"Hi @sgugger , I want to do inference on t5-11b model and tried the `parallelize` method. It showed the error like this way\r\n\r\n```\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 683, in _call_and_handle_interrupt\r\n return trainer_fn(*args, **kwargs)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 950, in _test_impl\r\n results = self._run(model, ckpt_path=self.tested_ckpt_path)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1195, in _run\r\n self._dispatch()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1271, in _dispatch\r\n self.training_type_plugin.start_evaluating(self)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py\", line 178, in start_evaluating\r\n self.spawn(self.new_process, trainer, self.mp_queue, return_result=False)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py\", line 201, in spawn\r\n mp.spawn(self._wrapped_function, args=(function, args, kwargs, return_queue), nprocs=self.num_processes)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 230, in spawn\r\n return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 188, in start_processes\r\n while not context.join():\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 136, in join\r\n signal_name=name\r\ntorch.multiprocessing.spawn.ProcessExitedException: process 2 terminated with signal 
SIGABRT\r\npython-BaseException\r\nwandb: Waiting for W&B process to finish... (success).\r\nwandb: \r\nwandb: Synced lrgenerative_logic_comp1_v7_1.0_new_seed42_trim_filtered_t5_11b_13_06_2022_ddd9ce1c: https://wandb.ai/soumya_research/lr_dataset/runs/32ujsgo3\r\nwandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)\r\nwandb: Find logs at: ./wandb/run-20220613_020333-32ujsgo3/logs\r\n[W CudaIPCTypes.cpp:21] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]\r\n\r\nProcess finished with exit code 139 (interrupted by signal 11: SIGSEGV)\r\n```\r\n\r\nI tried to find some solutions and also set the `num_workers` of dataset class to 0. But still doesn't work.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ZeyiLiao,\r\n\r\nplease don't use `parallelize` for the `t5-11b` model instead you can load it using the `device_map=\"auto\"` see [here](https://huggingface.co/docs/transformers/v4.20.1/en/main_classes/model#large-model-loading)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
```
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik, @stas00
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I have been trying to implement the `.parallelize()` method for GPT-NeoX. I am aware that it is soon going to be obsolete, but I've been having trouble getting `accelerate` working [with my custom codebase](https://github.com/bigscience-workshop/lm-evaluation-harness/tree/cjlovering/accelerate-2), but `.parallelize()` does work so I figured I would give it a shot. My fork can be found [here](https://github.com/StellaAthena/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py) and is based off the implementation for GPT-J.
Unfortunately, it does not seem like I have implemented it correctly. That said, the error message I am getting seems quite strange to me and is not what I would expect to get. I have verified that my code runs correctly for GPT-J-6B, including parallelism.
```
Traceback (most recent call last):
File "/home/mchorse/bigbio/lm-evaluation-harness/test.py", line 4, in <module>
model.parallelize()
File "/home/mchorse/miniconda3/envs/evalharness/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPTNeoXForCausalLM' object has no attribute 'parallelize'
```
If you want to run my code, you can do so by following the instructions [here](https://github.com/bigscience-workshop/lm-evaluation-harness#overview). However I do not recommend doing so. Instead, the same error can be generated by running the following basic script:
```python
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
model.parallelize()
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
### Expected behavior
I expect my code to work.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17566/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17565
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17565/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17565/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17565/events
|
https://github.com/huggingface/transformers/pull/17565
| 1,261,207,890
|
PR_kwDOCUB6oc45Ii1b
| 17,565
|
Added translation of index.mdx to Portuguese Issue #16824
|
{
"login": "rzimmerdev",
"id": 35232794,
"node_id": "MDQ6VXNlcjM1MjMyNzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/35232794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rzimmerdev",
"html_url": "https://github.com/rzimmerdev",
"followers_url": "https://api.github.com/users/rzimmerdev/followers",
"following_url": "https://api.github.com/users/rzimmerdev/following{/other_user}",
"gists_url": "https://api.github.com/users/rzimmerdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rzimmerdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rzimmerdev/subscriptions",
"organizations_url": "https://api.github.com/users/rzimmerdev/orgs",
"repos_url": "https://api.github.com/users/rzimmerdev/repos",
"events_url": "https://api.github.com/users/rzimmerdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/rzimmerdev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you again @rzimmerdev for the translation of `index.mdx` and for correcting mistakes in `training.mdx`!\r\n\r\n@sgugger looks good to me :). If possible, this would be a good addition to the next release."
] | 1,654
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Creates folder pt in docs/source for translating documentation to Portuguese
Currently, only the index.mdx file was translated as of this PR.
Fixes issue #16824
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17565/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17565",
"html_url": "https://github.com/huggingface/transformers/pull/17565",
"diff_url": "https://github.com/huggingface/transformers/pull/17565.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17565.patch",
"merged_at": 1655510765000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17564
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17564/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17564/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17564/events
|
https://github.com/huggingface/transformers/issues/17564
| 1,261,191,632
|
I_kwDOCUB6oc5LLEHQ
| 17,564
|
Shard checkpoint for `tf` and `flax`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[] | 1,654
| 1,655
| 1,655
|
COLLABORATOR
| null |
### Feature request
The same sharding capabilities as pytorch should be available to `flax` and `tf`. This is required in order to push the OPT30B model.
### Motivation
Pushing $>45GB$ models (and having the same behaviour as in PyTorch).
### Your contribution
Could start working on that when I'm back from holidays!
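The core of the requested feature is framework-agnostic: split a flat dict of weights into shards under a byte budget, plus an index mapping each weight name to its shard (on the PyTorch side this is what `save_pretrained(..., max_shard_size=...)` produces). A minimal sketch of that packing logic — function and variable names are illustrative, not the actual `transformers` implementation:

```python
def shard_checkpoint(state_dict, max_shard_bytes):
    """Greedily pack weights into shards no larger than max_shard_bytes
    (a single oversized weight still gets its own shard). Returns the
    list of shards and an index mapping weight name -> shard number."""
    shards, index = [], {}
    current, current_size = {}, 0
    for name, (nbytes, payload) in state_dict.items():
        # flush the current shard if this weight would overflow it
        if current and current_size + nbytes > max_shard_bytes:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = payload
        current_size += nbytes
        index[name] = len(shards)  # shard this weight will end up in
    if current:
        shards.append(current)
    return shards, index

# toy "state dict": name -> (size in bytes, payload)
sd = {
    "embed":  (6, "E"),
    "layer0": (4, "L0"),
    "layer1": (4, "L1"),
    "head":   (3, "H"),
}
shards, index = shard_checkpoint(sd, max_shard_bytes=8)
```

With an 8-byte budget this yields three shards (`embed` alone, `layer0`+`layer1`, `head` alone); the index is what a `*.index.json` file would serialize so that loading can fetch only the shards it needs.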
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17564/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17563
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17563/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17563/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17563/events
|
https://github.com/huggingface/transformers/pull/17563
| 1,261,176,709
|
PR_kwDOCUB6oc45IclV
| 17,563
|
Remove RuntimeErrors for NaN-checking in 20B
|
{
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
https://github.com/huggingface/transformers/issues/17452#issuecomment-1142141196
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17563/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17563",
"html_url": "https://github.com/huggingface/transformers/pull/17563",
"diff_url": "https://github.com/huggingface/transformers/pull/17563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17563.patch",
"merged_at": 1654522147000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17562
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17562/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17562/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17562/events
|
https://github.com/huggingface/transformers/issues/17562
| 1,261,169,555
|
I_kwDOCUB6oc5LK-uT
| 17,562
|
Add a stop_sequence option to text generation pipeline
|
{
"login": "Jcwscience",
"id": 14113132,
"node_id": "MDQ6VXNlcjE0MTEzMTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/14113132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jcwscience",
"html_url": "https://github.com/Jcwscience",
"followers_url": "https://api.github.com/users/Jcwscience/followers",
"following_url": "https://api.github.com/users/Jcwscience/following{/other_user}",
"gists_url": "https://api.github.com/users/Jcwscience/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jcwscience/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jcwscience/subscriptions",
"organizations_url": "https://api.github.com/users/Jcwscience/orgs",
"repos_url": "https://api.github.com/users/Jcwscience/repos",
"events_url": "https://api.github.com/users/Jcwscience/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jcwscience/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Unstale.\r\n\r\nThis is actually a great suggestion I've been meaning to add for quite a while.\r\n\r\nIf you're willing to do a PR here's the high level vision I have for this:\r\n\r\n- Add a new parameter `stop_sequence` (Here's some doc on how to add a pipeline which should cover parameters)\r\n- within `_sanitized_parameters` consume `stop_sequence`, tokenize it. Raise a warning if sequence is multiple tokens long (being able to stop on a multiple tokens sequence is not yet covered in `transformers` and would require even more work, we can start small here). Set within the `forward_parameters[\"generate_kwargs\"]` `eos_token_id` to the new stop sequence first token.\r\n- Add the docstring about this new parameter\r\n- Add some tests ( I can help with that as adding tests should be relatively forward, but as the test are really attempting to cover ALL models and all variants, they can fail in odd ways, or worse, the test could easily miss some configurations and fail to see any regression in the future).\r\n\r\nCheers.\r\nI would really like to add such a parameter myself, but at the moment I don't really have the time to dedicate to this, guidance on a PR is the best I can offer 100% !\r\n",
"Hi @Narsil I'd love to take this on if it's still open.",
"I think it's still open. Thanks for taking this !",
"Hi @Narsil is this issue still open? Is there anything else I can help with?",
"It seems like this is mostly done, except for documentation [here](https://huggingface.co/docs/transformers/main_classes/pipelines?highlight=transformers.TextGenerationPipeline#transformers.TextGenerationPipeline.__call__), which I'm happy to take. @KMFODA is there a reason why that wasn't added? ",
"Hey @pruksmhc good spot. I missed adding the docs for this, I'll add it as soon as I can. The PR was merged last year though so the functionality should be available in the main branch. Once docs are added I'll post here so we can close this Issue.",
"Hi, it seems that this issue is still open just waiting for a docstring to be added :thinking: \r\n@KMFODA will you be able to add it any time soon? if not and @pruksmhc is busy with other stuff, I'm happy to be the second backup :wink: ",
"If this is to be added, can it be a list of stop sequences instead of a single one?\r\n\r\nToolformer, for example, has the AI invoke tools, so I would want two stop sequences: one for tool invocation, and one for message completion\r\n\r\n```\r\nUser: What files are on my desktop?\r\nAssistant: On your desktop, there are [Get-ChildItem ~/Desktop ->\r\n```\r\n\r\nHere, the `->` arrow is meant to pause generation, execute the command, append it to the prompt, and then continue\r\n\r\n```\r\nUser: What files are on my desktop?\r\nAssistant: On your desktop, there are [Get-ChildItem ~/Desktop -> [\"a.txt\",\"b.txt\",\"c.md\"]] two text files called \"a\" and \"b\", and a markdown file called \"c\"<stop>\r\n```\r\n\r\nThe agent uses the command output to provide context when formulating its response. The second stop token, `<stop>`, is used to prevent the assistant from continuing to generate the conversation by simulating what the user might say.\r\n\r\n ",
"This issue seems to have been duplicated in #26280.\r\n\r\nDocumentation was written and merged for #26280, and the issue was recently closed. If that work was satisfactory, then this issue can be closed as well.",
"Indeed, thanks @Tanman2001 "
] | 1,654
| 1,696
| 1,696
|
NONE
| null |
### Feature request
A stop sequence option to allow text generation models to stop generating when a specific token is reached.
### Motivation
When I use GPT-J on a slower machine every extra generated token counts. If I need the model to answer a question for example, the only way I can ensure it isn’t cut off is to set the max length well above what I expect the answer to be. That takes a considerable amount of extra processing power for useless data.
### Your contribution
I am only beginning to understand the core workings of model inference, so I’m not sure what I can do to help. I might be able to gather documentation, or test code.
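As a rough sketch of the requested behaviour (the function name and token ids below are made up for illustration; inside `generate()` the equivalent effect comes from setting `eos_token_id` to the stop token, which also saves the compute instead of truncating afterwards):

```python
def truncate_at_stop_token(generated_ids, stop_token_id):
    """Return the prefix of `generated_ids` up to and including the first
    occurrence of `stop_token_id`; return everything if it never occurs."""
    for i, token_id in enumerate(generated_ids):
        if token_id == stop_token_id:
            return generated_ids[: i + 1]
    return generated_ids

# Example: stop as soon as (hypothetical) token id 50256 is produced
ids = [12, 44, 7, 50256, 99, 3]
print(truncate_at_stop_token(ids, 50256))  # [12, 44, 7, 50256]
```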
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17562/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17561
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17561/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17561/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17561/events
|
https://github.com/huggingface/transformers/issues/17561
| 1,261,123,682
|
I_kwDOCUB6oc5LKzhi
| 17,561
|
TokenClassification with DistilBert does not learn
|
{
"login": "KyloPrem",
"id": 26967090,
"node_id": "MDQ6VXNlcjI2OTY3MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/26967090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KyloPrem",
"html_url": "https://github.com/KyloPrem",
"followers_url": "https://api.github.com/users/KyloPrem/followers",
"following_url": "https://api.github.com/users/KyloPrem/following{/other_user}",
"gists_url": "https://api.github.com/users/KyloPrem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KyloPrem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KyloPrem/subscriptions",
"organizations_url": "https://api.github.com/users/KyloPrem/orgs",
"repos_url": "https://api.github.com/users/KyloPrem/repos",
"events_url": "https://api.github.com/users/KyloPrem/events{/privacy}",
"received_events_url": "https://api.github.com/users/KyloPrem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) as well?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
NONE
| null |
Hey guys,
I need to classify each token in a given sentence as either 0: **irrelevant** or 1: **relevant**. I tried to follow the Hugging Face tutorial on TokenClassification with DistilBERT/BERT for NER, but the transfer does not seem to result in a model that learns to make predictions. I am not sure where the problem lies — could someone give me a pointer on how to debug the training process with `Trainer`?
Data is of the following format:
**tokens**: List(String) ['I', 'am' 'an' 'example' , '.' ]
**labels**: List(Integer) [0, 1, 1, 1, 0]
Here is the Dataset format I use. As the tokens from the dataset were extracted with Bert, I just convert them to their IDs and stitch them together with the special tokens [CLS] and [SEP]. For the special tokens I assign a label of -100 to ignore them during the loss computation.
```
from torch.utils.data import Dataset

class relDataset(Dataset):
    def __init__(self, tokens, labels, tokenizer):
        self.tokens = tokens
        self.labels = labels
        self.tokenizer = tokenizer

    def __getitem__(self, idx):
        encoding = {}
        # [CLS] = 101 and [SEP] = 102 in BERT-style vocabularies
        encoding["input_ids"] = [101] + self.tokenizer.convert_tokens_to_ids(self.tokens[idx]) + [102]
        encoding["attention_mask"] = [1] * len(encoding["input_ids"])
        # Special tokens get label -100 so they are ignored by the loss
        encoding["labels"] = [-100] + self.labels[idx] + [-100]
        return encoding

    def __len__(self):
        return len(self.labels)  # was `self.label`, which would raise an AttributeError
```
```
train = relDataset(tokens=train_df["tokens"].values,
                   labels=train_df["labels"].values,
                   tokenizer=tokenizer)
```
```
test = relDataset(tokens=test_df["tokens"].values,
                  labels=test_df["labels"].values,
                  tokenizer=tokenizer)
```
Then I set up tokenizer, model, and trainer
```
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
from transformers import DataCollatorForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

device = torch.device("cuda")
model.to(device)

# Pads input_ids/attention_mask and pads labels with -100
# (`data_collator` was used below but never defined in the original snippet)
data_collator = DataCollatorForTokenClassification(tokenizer)

training_args = TrainingArguments(
    output_dir="token-classifier",  # required argument, missing in the original
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=6,
    weight_decay=0.01
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train,  # was `train_A8`, undefined in this snippet
    eval_dataset=test,    # was `eval_A8`
    tokenizer=tokenizer,
    data_collator=data_collator,
    # compute_metrics=compute_metrics
)
```
Training the model with `trainer.train()`, I get the following performance:
![image](https://user-images.githubusercontent.com/26967090/172057546-bd8f1b71-0a2d-4ce2-be63-e39810982b23.png)
The training loss seems to be static, and the predictions on the test data do not seem to change.
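One sanity check worth running before training (a hypothetical debugging aid, not part of the original script): the collator must pad `labels` with `-100` to the same length as `input_ids`, otherwise the loss sees misaligned labels. A minimal pure-Python version of that padding logic, roughly what `DataCollatorForTokenClassification` does internally:

```python
def pad_batch(features, pad_token_id=0, label_pad_id=-100):
    """Pad a batch of {'input_ids', 'attention_mask', 'labels'} dicts to the
    longest sequence, padding labels with -100 so padding is ignored by the loss."""
    max_len = max(len(f["input_ids"]) for f in features)
    batch = {"input_ids": [], "attention_mask": [], "labels": []}
    for f in features:
        pad = max_len - len(f["input_ids"])
        batch["input_ids"].append(f["input_ids"] + [pad_token_id] * pad)
        batch["attention_mask"].append(f["attention_mask"] + [0] * pad)
        batch["labels"].append(f["labels"] + [label_pad_id] * pad)
    return batch

batch = pad_batch([
    {"input_ids": [101, 7, 102], "attention_mask": [1, 1, 1], "labels": [-100, 1, -100]},
    {"input_ids": [101, 7, 8, 102], "attention_mask": [1, 1, 1, 1], "labels": [-100, 1, 0, -100]},
])
# After padding, every field in every row has the same length
assert all(len(row) == 4 for rows in batch.values() for row in rows)
```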
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17561/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17560
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17560/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17560/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17560/events
|
https://github.com/huggingface/transformers/pull/17560
| 1,261,003,050
|
PR_kwDOCUB6oc45H7Cg
| 17,560
|
Fix some typos.
|
{
"login": "Yulv-git",
"id": 34329208,
"node_id": "MDQ6VXNlcjM0MzI5MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/34329208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yulv-git",
"html_url": "https://github.com/Yulv-git",
"followers_url": "https://api.github.com/users/Yulv-git/followers",
"following_url": "https://api.github.com/users/Yulv-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Yulv-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yulv-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yulv-git/subscriptions",
"organizations_url": "https://api.github.com/users/Yulv-git/orgs",
"repos_url": "https://api.github.com/users/Yulv-git/repos",
"events_url": "https://api.github.com/users/Yulv-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yulv-git/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix some typos.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17560/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17560",
"html_url": "https://github.com/huggingface/transformers/pull/17560",
"diff_url": "https://github.com/huggingface/transformers/pull/17560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17560.patch",
"merged_at": 1657530013000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17559
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17559/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17559/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17559/events
|
https://github.com/huggingface/transformers/issues/17559
| 1,260,991,693
|
I_kwDOCUB6oc5LKTTN
| 17,559
|
Save a PyTorch BERT fine-tuned model with custom forward function and heads with Huggingface
|
{
"login": "Ch-rode",
"id": 61243245,
"node_id": "MDQ6VXNlcjYxMjQzMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/61243245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ch-rode",
"html_url": "https://github.com/Ch-rode",
"followers_url": "https://api.github.com/users/Ch-rode/followers",
"following_url": "https://api.github.com/users/Ch-rode/following{/other_user}",
"gists_url": "https://api.github.com/users/Ch-rode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ch-rode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ch-rode/subscriptions",
"organizations_url": "https://api.github.com/users/Ch-rode/orgs",
"repos_url": "https://api.github.com/users/Ch-rode/repos",
"events_url": "https://api.github.com/users/Ch-rode/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ch-rode/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, \r\nI believe in order to load your model via Transformers’ `AutoModel` you need to implement your custom model class into Transofmers’ repo, and “register” it in “models/auto” package. \r\nHope this helps.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
NONE
| null |
### System Info
I have created my own BertClassifier model, starting from a pretrained one, and then added my own classification head composed of several layers. After training, I want to save the model using `model.save_pretrained()`, but when I print it after loading it back I don't see my classifier head.
The model structure is the following. How can I save the whole structure of my model and make it fully accessible with
`AutoModel.from_pretrained('folder_path')`? Thanks!
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from torch import nn
from transformers import AutoConfig, BertModel, PreTrainedModel

class BertClassifier(PreTrainedModel):
    """Bert Model for Classification Tasks."""
    config_class = AutoConfig

    def __init__(self, config, freeze_bert=True):  # tuning only the head
        """
        @param config: the model configuration
        @param freeze_bert (bool): Set `False` to fine-tune the BERT model
        """
        super().__init__(config)
        # Instantiate the BERT backbone (this line was missing in the original
        # snippet even though `forward` uses `self.bert`)
        self.bert = BertModel(config)
        if freeze_bert:
            for param in self.bert.parameters():
                param.requires_grad = False
        # Specify hidden size of BERT, hidden size of our classifier, and number of labels
        self.D_in = 1024  # hidden size of Bert
        self.H = 512
        self.D_out = 2
        # Instantiate the classifier head as a small feed-forward network
        self.classifier = nn.Sequential(
            nn.Linear(self.D_in, self.H),
            nn.Tanh(),
            nn.Linear(self.H, self.D_out),
            nn.Tanh()
        )

    def forward(self, input_ids, attention_mask):
        # Feed input to BERT
        outputs = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask)
        # Extract the last hidden state of the `[CLS]` token for the classification task
        last_hidden_state_cls = outputs[0][:, 0, :]
        # Feed it to the classifier head to compute logits
        logits = self.classifier(last_hidden_state_cls)
        return logits
```
```
configuration = AutoConfig.from_pretrained('Rostlab/prot_bert_bfd')
model = BertClassifier(config=configuration, freeze_bert=False)
```
after training
```
model.save_pretrained('path')
```
### Expected behavior
```
If I print the model after `model = AutoModel.from_pretrained('path')`, I get the following as the last layers, and my two linear layers are missing:
(output): BertOutput(
(dense): Linear(in_features=4096, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.0, inplace=False)
(adapters): ModuleDict()
(adapter_fusion_layer): ModuleDict()
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(activation): Tanh()
)
(prefix_tuning): PrefixTuningPool(
(prefix_tunings): ModuleDict()
)
)
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17559/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17558
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17558/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17558/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17558/events
|
https://github.com/huggingface/transformers/pull/17558
| 1,260,912,305
|
PR_kwDOCUB6oc45HqcO
| 17,558
|
Spanish Docs - fix gendered sentence
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a gendered sentence in [es/index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/es/index.mdx).
## Notes
FYI @osanseviero, I believe this was the sentence that was still gendered in the Spanish docs :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17558/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17558",
"html_url": "https://github.com/huggingface/transformers/pull/17558",
"diff_url": "https://github.com/huggingface/transformers/pull/17558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17558.patch",
"merged_at": 1654603780000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17557
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17557/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17557/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17557/events
|
https://github.com/huggingface/transformers/issues/17557
| 1,260,897,845
|
I_kwDOCUB6oc5LJ8Y1
| 17,557
|
T5ForConditionalGeneration does not require resize_position_embeddings when input sequence length is longer than 512?
|
{
"login": "mshen2",
"id": 23084582,
"node_id": "MDQ6VXNlcjIzMDg0NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/23084582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mshen2",
"html_url": "https://github.com/mshen2",
"followers_url": "https://api.github.com/users/mshen2/followers",
"following_url": "https://api.github.com/users/mshen2/following{/other_user}",
"gists_url": "https://api.github.com/users/mshen2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mshen2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mshen2/subscriptions",
"organizations_url": "https://api.github.com/users/mshen2/orgs",
"repos_url": "https://api.github.com/users/mshen2/repos",
"events_url": "https://api.github.com/users/mshen2/events{/privacy}",
"received_events_url": "https://api.github.com/users/mshen2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patrickvonplaten ",
"Hey @mshen2,\r\n\r\nI don't think BART uses relative position embeddings, but rather \"fixed\" position embeddings (\"fixed\" in the sence that if seq_len > 1024 is provided the model gives an index error).\r\n\r\nCould you maybe look into this line of code in BART: https://github.com/huggingface/transformers/blob/66e8656778392609e1fb769f1a0d0839af3cd76a/src/transformers/models/bart/modeling_bart.py#L718 -> it shows that the position ids are a fixed-size matrix\r\n\r\nAlso cc @patil-suraj here",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
NONE
| null |
Hi, thanks in advance! I am looking at run_summarization.py under examples/pytorch/summarization/, specifically the following code snippet, where I want to set `max_source_length` bigger than 512 (the max length T5 was pre-trained on):
```
if (
    hasattr(model.config, "max_position_embeddings")
    and model.config.max_position_embeddings < data_args.max_source_length
):
    if model_args.resize_position_embeddings is None:
        logger.warning(
            "Increasing the model's number of position embedding vectors from"
            f" {model.config.max_position_embeddings} to {data_args.max_source_length}."
        )
        model.resize_position_embeddings(data_args.max_source_length)
    elif model_args.resize_position_embeddings:
        model.resize_position_embeddings(data_args.max_source_length)
    else:
        raise ValueError(
            f"`--max_source_length` is set to {data_args.max_source_length}, but the model only has"
            f" {model.config.max_position_embeddings} position encodings. Consider either reducing"
            f" `--max_source_length` to {model.config.max_position_embeddings} or to automatically resize the"
            " model's position encodings by passing `--resize_position_embeddings`."
        )
```
My questions are:
1. I remember `T5Config` used to have a `max_position_embeddings` parameter (set to 512); why has it been removed?
2. In the script, the default `max_sequence_length` is set to 1024. Since that is bigger than 512, why is it no longer required to call `resize_position_embeddings`, as it was in this issue: https://github.com/huggingface/transformers/issues/5204#issuecomment-648045999
3. BART also uses relative position embeddings like T5, but `BartConfig` keeps `max_position_embeddings` at 1024, and when setting `max_source_length` longer than 1024 it does require calling `resize_position_embeddings` according to the code snippet above. Is this because BART and T5 use different relative position embedding schemes?
I think I must be misunderstanding something; I'd appreciate any explanation. Thanks!!
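For context on the T5 side (a simplified, hypothetical sketch, not the actual `modeling_t5` code, which also handles a bidirectional flag): T5 has no absolute position embedding table at all — its attention layers add a learned relative-position bias, where each relative distance maps to one of a fixed number of buckets. Any distance, however large, lands in a valid bucket, which is why there is no `max_position_embeddings` to resize. A rough version of that bucketing:

```python
import math

def relative_position_bucket(relative_position, num_buckets=32, max_distance=128):
    """Map a non-negative relative distance to a bucket id in [0, num_buckets).
    Small distances get their own bucket; larger ones share log-spaced buckets;
    distances beyond max_distance all share the last bucket."""
    max_exact = num_buckets // 2
    if relative_position < max_exact:
        return relative_position
    # Log-spaced buckets for larger distances
    bucket = max_exact + int(
        math.log(relative_position / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    )
    return min(bucket, num_buckets - 1)

# Every distance, however large, lands in a valid bucket:
assert relative_position_bucket(5) == 5
assert relative_position_bucket(10_000) == 31
```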
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17557/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17555
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17555/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17555/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17555/events
|
https://github.com/huggingface/transformers/pull/17555
| 1,260,695,732
|
PR_kwDOCUB6oc45G_5v
| 17,555
|
Fixes the LevitIntegrationTest
|
{
"login": "AnugunjNaman",
"id": 42839570,
"node_id": "MDQ6VXNlcjQyODM5NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnugunjNaman",
"html_url": "https://github.com/AnugunjNaman",
"followers_url": "https://api.github.com/users/AnugunjNaman/followers",
"following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}",
"gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions",
"organizations_url": "https://api.github.com/users/AnugunjNaman/orgs",
"repos_url": "https://api.github.com/users/AnugunjNaman/repos",
"events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnugunjNaman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
There was a mismatch of logits which hadn't been corrected. It's fixed now.
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17555/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17555",
"html_url": "https://github.com/huggingface/transformers/pull/17555",
"diff_url": "https://github.com/huggingface/transformers/pull/17555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17555.patch",
"merged_at": 1654516053000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17554
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17554/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17554/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17554/events
|
https://github.com/huggingface/transformers/pull/17554
| 1,260,624,661
|
PR_kwDOCUB6oc45Gy1M
| 17,554
|
TF implementation of RegNets
|
{
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts \r\n\r\nIf we run the following:\r\n\r\n```\r\nfrom PIL import Image\r\nimport numpy as np\r\n\r\nfrom src.transformers.models.regnet.modeling_tf_regnet import (\r\n TFRegNetForImageClassification\r\n)\r\nfrom transformers import AutoFeatureExtractor\r\n\r\ndef prepare_img():\r\n image = Image.open(\"./tests/fixtures/tests_samples/COCO/000000039769.png\")\r\n return image\r\n\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(\"facebook/regnet-y-040\")\r\nmodel = TFRegNetForImageClassification.from_pretrained(\"facebook/regnet-y-040\", from_pt=True)\r\n\r\nimage = prepare_img()\r\ninputs = feature_extractor(images=image, return_tensors=\"tf\") \r\noutputs = model(**inputs, training=False)\r\n\r\nprint(outputs.logits.shape)\r\n\r\nexpected_slice = np.array([-0.4180, -1.5051, -3.4836])\r\n\r\nnp.testing.assert_allclose(outputs.logits[0, :3].numpy(), expected_slice, atol=1e-4)\r\n```\r\n\r\nFirst, it complains the `moving_mean` and `moving_variance` params are not loaded properly.\r\n\r\nWe tested your solution in https://github.com/huggingface/transformers/pull/17571. With that, we're running into mismatches of `num_batches_tracked` and even `moving_mean`. It also complains about some of the mismatches stemming from the `shortcut` layer which wasn't the case for the earlier setup.\r\n\r\nDo you have any thoughts? ",
"Hi @sayakpaul \r\n\r\nCould you give a bit more information about the mismatches i.e. the printouts you're currently getting?\r\n\r\nRegarding `num_batches_tracked`, I don't believe this parameter will ever be cross-loaded into a `tf.keras.layers.BatchNormalization` layer as there isn't an equivalent parameter. This is only important if the corresponding PyTorch batch norm layer doesn't have its momentum set c.f. [param updates](https://github.com/pytorch/pytorch/blob/67badf0d5cefeb0d39767609e78aa5ff668a262e/torch/nn/modules/batchnorm.py#L149), which you'll need to verify for this model. I suggest looking at the implementations of both the [TF](https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/layers/normalization/batch_normalization.py#L1107-L1249) and PyTorch layer to see when/if these differences are important. If the parameter is necessary, then I think one approach might be subclassing to build a new layer and include the parameter as a registered weight + any necessary logic to use it, but I'm not sure at the moment. ",
"I tried debugging this today but no luck yet. But here's some information for all of us to navigate this through:\r\n\r\n* Amending [`src/transformers/modeling_tf_pytorch_utils.py`](https://github.com/huggingface/transformers/pull/17554/files#diff-67993ece845388c9a9a1f342f4c82a2ed7f790f454134c39c3a63146616ea37a) (following https://github.com/huggingface/transformers/pull/17571) resulted in this: https://pastebin.com/0CZJmvzh.\r\n* `num_batches_tracked` is likely not needed, I don't suspect that to be a trained parameter anyway. However, happy to stand corrected. \r\n* But what is surprising is even after incorporating the changes from https://github.com/huggingface/transformers/pull/17571 there's a complaint about `moving_mean` and `moving_variance`. \r\n* There's also a complaint about `convolution` params.\r\n\r\nAll these mismatches seem to be stemming from the `layers.0` of RegNet stages. Mismatches stemming from other `layers` (`layers.2` for example) are related to `num_batches_tracked`. \r\n\r\nThe test used to gather this information is the same one as mentioned in https://github.com/huggingface/transformers/pull/17554#issuecomment-1147700055. \r\n\r\n@amyeroberts ",
"@sayakpaul Thanks for your detailed update. Comments below:\r\n\r\n1. OK - thanks for posting that it really helps!\r\n\r\n2. `num_batches_tracked` isn't trainable, but it is updated during training. As I mentioned above, if the layer has `momentum` set (it's not `None`) then you can ignore it. However, if `momentum` isn't set, then the layer uses `num_batches_tracked` to update the `running_mean` and `running_var` calculations, which are used during evaluation to normalize the batch. You can quickly check if the momentum is set for the batchnorm layers running something like `all([x.momentum is not None for x in model.modules() if isinstance(x, nn.BatchNorm2d)])`. \r\n\r\n3. Looking at the printout you pasted above, it says `All the weights of TFRegNetForImageClassification were initialized from the PyTorch model.`. If this is the case, and some of the PyTorch weights weren't used, it makes me think some layers might be missing in your implementation. I would look at the two architectures and see if they differ anywhere. ",
"@amyeroberts a quick update:\r\n\r\n* `momentum` is actually not set. This is why we need to also retrieve `num_batches_tracked` too. We need to figure out a way to factor it in to use with `layers.BatchNormalization` in TensorFlow.\r\n* The TF model has a fewer number of params than the PT model so we'll look into why this is the case. One immediate reason would be the absence of `num_batches_tracked`. But that contributes a very small difference. We currently have 629440 fewer parameters in the TF model than the PT one. ",
"@sayakpaul Thanks for the update! \r\n\r\n* OK, this makes things a bit more difficult. Let me know if you want any help for this step. It's something that will likely need to be done in other PT -> TF ports so definitely valuable to the community if you added this!\r\n\r\n* It might be easier to print out the weight names instead of comparing number of parameters. The porting code works on the names, and so seeing where the two models differ can really help pinpoint what's happening. What I typically do is use the porting code to convert the tensorflow weight names and compare the two sets. For this model, it would look something like: \r\n```\r\nfrom transformers import RegNetForImageClassification\r\n# import directly once __init__ files updated\r\nfrom transformers.models.regnet.modeling_tf_regnet import TFRegNetForImageClassification \r\nfrom transformers.modeling_tf_pytorch_utils import convert_tf_weight_name_to_pt_weight_name\r\n\r\ncheckpoint = \"facebook/regnet-y-040\"\r\ntf_model = TFRegNetForImageClassification.from_pretrained(checkpoint, from_pt=True)\r\npt_model = RegNetForImageClassification.from_pretrained(checkpoint)\r\n\r\ntf_model_weights = set([convert_tf_weight_name_to_pt_weight_name(x.name)[0] for x in tf_model.trainable_variables])\r\npt_model_weights = set(pt_model.state_dict().keys())\r\n\r\nprint(tf_model_weights - pt_model_weights)\r\nprint(pt_model_weights - tf_model_weights)\r\n```",
"Thanks for the suggestions. Will try them out and update.",
"@amyeroberts \r\n\r\nI had to do a few minor modifications to your snippet in https://github.com/huggingface/transformers/pull/17554#issuecomment-1150933208:\r\n\r\n```\r\ntf_model_weights = set(\r\n [\r\n convert_tf_weight_name_to_pt_weight_name(x.name)[0]\r\n for x in tf_model.trainable_variables + tf_model.non_trainable_variables\r\n ]\r\n)\r\npt_model_weights = set(pt_model.state_dict().keys())\r\ntf_model_weights_new = set()\r\n\r\nfor name in tf_model_weights:\r\n if \"moving_mean\" in name:\r\n name = name.replace(\"moving_mean\", \"running_mean\")\r\n elif \"moving_variance\" in name:\r\n name = name.replace(\"moving_variance\", \"running_var\")\r\n tf_model_weights_new.add(name)\r\n\r\n\r\nprint(f\"Differences in the TF model and PT model: {tf_model_weights_new - pt_model_weights}\")\r\nprint(f\"Differences in the PT model and TF model: {pt_model_weights - tf_model_weights_new}\")\r\nprint(f\"Total weights differing: {len(pt_model_weights - tf_model_weights_new)}\")\r\n```\r\n\r\n`convert_tf_weight_name_to_pt_weight_name()` doesn't change the `moving_mean` and `moving_variance` to `running_mean` and `running_var` respectively. Instead, currently, it's handled [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/modeling_tf_pytorch_utils.py#L160-#L172) so that [this query](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/modeling_tf_pytorch_utils.py#L205) is successful. \r\n\r\nWith this change, the result of `pt_model_weights - tf_model_weights_new` is exactly matching with the complaint: \r\n\r\n```\r\nSome weights of the PyTorch model were not used when initializing the TF 2.0 model TFRegNetForImageClassification ...\r\n```\r\n\r\n(Full output [here](https://pastebin.com/cg10ET1c)). \r\n\r\nI have gone over the `modeling_tf_regnet.py` script a couple of times but I don't yet know what I can do here. Let me know what you usually do when you have these differences. ",
"Also an oversight on my end in reporting `momentum` in https://github.com/huggingface/transformers/pull/17554#issuecomment-1150851986.\r\n\r\n`all([x.momentum is not None for x in model.modules() if isinstance(x, nn.BatchNorm2d)])` actually gives `True` which means it's okay to ignore `num_batches_tracked`. ",
"@amyeroberts we were able to rectify the model implementation and make it work. The integration test (mentioned in https://github.com/huggingface/transformers/pull/17554#issuecomment-1147700055) is passing now.\r\n\r\nThe tests, however, are failing for a weird reason:\r\n\r\n```\r\nParameter config in `TFRegNetModel(config)` should be an instance of class `PretrainedConfig`. To create a model from a pretrained model use `model = TFRegNetModel.from_pretrained(PRETRAINED_MODEL_NAME)`\r\n```\r\n\r\nWeird because we tested a couple of things in isolation:\r\n\r\n```py\r\nfrom transformers import RegNetConfig\r\n\r\nconfig_class = RegNetConfig()\r\n\r\nprint(f\"RegNet Config class type: {type(config_class)}.\")\r\nprint(f\"RegNet Config is an instance of PretrainedConfig: {isinstance(config_class, PretrainedConfig)}\")\r\n```\r\n\r\nThe final print statement gives `True`. But when we do the following:\r\n\r\n```py\r\nfrom src.transformers.models.regnet.modeling_tf_regnet import TFRegNetForImageClassification, TFRegNetModel\r\n\r\nclass_from_config = TFRegNetModel(config_class)\r\nprint(\"Model class from config was initialized.\")\r\n```\r\n\r\nit complains:\r\n\r\n```\r\nParameter config in `TFRegNetModel(config)` should be an instance of class `PretrainedConfig`. To create a model from a pretrained model use `model = TFRegNetModel.from_pretrained(PRETRAINED_MODEL_NAME)`\r\n```\r\n\r\nDo you have any suggestions for this?",
"@sgugger @Rocketknight1 the PR is now ready for review. \r\n\r\nThis particular model actually has the largest vision model checkpoint available to date: https://huggingface.co/facebook/regnet-y-10b-seer. It's still in PyTorch and the corresponding model makes use of the `low_cpu_usage` argument. \r\n\r\nI had a chat with @Rocketknight1 a few days back on the possibility of supporting this checkpoint in TensorFlow too. This will require tweaks and they will be contributed in a separate PR. ",
"@ydshieh I thought I could use your help here. There's something really weird happening here. \r\n\r\nIf I omit the `image_size` argument from the [`RegNetConfig`](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L76) the cross-testing is failing (full [stack-trace](https://pastebin.com/Us6BgKvh)).\r\n\r\n```\r\nRUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 python -m pytest tests/models/regnet/test_modeling_tf_regnet.py\r\n```\r\n\r\nBut if I keep the argument, it runs successfully. The only use of `image_size` is [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/modeling_tf_regnet.py#L425). \r\n\r\nIn any case, the PyTorch cross-test of the same model is failing ([full stack-trace with `image_size` set, trace without `image_size` set](https://pastebin.com/LjmPjqpZ)):\r\n\r\n```\r\n RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 python -m pytest tests/models/regnet/test_modeling_regnet.py\r\n```\r\n\r\nHave you got any suggestions? \r\n\r\nCc: @Rocketknight1 \r\n",
"Hey @sayakpaul \r\n\r\nBy `I omit the image_size argument from the` [RegNetConfig](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L76), could you specify where you call config without passing `image_size `. I guess you mean somewhere in the TF Reg test file, but which line exactly 🙏 ?\r\n\r\n`dummy_inputs` is also used in https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/src/transformers/modeling_tf_utils.py#L1975\r\n\r\nto build the network, so the weights (with random values) will present in order to load the real weights. My best guess is that without specifying somewhere, the model is initialized to handle size 224 images, and it causes some shape inconsistency issue for your test. For more detailed investigation, I need to know where you don't specify this argument. But probably you are able to figure this out with this info. maybe?",
"BTW, why you need `pastebin` to post the traceback? A link to the CircleCI job run page is ok, no?",
"@ydshieh thanks for your inputs. \r\n\r\nSo, what I meant is I just comment [this line](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L76) and [this line](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/configuration_regnet.py#L89) and just pass a hardcoded value (224) [here](https://github.com/ariG23498/transformers/blob/aritra-regnets/src/transformers/models/regnet/modeling_tf_regnet.py#L425) (`3, self.config.num_channels, 224, 224`). Is this better to understand now? \r\n\r\nSo, if `image_size` specified in the config (like it is currently), the cross-test with TF (`test_modeling_tf_regnet.py`) passes successfully but not with PT (i.e., `test_modeling_regnet.py`). \r\n\r\n\r\n> BTW, why you need `pastebin` to post the traceback? A link to the CircleCI job run page is ok, no?\r\n\r\nThe CI trace seemed clunky so I ran the individual test to keep the outputs cleaner. ",
"The TF implementation of `TFAdaptiveAvgPool1D` (used for `TFAdaptiveAvgPool2D`) is not capable to handle any image size (once being initialized with an input), as its `build` method contains `self.map = tf.constant(sparse_map)`, which determines the input shape it can handle in the subsequent calls.\r\n\r\nI didn't check how PyTorch implements `nn.AdaptiveAvgPool2d((1, 1))`, but I think it is possible not to build `self.map`, but instead to prepare the necessary matrix dynamically in the `call` method.\r\n\r\nRegarding the test, the situation is similar: the method `dummy_inputs` is called in `from_pretrained` method. While hard-coded with `224`, the model can't run with other image size anymore. Even if you add `image_size` argument in `RegNetConfig`, once a \r\n`TFRegNetModel` is run with an input, it can't run with other input shape anymore. This contradicts to the PyTorch implementation.\r\n\r\nMaybe you can figure out a more robust way for TF?\r\n\r\nHere is the code snippet to make things more concrete:\r\n```python\r\nimport torch\r\nimport tensorflow as tf\r\nimport math\r\nfrom typing import List\r\nimport numpy as np\r\n\r\n\r\n# Copied from:\r\n# https://gist.github.com/Rocketknight1/43abbe6e73f1008e6e459486e01e0ceb\r\nclass TFAdaptiveAvgPool1D(tf.keras.layers.Layer):\r\n def __init__(self, output_dim, mode=\"dense\", **kwargs):\r\n super().__init__(**kwargs)\r\n self.output_dim = output_dim\r\n self.mode = mode\r\n self.map = None\r\n\r\n def build(self, input_shape):\r\n super().build(input_shape)\r\n \"\"\"We pre-compute the sparse matrix for the build() step once. The below code comes\r\n from https://stackoverflow.com/questions/53841509/how-does-adaptive-pooling-in-pytorch-work/63603993#63603993.\"\"\"\r\n\r\n def get_kernels(ind, outd) -> List:\r\n \"\"\"Returns a List [(kernel_offset_start,kernel_length)] defining all the pooling kernels for a 1-D adaptive\r\n pooling layer that takes an input of dimension `ind` and yields an output of dimension `outd`\"\"\"\r\n\r\n def start_index(a, b, c):\r\n return math.floor((float(a) * float(c)) / b)\r\n\r\n def end_index(a, b, c):\r\n return math.ceil((float(a + 1) * float(c)) / b)\r\n\r\n results = []\r\n for ow in range(outd):\r\n start = start_index(ow, outd, ind)\r\n end = end_index(ow, outd, ind)\r\n sz = end - start\r\n results.append((start, sz))\r\n return results\r\n\r\n in_dim = int(input_shape[-1])\r\n kernels = get_kernels(in_dim, self.output_dim)\r\n sparse_map = np.zeros((in_dim, self.output_dim), dtype=np.float32)\r\n for i, kernel in enumerate(kernels):\r\n sparse_map[kernel[0] : kernel[0] + kernel[1], i] = 1 / kernel[1]\r\n if self.mode == \"dense\":\r\n self.map = tf.constant(sparse_map)\r\n else:\r\n self.map = tf.sparse.from_dense(sparse_map)\r\n\r\n def call(self, inputs):\r\n if self.mode == \"dense\":\r\n return inputs @ self.map\r\n else:\r\n input_dims = inputs.shape\r\n input_matrix = tf.reshape(inputs, (-1, input_dims[-1]))\r\n out = tf.sparse.sparse_dense_matmul(input_matrix, self.map)\r\n return tf.reshape(out, input_dims[:-1].as_list() + [-1])\r\n\r\n\r\nclass TFAdaptiveAvgPool2D(tf.keras.layers.Layer):\r\n def __init__(self, output_shape, mode=\"dense\", **kwargs):\r\n super().__init__(**kwargs)\r\n self.w_pool = TFAdaptiveAvgPool1D(output_shape[1], mode=mode)\r\n\r\n def call(self, inputs):\r\n # Rearrange from NHWC -> NCHW\r\n inputs = tf.transpose(inputs, perm=[0, 3, 1, 2])\r\n # Perform W-pooling\r\n inputs = self.w_pool(inputs)\r\n\r\n\r\npt_2d_pooler = torch.nn.AdaptiveAvgPool2d((1, 1))\r\ntf_1d_pooler = TFAdaptiveAvgPool1D(output_dim=1, mode=\"dense\")\r\n\r\n# For image size 224\r\nN, C, H, W = (3, 10, 56, 56)\r\nnp_input_224_56 = np.random.random(size=(N, C, H, W))\r\npt_input_224_56 = torch.tensor(np_input_224_56)\r\ntf_input_224_56 = tf.constant(np_input_224_56)\r\n\r\n# For image size 32\r\nN, C, H, W = (3, 10, 8, 8)\r\nnp_input_32_8 = np.random.random(size=(N, C, H, W))\r\npt_input_32_8 = torch.tensor(np_input_32_8)\r\ntf_input_32_8 = tf.constant(np_input_32_8)\r\n\r\n# 1st run: pt OK\r\npt_o = pt_2d_pooler(pt_input_224_56)\r\nprint(pt_o.shape)\r\n\r\n# 1st run: tf OK\r\ntf_o = tf_1d_pooler(tf_input_224_56)\r\nprint(tf_o.shape)\r\nprint(f\"tf_1d_pooler.map has shape: {tf_1d_pooler.map.shape}\")\r\n\r\n\r\n# 2nd run: pt OK\r\npt_o = pt_2d_pooler(pt_input_32_8)\r\nprint(pt_o.shape)\r\n\r\n# 2nd run: tf failed\r\ntf_o = tf_1d_pooler(tf_input_32_8)\r\nprint(tf_o.shape)\r\n\r\n\r\n```",
"Thank you so much @ydshieh! Really appreciate this. \r\n\r\n> Regarding the test, the situation is similar: the method dummy_inputs is called in from_pretrained method. While hard-coded with 224, the model can't run with other image size anymore. Even if you add image_size argument in RegNetConfig, once a\r\nTFRegNetModel is run with an input, it can't run with other input shape anymore. This contradicts to the PyTorch implementation.\r\n\r\nI see. I am still a little unsure as to why the cross tests in TF would run then. \r\n\r\nAlso ccing @Rocketknight1 for https://github.com/huggingface/transformers/pull/17554#issuecomment-1159556670.",
"\r\n> I see. I am still a little unsure as to why the cross tests in TF would run then.\r\n\r\nIt runs successfully only if the config has the argument `image_size` (..right?). Is this where you have the question?\r\n\r\n",
"It runs successfully (the cross-test) with the `image_size` specified in the config. If that is the case, why the PyTorch cross-test would fail. This is my question. Sorry if it wasn't clear previously. ",
"In `RegNetModelTester`, the `get_config` method doesn't use `image_size`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/6fdcc6dcb5b9396aa4481513c69cc89a22c5533f/tests/models/regnet/test_modeling_regnet.py#L84-L92\r\n\r\nEven if the `image_size` argument is added to `RegNetConfig` with a default value `224`, the test doesn't pass `self.image_size (32)` to it.\r\nTherefore, the TF model will get `224` for `dummy_inputs`, but the subsequent calls in the test prepare image size 32 for testing. That's why it fails.\r\n\r\nIn TF test, you (also) added `image_size=self.image_size, ` to `get_config`\r\n\r\nhttps://github.com/huggingface/transformers/blob/6fdcc6dcb5b9396aa4481513c69cc89a22c5533f/tests/models/regnet/test_modeling_tf_regnet.py#L81-L90\r\n\r\nwhich is why it works.",
"Thanks, @ydshieh. That solved the problem. \r\n\r\n@sgugger @Rocketknight1 the tests should pass now but @ydshieh pointed out a potential concern here: https://github.com/huggingface/transformers/pull/17554#issuecomment-1159556670. Does it make sense to tackle it in a separate PR given Adaptive Average Pooling impacts quite a few models (RegNet, ResNet, Swin, etc.)? ",
"Ugh, yes. The layer precomputes a map for the input and output shapes at build() time, and will break if you pass inputs with different shapes to the layer afterwards.\r\n\r\nWith the implementation as written, I think it will be quite difficult to generate the `sparse_map` in the `call()` method. The reason is that there's a lot of computation that's just happening on the CPU to make it, which is fine as a once-off task in the `init`, but that won't really work if it compiled into the graph.\r\n\r\nI think this is a sign that we might have to write a proper `AdaptivePool` op for TF and make a PR to TFA, which I was talking with @amyeroberts about.",
"> With the implementation as written, I think it will be quite difficult to generate the `sparse_map` in the `call()` method.\r\n\r\nDo you think it is necessary to keep the logic for the sparse part?",
"We can absolutely drop the actual sparse matrix, however we'll still need to compute the dense matrix (which is called `sparse_map` because it's mostly zeros)",
"@Rocketknight1 WDYT we should do about this PR given the current state? Should we wait for ...\r\n\r\n> I think this is a sign that we might have to write a proper AdaptivePool op for TF and make a PR to TFA, which I was talking with @amyeroberts about.\r\n\r\n... or is there anything I can do at my end? ",
"@sayakpaul Hang on! I have a new implementation for `AdaptivePool` that I think will resolve some of these issues, and I should be able to finish it by today.",
"@sayakpaul I have an implementation [here](https://gist.github.com/Rocketknight1/b0baa8236f379b811fc6bce3da05cc2b), working on testing and optimizations now",
"@sayakpaul added comments as requested: https://gist.github.com/Rocketknight1/efc47242914788def0144b341b1ad638",
"> @sayakpaul added comments as requested: https://gist.github.com/Rocketknight1/efc47242914788def0144b341b1ad638\r\n\r\nJust read through. Excellent 👌"
] | 1,654
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
In this PR, we (w/ @sayakpaul) are porting the RegNets model to TensorFlow.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17554/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17554",
"html_url": "https://github.com/huggingface/transformers/pull/17554",
"diff_url": "https://github.com/huggingface/transformers/pull/17554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17554.patch",
"merged_at": 1656506714000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17553
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17553/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17553/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17553/events
|
https://github.com/huggingface/transformers/pull/17553
| 1,260,425,778
|
PR_kwDOCUB6oc45GIUs
| 17,553
|
[deepspeed / testing] reset global state
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
This PR:
- adds a reset of the global state at the end of each in-pytest DeepSpeed test (and an API to do that)
- fixes `test_load_best_model_zero2_fp16` to run the `TrainingArguments` first so that the DeepSpeed config state is set up correctly
cc: @ydshieh, who discovered the issues on CI
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17553/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17553",
"html_url": "https://github.com/huggingface/transformers/pull/17553",
"diff_url": "https://github.com/huggingface/transformers/pull/17553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17553.patch",
"merged_at": 1654526965000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17552
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17552/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17552/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17552/events
|
https://github.com/huggingface/transformers/pull/17552
| 1,260,245,732
|
PR_kwDOCUB6oc45FhnV
| 17,552
|
Add examples telemetry
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for working on this @sgugger - that's super useful!"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a function to send telemetry to help us track the examples usage and uses it in the current examples. For now, I've just added in the PyTorch `run_glue.py`, but will paste it in all other examples if you agree with the format/data tracked.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17552/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17552/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17552",
"html_url": "https://github.com/huggingface/transformers/pull/17552",
"diff_url": "https://github.com/huggingface/transformers/pull/17552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17552.patch",
"merged_at": 1654617472000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17551
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17551/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17551/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17551/events
|
https://github.com/huggingface/transformers/issues/17551
| 1,260,203,927
|
I_kwDOCUB6oc5LHS-X
| 17,551
|
Character limit when tokenizing?
|
{
"login": "luadamek",
"id": 40247176,
"node_id": "MDQ6VXNlcjQwMjQ3MTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/40247176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luadamek",
"html_url": "https://github.com/luadamek",
"followers_url": "https://api.github.com/users/luadamek/followers",
"following_url": "https://api.github.com/users/luadamek/following{/other_user}",
"gists_url": "https://api.github.com/users/luadamek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luadamek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luadamek/subscriptions",
"organizations_url": "https://api.github.com/users/luadamek/orgs",
"repos_url": "https://api.github.com/users/luadamek/repos",
"events_url": "https://api.github.com/users/luadamek/events{/privacy}",
"received_events_url": "https://api.github.com/users/luadamek/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @SaulLu ",
"Hi @luadamek ,\r\n\r\nThank you for your detailed issue! I think you found a limitation of the Wordpiece model of the `tokenizers` library. \r\n\r\nIndeed, looking at the content of the tokens we can see that in the last case the text is identified as unknown:\r\n```python\r\nfrom transformers import DistilBertTokenizerFast\r\ntokenizer = DistilBertTokenizerFast.from_pretrained(\"distilbert-base-multilingual-cased\")\r\ntokens = tokenizer([\"Hello\",\"Hello\"*20, \"Hello\"*26])\r\n\r\nfor _input_ids in tokens.input_ids:\r\n print(tokenizer.convert_ids_to_tokens(_input_ids))\r\n\r\n# ['[CLS]', 'Hello', '[SEP]']\r\n# ['[CLS]', 'Hello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '##H', '##ello', '[SEP]']\r\n# ['[CLS]', '[UNK]', '[SEP]']\r\n```\r\nPersonally I think that this is a trade-off that was made for performance reasons. If you think this is a problem worth discussing further, the best thing to do would be to open an issue on the library that codes the model: https://github.com/huggingface/tokenizers. :relaxed: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @SaulLu \r\n\r\nThat makes sense. Apologies for the late reply. I was worried there might be a dirtier bug hiding underneath this. If it's just for performance reasons, this seems totally reasonable.\r\n\r\nThanks!"
] | 1,654
| 1,661
| 1,661
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")
tokens = tokenizer(["Hello","Hello"*20, "Hello"*26], truncation=True)
### Expected behavior
```shell
I noticed that when using DistilBertTokenizerFast, there appears to be a character limit on tokenization for a single word. For example:
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")
tokens = tokenizer(["Hello","Hello"*20, "Hello"*26], truncation=True)
returns:
{'input_ids': [[101, 31178, 102],
[101, 31178, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 12396, 24829, 102],
[101, 100, 102]],
'attention_mask': [[1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1]]}
The third word here was assigned three tokens, which doesn't seem right to me. The second word was assigned many more tokens when the character length was at 100. Is this the intended behaviour of the tokenizer? This seems to happen with words longer than 100 characters.
```
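The behavior above matches the WordPiece trade-off @SaulLu describes in the comments: any word longer than `max_input_chars_per_word` (100 by default) is mapped straight to `[UNK]` without a sub-word search. Below is a minimal pure-Python sketch of the greedy longest-match algorithm; the toy vocab is an illustrative assumption, not the model's real vocabulary.

```python
def wordpiece_tokenize(word, vocab, max_input_chars_per_word=100, unk="[UNK]"):
    """Greedy longest-match WordPiece, mirroring the `tokenizers` trade-off:
    words longer than the limit skip the sub-word search entirely."""
    if len(word) > max_input_chars_per_word:
        return [unk]
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            # Non-initial pieces carry the "##" continuation prefix.
            candidate = word[start:end] if start == 0 else "##" + word[start:end]
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

toy_vocab = {"Hello", "##H", "##ello"}
print(wordpiece_tokenize("Hello" * 20, toy_vocab))  # ['Hello', '##H', '##ello', ...]
print(wordpiece_tokenize("Hello" * 26, toy_vocab))  # ['[UNK]'] (130 chars > 100)
```

With a 100-character word the greedy search still runs (producing the long `##H`/`##ello` sequence seen above), while the 130-character word falls into the `[UNK]` fast path.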
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17551/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17550
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17550/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17550/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17550/events
|
https://github.com/huggingface/transformers/pull/17550
| 1,260,203,882
|
PR_kwDOCUB6oc45FYlk
| 17,550
|
[deepspeed] fix load_best_model test
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
Fixes https://github.com/huggingface/transformers/pull/17151 to run `from_pretrained` with an emulated distributed env.
While it was working on my setup, on CI it failed with:
```
tests/deepspeed/test_deepspeed.py:756: in test_load_best_model
model = T5ForConditionalGeneration.from_pretrained(T5_TINY)
src/transformers/modeling_utils.py:2116: in from_pretrained
init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py:693: in __init__
self.local_device = torch.device('cuda:{}'.format(os.environ["LOCAL_RANK"]))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = environ({'NPP_VERSION': '11.3.2.139', 'NVIDIA_VISIBLE_DEVICES': 'all', 'DALI_BUILD': '2054952', 'GITHUB_WORKSPACE': '/...RRENT_TEST': 'tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_best_model_zero3_fp16 (call)'})
key = 'LOCAL_RANK'
def __getitem__(self, key):
try:
value = self._data[self.encodekey(key)]
except KeyError:
# raise KeyError with the original key value
> raise KeyError(key) from None
E KeyError: 'LOCAL_RANK'
```
The test is exactly the same; I just moved a big chunk of it into the `with mockenv_context` block. No code changes.
@sgugger
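For context, here is a rough sketch of what a `mockenv_context` helper like the one used in this test does; this is an assumed simplification, not the exact implementation in `transformers.testing_utils`. It temporarily injects env vars such as `LOCAL_RANK` so code paths like `deepspeed.zero.Init` find the distributed-run variables they expect.

```python
import os
from contextlib import contextmanager
from unittest import mock

@contextmanager
def mockenv_context(**env):
    # Temporarily overlay env vars (values coerced to str, as os.environ
    # requires); the original environment is restored on exit, even if
    # the body raises.
    with mock.patch.dict(os.environ, {k: str(v) for k, v in env.items()}):
        yield

with mockenv_context(LOCAL_RANK=0, WORLD_SIZE=1):
    assert os.environ["LOCAL_RANK"] == "0"  # visible inside the block
```

Running `from_pretrained` inside the block is what makes the `KeyError: 'LOCAL_RANK'` above go away, since the emulated env vars are only defined within the context.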
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17550/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17550",
"html_url": "https://github.com/huggingface/transformers/pull/17550",
"diff_url": "https://github.com/huggingface/transformers/pull/17550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17550.patch",
"merged_at": 1654280343000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17549
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17549/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17549/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17549/events
|
https://github.com/huggingface/transformers/pull/17549
| 1,260,202,730
|
PR_kwDOCUB6oc45FYWj
| 17,549
|
fix `train_new_from_iterator` in the case of byte-level tokenizers
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR allows using `train_new_from_iterator` when the original tokenizer backend uses ByteLevel pre-tokenization. Before this fix, the learned vocabulary wasn't correct because the initial bytes were missing.
Fixes #17371
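To make the failure mode concrete, here is a toy, purely illustrative sketch (not the actual `tokenizers` trainer API): byte-level BPE training must be seeded with all 256 byte symbols, otherwise bytes absent from the training iterator can never be encoded afterwards.

```python
def train_toy_vocab(corpus_words, seed_alphabet=()):
    # Toy stand-in for vocabulary learning: the result is the seed alphabet
    # plus every character actually seen in the training iterator.
    vocab = set(seed_alphabet)
    for word in corpus_words:
        vocab.update(word)
    return vocab

# 256 initial byte symbols (latin-1 maps each byte value to one character).
byte_alphabet = [bytes([b]).decode("latin-1") for b in range(256)]

without_seed = train_toy_vocab(["hello"])
with_seed = train_toy_vocab(["hello"], seed_alphabet=byte_alphabet)
assert "z" not in without_seed and "z" in with_seed
```

Without the seed alphabet, any byte missing from the corpus is uncoverable, which is the gap this PR closes for retrained byte-level tokenizers.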
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Would love to have the feedback of @LysandreJik and @sgugger on the tokenizer part and @Narsil on the pipeline tests (and also the tokenizer if you have more time!)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17549/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17549",
"html_url": "https://github.com/huggingface/transformers/pull/17549",
"diff_url": "https://github.com/huggingface/transformers/pull/17549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17549.patch",
"merged_at": 1654695041000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17548
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17548/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17548/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17548/events
|
https://github.com/huggingface/transformers/pull/17548
| 1,260,200,515
|
PR_kwDOCUB6oc45FX5K
| 17,548
|
Update index.mdx
|
{
"login": "BritneyMuller",
"id": 5594118,
"node_id": "MDQ6VXNlcjU1OTQxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5594118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BritneyMuller",
"html_url": "https://github.com/BritneyMuller",
"followers_url": "https://api.github.com/users/BritneyMuller/followers",
"following_url": "https://api.github.com/users/BritneyMuller/following{/other_user}",
"gists_url": "https://api.github.com/users/BritneyMuller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BritneyMuller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BritneyMuller/subscriptions",
"organizations_url": "https://api.github.com/users/BritneyMuller/orgs",
"repos_url": "https://api.github.com/users/BritneyMuller/repos",
"events_url": "https://api.github.com/users/BritneyMuller/events{/privacy}",
"received_events_url": "https://api.github.com/users/BritneyMuller/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger I don't think I agree with the suggestion. This is not my area of expertise 😅, but I think HTML elements should be indented (e.g. `<img>` child element indented within the `<a>` parent element), even on a markdown file. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17548). All of your documentation changes will be reflected on that endpoint.",
"It wasn't indented before the first PR and was working perfectly fine.",
"Yes, it works in either case. I think it's usually suggested to use indentation in HTML elements for readability purposes. No strong opinion on this, though!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?

This removes the extra space in front of the new /support image. Thank you for the suggestion @sgugger !
Got too excited to merge the previous image update and missed this housekeeping fix.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17548/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17548",
"html_url": "https://github.com/huggingface/transformers/pull/17548",
"diff_url": "https://github.com/huggingface/transformers/pull/17548.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17548.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17547
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17547/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17547/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17547/events
|
https://github.com/huggingface/transformers/pull/17547
| 1,260,178,698
|
PR_kwDOCUB6oc45FTUP
| 17,547
|
Update index.mdx
|
{
"login": "BritneyMuller",
"id": 5594118,
"node_id": "MDQ6VXNlcjU1OTQxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5594118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BritneyMuller",
"html_url": "https://github.com/BritneyMuller",
"followers_url": "https://api.github.com/users/BritneyMuller/followers",
"following_url": "https://api.github.com/users/BritneyMuller/following{/other_user}",
"gists_url": "https://api.github.com/users/BritneyMuller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BritneyMuller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BritneyMuller/subscriptions",
"organizations_url": "https://api.github.com/users/BritneyMuller/orgs",
"repos_url": "https://api.github.com/users/BritneyMuller/repos",
"events_url": "https://api.github.com/users/BritneyMuller/events{/privacy}",
"received_events_url": "https://api.github.com/users/BritneyMuller/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates our Expert Acceleration Program image with a new image featuring our experts.
This is similar to our [Transformers/README.md image update](https://github.com/huggingface/transformers/pull/16615) that has proven to be successful.

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17547/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17547",
"html_url": "https://github.com/huggingface/transformers/pull/17547",
"diff_url": "https://github.com/huggingface/transformers/pull/17547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17547.patch",
"merged_at": 1654278997000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17546
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17546/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17546/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17546/events
|
https://github.com/huggingface/transformers/pull/17546
| 1,260,107,316
|
PR_kwDOCUB6oc45FD7p
| 17,546
|
fix(typo): Update run_glue_no_trainer.py
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
@sgugger @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17546/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17546",
"html_url": "https://github.com/huggingface/transformers/pull/17546",
"diff_url": "https://github.com/huggingface/transformers/pull/17546.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17546.patch",
"merged_at": 1654273778000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17545
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17545/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17545/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17545/events
|
https://github.com/huggingface/transformers/issues/17545
| 1,260,047,976
|
I_kwDOCUB6oc5LGs5o
| 17,545
|
Repetitive sampling generations from opt1.3b but not from opt350m
|
{
"login": "ZhangShiyue",
"id": 11383558,
"node_id": "MDQ6VXNlcjExMzgzNTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/11383558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhangShiyue",
"html_url": "https://github.com/ZhangShiyue",
"followers_url": "https://api.github.com/users/ZhangShiyue/followers",
"following_url": "https://api.github.com/users/ZhangShiyue/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhangShiyue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhangShiyue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhangShiyue/subscriptions",
"organizations_url": "https://api.github.com/users/ZhangShiyue/orgs",
"repos_url": "https://api.github.com/users/ZhangShiyue/repos",
"events_url": "https://api.github.com/users/ZhangShiyue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhangShiyue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Interesting, thanks for opening the issue! \r\n\r\n@stephenroller @suchenzang have you see similar behavior with the fairseq model ? Could this be due to a bug in porting the model?",
"One thing I'd like to add here is that we enable `topk=50` by default -> does changing this value maybe help? But it indeed looks like a modeling issue",
"Related issue https://github.com/facebookresearch/metaseq/issues/136",
"Should be fixed now in https://github.com/huggingface/transformers/releases/tag/v4.20.1",
"Thanks for the update! After this patch, it should be able to pass the end-to-end regression test between metaseq and huggingface (https://github.com/facebookresearch/metaseq/issues/136)? I dug through the convo and it seems this is on @stephenroller @ArthurZucker @thomasw21 's radar? ❤️ \r\n\r\nIt would be great if we can load the model into metaseq directly without merging if possible, so that we can catch subtle bug in conversion.",
"thanks so much for fixing it! @patrickvonplaten "
] | 1,654
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.0-104-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@LysandreJik, @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import OPTForCausalLM, GPT2Tokenizer
prompt = ["Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?",
"Hey, are you consciours? Can you talk to me?"]
print("=====OPT 1.3b Sampling=====")
model = OPTForCausalLM.from_pretrained("facebook/opt-1.3b")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer(prompt, return_tensors="pt", padding=True)
generate_ids = model.generate(inputs=inputs.input_ids, attention_mask=inputs.attention_mask,
max_new_tokens=100, do_sample=True, temperature=1.0)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True))
'''
Generations are repetitive, e.g.,
["Hey, are you consciours? Can you talk to me? Please?\nI'm sorry but I'm afraid I'm incapable of talking to you :(",
'Hey, are you consciours? Can you talk to me?\nYeah sure thing buddy! Whats your discord?',
"Hey, are you consciours? Can you talk to me?\nI'm sorry I'm not consciours :(",
'Hey, are you consciours? Can you talk to me? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please?',
'Hey, are you consciours? Can you talk to me? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please? Please?']
'''
print("=====OPT 350m Sampling=====")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
inputs = tokenizer(prompt, return_tensors="pt", padding=True)
generate_ids = model.generate(inputs=inputs.input_ids, attention_mask=inputs.attention_mask,
max_new_tokens=100, do_sample=True, temperature=1.0)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True))
'''
Generations look normal, e.g.,
["Hey, are you consciours? Can you talk to me?\nThis is a repost, stop doing it lol\nI'm not trying to repost.... it's just happened to me like that, so I thought I'd do it, too.",
"Hey, are you consciours? Can you talk to me? A few months ago there seemed to be a large influx of new and existing consciourns around Melbourne. All we see at the top are people wanting to participate. They've got no skills, no money, no interest and they're not interested in anything other than the show and being popular. I'm not even kidding about that!\n\nI understand that the show is quite difficult and is full of competition, but does your hobby really need such an appeal?\n\nFor example, my",
"Hey, are you consciours? Can you talk to me?\nThe reason I can't talk to him is because I'm in a hurry. He's at the airport, waiting for his driver to come back (he doesn't drive, and he's not comfortable being in someone's car and trying to take it with him). I think I'm overthinking this one.\nIf you need someone to talk to, please PM me.",
"Hey, are you consciours? Can you talk to me?\nNo I'm not\nMy mistake, I've been looking around for a little while and don't have a chance to look up the exact wording. I just noticed your 'cognizant'.\nno worries and yea it's a common one so let's move on haha",
"Hey, are you consciours? Can you talk to me? I'm not a cop, but I've dealt with one or two\nOf course. I am in Texas, but we're not all like that."]
'''
```
### Expected behavior
```shell
Sampling generations from opt1.3b are often very repetitive (see the example above).
It does not seem to happen by chance. If you run the code multiple times, similar patterns will always appear.
It is unexpected because usually standard sampling should produce diverse results.
Interestingly, I did not see this issue when using opt350m.
```
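As noted in the thread, `generate` enables `top_k=50` by default when sampling. Below is a minimal, dependency-free sketch of top-k sampling with temperature, illustrative only; the real implementation lives in `generate`'s logits processors.

```python
import math
import random

def top_k_sample(logits, k=50, temperature=1.0, rng=random.Random(0)):
    # Keep only the k highest logits, apply temperature, softmax over the
    # survivors, then draw one index from the renormalized distribution.
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in idx]
    m = max(scaled)  # subtract the max for numerical stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    r, acc = rng.random(), 0.0
    for i, p in zip(idx, probs):
        acc += p
        if r <= acc:
            return i
    return idx[-1]

# With k=1 the sample is always the argmax:
assert top_k_sample([0.0, 10.0, -5.0], k=1) == 1
```

Lowering `k` or the temperature sharpens the distribution, so a model whose logits are already peaked (as suspected for the mis-ported 1.3b checkpoint here) degenerates into near-greedy, repetitive output.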
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17545/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17544
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17544/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17544/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17544/events
|
https://github.com/huggingface/transformers/issues/17544
| 1,260,008,570
|
I_kwDOCUB6oc5LGjR6
| 17,544
|
Descriptors cannot not be created directly.
|
{
"login": "juliencarbonnell",
"id": 3306328,
"node_id": "MDQ6VXNlcjMzMDYzMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3306328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliencarbonnell",
"html_url": "https://github.com/juliencarbonnell",
"followers_url": "https://api.github.com/users/juliencarbonnell/followers",
"following_url": "https://api.github.com/users/juliencarbonnell/following{/other_user}",
"gists_url": "https://api.github.com/users/juliencarbonnell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliencarbonnell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliencarbonnell/subscriptions",
"organizations_url": "https://api.github.com/users/juliencarbonnell/orgs",
"repos_url": "https://api.github.com/users/juliencarbonnell/repos",
"events_url": "https://api.github.com/users/juliencarbonnell/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliencarbonnell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"plus one",
"This error doesn't show up when you run the code on google colab. you should try that @gunjan075 in the meantime",
"I have the same problem with the T0 model. My protobuf version is `4.21.1`. Any idea how to fix it? Cannot use Google Colab, only desktop machines.\r\n\r\nEnvironment specs:\r\n```\r\nnumpy==1.22.4\r\nprotobuf==4.21.1\r\nsentencepiece==0.1.96\r\ntokenizers==0.12.1\r\ntorch==1.11.0\r\ntqdm==4.64.0\r\ntransformers==4.19.2\r\n```",
"Update: I used `protobuf==3.20.0` and it worked. It's not ideal but it will do for now.",
"Indeed please make sure to use the correct `protobuf` version . Google's protobuf release broke a lot of codebases - even TF https://github.com/tensorflow/tensorflow/issues/56077 . \r\n\r\nPlease make sure to use `\"protobuf<=3.20.1\"`\r\n\r\nJust FYI @sgugger ",
"This is all fixed on the main branch FYI.",
"There was a patch release v4.19.3 just done to fix this issue FYI. ",
"Awesome\nThank you\n\nOn Thu, Jun 9, 2022, 12:12 Sylvain Gugger ***@***.***> wrote:\n\n> There was a patch release v4.19.3 just done to fix this issue FYI.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/17544#issuecomment-1151388804>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAZHGWG6MCJIES4XVCBN72LVOIQWFANCNFSM5XZIE55A>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"With protobuf 4.21.8 and debertav2, this error still appears\r\n```\r\nAutoTokenizer.from_pretrained('microsoft/deberta-v3-small')\r\n```\r\n\r\n```\r\nTypeError: Descriptors cannot not be created directly.\r\nIf this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.\r\nIf you cannot immediately regenerate your protos, some other possible workarounds are:\r\n 1. Downgrade the protobuf package to 3.20.x or lower.\r\n 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).\r\n```",
"This error still happens for me. Downgrading is not an elegant solution because it breaks other packages that rely on latest version of protobuf e.g. google-cloud-documentai. Someome please fix properly.\r\nI just created a new issue for this. https://github.com/huggingface/transformers/issues/21128",
"I met this issue when using `AutoTokenizer`, I fix it by specific the tokenizer to `LlamaTokenizer`.",
"\r\n\r\n\r\n> I met this issue when using `AutoTokenizer`, I fix it by specific the tokenizer to `LlamaTokenizer`.\r\n\r\nThis absolutely save my day! I meet with this issue for LLMs/vicuna. And this is a useful workaround.",
"pip install protobuf==3.19.4 worked for me"
] | 1,654
| 1,692
| 1,657
|
NONE
| null |
Hi @patrickvonplaten
I'm trying to import the following:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
```
and get the error:
```
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
```
All I can find on the web to fix it is to set `!protobuf==3.20.1`, but it's not working for me. Sorry I can't manage to fix it myself.
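For readers hitting the same error: the maintainers' recommended fix in this thread is to pin `protobuf<=3.20.1` or upgrade to the patched transformers 4.19.3 release. The second workaround from the error message itself can also be applied from Python, as long as the environment variable is set before anything imports protobuf. A minimal sketch:
```python
import os

# Workaround 2 from the protobuf error message: force the pure-Python
# protobuf implementation. Slower, but avoids the descriptor error.
# This must run BEFORE importing transformers (or anything else that
# loads the protobuf C++ bindings).
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
```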
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17544/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17544/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17543
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17543/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17543/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17543/events
|
https://github.com/huggingface/transformers/issues/17543
| 1,259,871,144
|
I_kwDOCUB6oc5LGBuo
| 17,543
|
word_ids() is not available when using FlauBERT
|
{
"login": "rgriot",
"id": 47383574,
"node_id": "MDQ6VXNlcjQ3MzgzNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47383574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgriot",
"html_url": "https://github.com/rgriot",
"followers_url": "https://api.github.com/users/rgriot/followers",
"following_url": "https://api.github.com/users/rgriot/following{/other_user}",
"gists_url": "https://api.github.com/users/rgriot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rgriot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rgriot/subscriptions",
"organizations_url": "https://api.github.com/users/rgriot/orgs",
"repos_url": "https://api.github.com/users/rgriot/repos",
"events_url": "https://api.github.com/users/rgriot/events{/privacy}",
"received_events_url": "https://api.github.com/users/rgriot/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @rgriot ,\r\n\r\nYou do get this error because Flaubert does not have a fast version implemented (yet!) in the library and unfortunately only fast versions support this feature.\r\n\r\nIf you're interested, feel free to work on a PR to add this fast version to FlauBERT! :hugs: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@SaulLu
Hi HuggingFace community,
I'm trying to fine-tune models for a token classification task.
As a French speaker, I want to try different models trained on French or multilingual data.
I succeeded in training CamemBERT!
However, when using FlauBERT, I have an issue when I align labels to the tokens after tokenization.
I used the function:
```python
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:  # Set the special tokens to -100.
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:  # Only label the first token of a given word.
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
```
Even though the function worked perfectly for CamemBERT, I get the following error when using FlauBERT:
`word_ids() is not available when using Python-based tokenizers`
I don't know if it's a tokenizer issue or if I have to write a new function to align the labels.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the error:
1. Import the FlauBERT model
```python
model_name = "flaubert/flaubert_base_cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
2. Run the function
```python
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:  # Set the special tokens to -100.
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:  # Only label the first token of a given word.
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
```
### Expected behavior
```shell
Get a new dataset with the labels align to the tokens after tokenization
```
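As the maintainer's comment above notes, only fast (Rust-backed) tokenizers expose `word_ids()`, and FlauBERT does not have a fast version yet. Until one exists, a possible workaround is to reconstruct the word ids manually by tokenizing each pre-split word with the slow tokenizer. A rough sketch (an assumption, not transformers API: it presumes the tokenizer adds exactly one special token at each end, as in FlauBERT's `<s> ... </s>` scheme):
```python
def manual_word_ids(tokenizer, words):
    """Approximate BatchEncoding.word_ids() for a slow tokenizer.

    `words` is the pre-split word list (the is_split_into_words=True input).
    Assumes the tokenizer adds exactly one special token at each end.
    """
    word_ids = [None]  # leading special token, e.g. <s>
    for idx, word in enumerate(words):
        # Each word may split into several subword tokens; repeat its index
        # once per subword so labels can be aligned token-by-token.
        word_ids.extend([idx] * len(tokenizer.tokenize(word)))
    word_ids.append(None)  # trailing special token, e.g. </s>
    return word_ids
```
The returned list can then feed the same `-100` alignment loop shown in the issue body.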
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17543/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17542
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17542/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17542/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17542/events
|
https://github.com/huggingface/transformers/issues/17542
| 1,259,865,909
|
I_kwDOCUB6oc5LGAc1
| 17,542
|
Lazy load in pipelines
|
{
"login": "devrimcavusoglu",
"id": 46989091,
"node_id": "MDQ6VXNlcjQ2OTg5MDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46989091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devrimcavusoglu",
"html_url": "https://github.com/devrimcavusoglu",
"followers_url": "https://api.github.com/users/devrimcavusoglu/followers",
"following_url": "https://api.github.com/users/devrimcavusoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/devrimcavusoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devrimcavusoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devrimcavusoglu/subscriptions",
"organizations_url": "https://api.github.com/users/devrimcavusoglu/orgs",
"repos_url": "https://api.github.com/users/devrimcavusoglu/repos",
"events_url": "https://api.github.com/users/devrimcavusoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/devrimcavusoglu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"Hi @devrimcavusoglu ,\r\n\r\nI am unsure I understand the actual issue, do you have a reproducing script ? \r\nFor `preprocess` / `postprocess` you never need the model (everything model related should be in `_forward`).\r\n\r\nUsually pipelines are intended to run on many example (either live behind a http server or on a dataset) so loading/unloading would be innefficient.\r\n\r\nI am curious to understand better your use case to see how we could support that.\r\n\r\nCheers !",
"Hi @Narsil, \r\n\r\n> For preprocess / postprocess you never need the model (everything model related should be in _forward).\r\n\r\nActually, I was emphasizing what you said here, and I meant instantiation of the pipeline object, which requires model & tokenizer objects (not lazy load). Having said this, consider the current situation and assume that \"some-task\" is defined in `transformers`\r\n\r\n```python\r\n# normally we do this\r\nfrom transformers import pipeline\r\nmy_pipeline = pipeline(\"some-task\", model=\"bert-base-cased\", tokenizer=\"bert-base-cased\")\r\n```\r\n\r\nwith a pipeline class, we can alternatively perform this without the factory function `pipeline()` like this\r\n\r\n```python\r\n# and alternatively this can be done\r\nmodel = AutoModel.from_pretrained(\"bert-base-cased\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nmy_pipeline = SomeTaskPipeline(model, tokenizer)\r\n```\r\n\r\nI gave the alternative option (2) to emphasize my example. Now consider that I want to write a custom pipeline class `SomeTask2Pipeline` by extending an existing pipeline class `SomeTaskPipeline` (although this is not required)\r\n\r\n```python\r\nclass SomeTask2Pipeline(SomeTaskPipeline):\r\n pass\r\n```\r\n\r\nNow, afaik there is no way for me to inject `SomeTask2Pipeline` into `transformers` such that I can create a pipeline with the `pipeline()` factory function. If there is please let me know. Thus, I'm forced to use the second alternative above (2), as follows\r\n\r\n```python\r\n# and alternatively this can be done\r\nmodel = AutoModel.from_pretrained(\"bert-base-cased\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nmy_pipeline = SomeTask2Pipeline(model, tokenizer)\r\n```\r\n\r\nI can still wrap this class/instantiation of `SomeTask2Pipeline` into another class and still introduce lazy loading which is fine. 
However, I now come to my second point, that we probably do not need model in any methods other than `forward()` and called functions from `forward()`, thus we should be able to unload/unset the model, this is more needy in custom pipelines (`SomeTask2Pipeline`) as I want to use two seperate models inside the pipeline e.g consequently to process former's one outputs, having two models in memory is not a good thing as one of them is totally redundant.\r\n\r\nAll in all, this behavior can be considered as something that is not a big deal for a user to do it actually.",
"Hi @devrimcavusoglu ,\r\n\r\nSorry but I still don't really understand what you're doing, could you share a minimal example ? I don't really understand how the 2 models you describe interact.\r\n\r\nYou can use your custom class if you want (entirely bypassing the need to pass the `task` argument to `pipeline(...)`)\r\n\r\n```python\r\nmy_pipeline = pipeline(\"some-task\", model=\"bert-base-cased\", tokenizer=\"bert-base-cased\", pipeline_class=SomeTask2Pipeline)\r\n```\r\n\r\nthis was intended as simple overloading of pre/processing so I am not sure it fits your use case.",
"> Hi @devrimcavusoglu ,\r\n> \r\n> Sorry but I still don't really understand what you're doing, could you share a minimal example ? I don't really understand how the 2 models you describe interact.\r\n> \r\n> You can use your custom class if you want (entirely bypassing the need to pass the `task` argument to `pipeline(...)`)\r\n> \r\n> ```python\r\n> my_pipeline = pipeline(\"some-task\", model=\"bert-base-cased\", tokenizer=\"bert-base-cased\", pipeline_class=SomeTask2Pipeline)\r\n> ```\r\n> \r\n> this was intended as simple overloading of pre/processing so I am not sure it fits your use case.\r\n\r\nThank you @Narsil. Actually what you stated is addressing my first point, and I can handle my second point in my custom pipeline class, so I will close this issue for now. It can be re-opened though if there is a need for the following.\r\n\r\nAs a side note, I'll try to elaborate my 2nd point more clearly. In short, we do not need the model in some methods, e.g `preprocess()` or `postprocess()`, but the model is loaded into the memory with the object instantiation at `__init__`, not when used in the pipeline classes. Thus, my point is to load the model when used (before calling `forward()`), and unload the model afterwards, when the forward returns. But I can implement this in my custom class, or generally it can be implemented by the user, but having those load/unload model options in the pipeline classes would give better memory utilization to the users, especially for custom usecases (so I'd suggest if these load/unload methods are implemented, then they should be public methods, where user can call at desired points.). 
I hope this clarification gives better insight about what I was considering.\r\n\r\nA pseudo-ish minimal example, assume that I add my additional_postprocess to the run_single such that I will apply another task to modify my inputs\r\n\r\n```python\r\nANOTHER_TASK_MODEL_NAME = \"some-model\"\r\n\r\nclass SomeTask2Pipeline(SomeTaskPipeline):\r\n def run_single(...):\r\n self.preprocess()\r\n self.forward()\r\n self.postprocess()\r\n self.my_additional_postprocess()\r\n return ...\r\n\r\n def my_additional_postprocess(self, inputs):\r\n my_pipeline = pipeline(\"another-task\", model=ANOTHER_TASK_MODEL_NAME, tokenizer=ANOTHER_TASK_MODEL_NAME)\r\n # At this point two models in the memory the first one is the model for SomeTask2Pipeline\r\n # and the other one is the model for AnotherTask. \r\n # Note that the model for SomeTask2Pipeline is no longer needed.\r\n out = my_pipeline(**inputs)\r\n return out\r\n```\r\n\r\nwith load/unload methods it'd be,\r\n\r\n```python\r\nANOTHER_TASK_MODEL_NAME = \"some-model\"\r\n\r\nclass SomeTask2Pipeline(SomeTaskPipeline):\r\n def run_single(...):\r\n self.preprocess()\r\n self.load_model() # loads the model into the memory\r\n self.forward()\r\n self.unload_model() # unloads the model from the memory\r\n self.postprocess()\r\n self.my_additional_postprocess()\r\n return ...\r\n\r\n def my_additional_postprocess(self, inputs):\r\n my_pipeline = pipeline(\"another-task\", model=ANOTHER_TASK_MODEL_NAME, tokenizer=ANOTHER_TASK_MODEL_NAME)\r\n # At this point only the model associated with AnotherTask is in the memory.\r\n out = my_pipeline(**inputs)\r\n return out\r\n```\r\n\r\nThis is still can be managed by the user though, so you could say that one can apply the additional postprocess after having the output from the pipeline, and that's why I'm closing the issue. ",
"> Thus, my point is to load the model when used (before calling forward()), and unload the model afterwards, when the forward returns. But I can implement this in my custom class, or generally it can be implemented by the user, but having those load/unload model options in the pipeline classes would give better memory utilization to the users, especially for custom usecases (so I'd suggest if these load/unload methods are implemented, then they should be public methods, where user can call at desired points.). I hope this clarification gives better insight about what I was considering.\r\n\r\nThanks for the pseudo code much clearer ! \r\n\r\nI think it makes sense in your case, but we shouldn't do it by default since loading/unloading models is usually quite slow (compared to inferring on them) especially on GPU. so anyone that can afford to have everything loaded into CPU/GPU RAM should do so. \r\nIf you are short on either RAM memory, then yes, loading/unloading is necessary.\r\n\r\nStill thanks for your input, if more usage like yours seems to be developing, maybe we could add a flag or something in the future."
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
### Feature request
Currently, pipeline classes expect model and tokenizer objects at instantiation. I think it'd be beneficial to have a lazy-load option, with load/unload methods implemented in the base pipeline class. That would yield better memory utilization, especially in custom classes derived from the primitive pipelines. For example, for some custom postprocess operations you may not need the loaded model at all; keeping it resident increases memory use or causes heavy swapping, which degrades performance. If the model is redundant for your use case and its work is finished, we should be able to unload it within that scope.
### Motivation
Allow better memory utilization.
### Your contribution
I'd like to contribute for this FR as my schedule allows.
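To make the request concrete, here is a minimal sketch of the kind of lazy load/unload wrapper described above (a hypothetical helper, not an existing transformers API):
```python
import gc


class LazyModel:
    """Load a model on first use and allow freeing it afterwards."""

    def __init__(self, loader):
        self._loader = loader  # zero-argument callable that builds the model
        self._model = None

    def load(self):
        # Build the model only once, on first access.
        if self._model is None:
            self._model = self._loader()
        return self._model

    def unload(self):
        # Drop the reference and nudge the GC to release memory promptly.
        self._model = None
        gc.collect()
```
A pipeline's `_forward` could then call `load()` just before inference and `unload()` right after, leaving `preprocess`/`postprocess` model-free.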
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17542/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17541
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17541/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17541/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17541/events
|
https://github.com/huggingface/transformers/issues/17541
| 1,259,703,922
|
I_kwDOCUB6oc5LFY5y
| 17,541
|
[RAG] token discrepancy between question token which should be input to generator and the one actually encoded in postprocess_docs()
|
{
"login": "4kasha",
"id": 44837011,
"node_id": "MDQ6VXNlcjQ0ODM3MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/44837011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4kasha",
"html_url": "https://github.com/4kasha",
"followers_url": "https://api.github.com/users/4kasha/followers",
"following_url": "https://api.github.com/users/4kasha/following{/other_user}",
"gists_url": "https://api.github.com/users/4kasha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4kasha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4kasha/subscriptions",
"organizations_url": "https://api.github.com/users/4kasha/orgs",
"repos_url": "https://api.github.com/users/4kasha/repos",
"events_url": "https://api.github.com/users/4kasha/events{/privacy}",
"received_events_url": "https://api.github.com/users/4kasha/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Interesting question! @ola13 do you have a good answer here maybe? \r\n\r\nIntuitively I would think that this shouldn't be a problem since the model was trained also this way, but think @ola13 knows best here :-) ",
"Hi, sorry for the delayed response! Thanks for your question @4kasha!\r\n\r\nTo answer your point here:\r\n\r\n> In conclusion, I would be wrong, but postprocess_docs() should be better if it receives the raw question string without tokenizer encode-decode process.\r\n\r\nThis is in fact what happens - the `postprocess_docs` function does receivesthe question as a string and the documents as strings and only encodes them once with the generator's tokenizer to create inputs to the generator model. If we were passing tokens between the retriever and the generator we would indeed have a mismatch - we're not doing this though, and this is by design. Making a round-trip (first decoding retrieved docs tokens to pure strings and then encoding them with the generator tokenizer gives us the flexibility to combine retrievers and generators which don't have matching tokenization schemes.\r\n\r\nI hope this clarifies things, but feel free to re-open the issue if there's anything unclear still."
] | 1,654
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
RAG, DPR: @patrickvonplaten, @lhoestq
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In the [RAG implementation](https://github.com/huggingface/transformers/tree/v4.19.2/src/transformers/models/rag), specifically [here](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/rag/retrieval_rag.py#L466), the input question and the documents retrieved by DPR are concatenated in the form
` doc_title + / + doc_text + // + input_string (question)`
in `postprocess_docs(docs, input_strings, ... )`, and this is the input to the generator (e.g., BART, T5).
This `postprocess_docs()` receives `input_strings` decoded by the question tokenizer [here](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/rag/retrieval_rag.py#L610-L613).
However, this postprocessing may cause a token mismatch when the question tokenizer and the generator tokenizer differ.
To see this, I chose the basic retriever and generator as follows (the original paper's setting):
```
from transformers import DPRQuestionEncoderTokenizer, BartTokenizer
dpr_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
bart_tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
```
and then, following the implementation, compare their token ids with each other:
```
def check_ids(q_tokenizer, g_tokenizer, text):
    print(f'>> {g_tokenizer.tokenize(text)}')
    ids = q_tokenizer(text)['input_ids']
    decode = q_tokenizer.decode(ids, skip_special_tokens=True)
    print(f'>> {g_tokenizer.tokenize(decode)}')
    _ids = g_tokenizer(decode)['input_ids']
    print(ids == _ids)
```
As a sample text, `text = "Don't you love Transformers? We sure do."`
```
check_ids(dpr_tokenizer, bart_tokenizer, text)
>> ['Don', "'t", 'Ġyou', 'Ġlove', 'ĠTransformers', '?', 'ĠWe', 'Ġsure', 'Ġdo', '.']
>> ['don', "'t", 'Ġyou', 'Ġlove', 'Ġtransform', 'ers', '?', 'Ġwe', 'Ġsure', 'Ġdo', '.']
False
```
In conclusion, I may be wrong, but postprocess_docs() would be better if it received the raw question string, without the tokenizer encode-decode round trip.
### Expected behavior
```shell
Question string part of the input to the generator is expected to be same with the question (input) to the RAG model.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17541/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17540
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17540/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17540/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17540/events
|
https://github.com/huggingface/transformers/issues/17540
| 1,259,699,909
|
I_kwDOCUB6oc5LFX7F
| 17,540
|
TFRemBertModelTest.test_resize_token_embeddings not working
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] |
[
"@gante Feel free to add WIP tag :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @ydshieh, I was able to reproduce the issue by running `python -m pytest -n auto --dist=loadfile -s -v ./tests/models/rembert` from the root directory on the latest transfomers version v4.23.1. The `test_resize_token_embeddings` on `TFRemBertModelTest` seemed to pass now without the merged code from https://github.com/huggingface/transformers/pull/17511. By any chance the cleanup of TF embeddings has been done and we could now close this issue? Let me know if I miss anything.\r\n\r\n**Test output:**\r\n```\r\ntests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_resize_token_embeddings If you want to use `TFRemBertForCausalLM` as a standalone, add `is_decoder=True.`\r\nIf you want to use `TFRemBertForCausalLM` as a standalone, add `is_decoder=True.`\r\nIf you want to use `TFRemBertForCausalLM` as a standalone, add `is_decoder=True.`\r\n\r\n[gw1] PASSED tests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_resize_token_embeddings\r\ntests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_save_load All model checkpoint layers were used when initializing TFRemBertModel.\r\n```",
"Hi @katiele47 This test is still failing on our CI. Could you share your environment information, by running `python utils\\print_env.py`. Also, could you share your hardware information (CPU, GPU) please? Thank you!",
"@ydshieh Sorry for the delay in response! This is what I got for environment (may ignore the last error):\r\n```\r\nPython version: 3.8.9 (default, Oct 26 2021, 07:25:54) \r\n[Clang 13.0.0 (clang-1300.0.29.30)]\r\ntransformers version: 4.24.0.dev0\r\nTorch version: 1.12.1\r\nCuda available: False\r\nCuda version: None\r\nCuDNN version: None\r\nNumber of GPUs available: 0\r\nTraceback (most recent call last):\r\n File \"utils/print_env.py\", line 39, in <module>\r\n print(\"NCCL version:\", torch.cuda.nccl.version())\r\n File \"/Users/Bibi/transformers/menv/lib/python3.8/site-packages/torch/cuda/nccl.py\", line 35, in version\r\n ver = torch._C._nccl_version()\r\nAttributeError: module 'torch._C' has no attribute '_nccl_version'\r\n``` \r\nGPU: Intel Iris Plus Graphics 1536 MB\r\nCPU: 2 GHz Quad-Core Intel Core i5\r\nPlatform: Mac version 11.6\r\n\r\nLet me know if you need any further information! Thanks."
] | 1,654
| 1,666
| null |
COLLABORATOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.9.11
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@gante @Rocketknight1
### Reproduction
`TFRemBertModelTest.test_resize_token_embeddings` failed on CI [here](https://github.com/huggingface/transformers/runs/6682139350?check_suite_focus=true)
This method (called during `resize_token_embeddings`)
https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_tf_utils.py#L1449
assumes that `word_embedding_weight` has the same shape as `old_lm_head_decoder`, but this is not the case for `TFRemBertModel`, as it has `input_embedding_size` and `output_embedding_size` in config.
A PR, #17511, was opened, but we decided not to merge it. Instead, a cleanup of the TF embeddings should be done first.
### Expected behavior
```shell
`resize_token_embeddings` should work for `TFRemBertModelTest`
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17540/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/17539
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17539/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17539/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17539/events
|
https://github.com/huggingface/transformers/pull/17539
| 1,259,695,336
|
PR_kwDOCUB6oc45DtDL
| 17,539
|
Fx support for Deberta-v[1-2], Hubert and LXMERT
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,655
| 1,654
|
MEMBER
| null |
# What does this PR do?
Adds `torch.fx` tracing support for:
- Deberta v1
- Deberta v2
- Hubert
- LXMERT
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17539/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17539",
"html_url": "https://github.com/huggingface/transformers/pull/17539",
"diff_url": "https://github.com/huggingface/transformers/pull/17539.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17539.patch",
"merged_at": 1654617921000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17538
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17538/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17538/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17538/events
|
https://github.com/huggingface/transformers/issues/17538
| 1,259,609,365
|
I_kwDOCUB6oc5LFB0V
| 17,538
|
Will data be loaded into multiple GPUs automatically?
|
{
"login": "Leli1024",
"id": 33652168,
"node_id": "MDQ6VXNlcjMzNjUyMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/33652168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leli1024",
"html_url": "https://github.com/Leli1024",
"followers_url": "https://api.github.com/users/Leli1024/followers",
"following_url": "https://api.github.com/users/Leli1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Leli1024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leli1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leli1024/subscriptions",
"organizations_url": "https://api.github.com/users/Leli1024/orgs",
"repos_url": "https://api.github.com/users/Leli1024/repos",
"events_url": "https://api.github.com/users/Leli1024/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leli1024/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Maybe of interest for @Rocketknight1 @gante",
"Hi @Leli1024 👋 By default, TensorFlow will allocate memory in all available GPUs, but will only execute in one GPU -- meaning that what you want to do won't happen by default. At the moment, we have no multiple GPU examples (@Rocketknight1 correct me if I'm wrong), so you will have to build a custom solution for yourself.\r\n\r\nHere is a TensorFlow guide for it: https://www.tensorflow.org/guide/distributed_training",
"We're co-ordinating with the TensorFlow team to make some examples of exactly this process available using the new `DTensor` API introduced in TensorFlow 2.9. I suspect it'll be another month or so before they're available, though!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,659
| 1,659
|
NONE
| null |
Will models that use TensorFlow as a backend be loaded into all available GPU memory or not?
I'm interested in training the OPT model's 1.3b and 30b variants. However, no cloud GPU I can use has more than 16 GB of memory, and the model needs at least 24 GB per batch. When using TensorFlow, will the memory be distributed among GPUs, or will I still get an allocation error?
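For what it's worth, a minimal sketch of TensorFlow's built-in data parallelism looks like this (note the caveat in the comments — it does not shard the weights themselves):

```python
import tensorflow as tf

# Data-parallel sketch only: MirroredStrategy replicates the full model on
# every visible device and splits each batch across them. It lowers the
# per-device activation memory, but every replica still holds a complete
# copy of the weights, so it does not help when the weights alone exceed
# one GPU — that requires model parallelism (e.g. the DTensor API).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    model.compile(optimizer="adam", loss="mse")
print("replicas in sync:", strategy.num_replicas_in_sync)
```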
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17538/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17537
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17537/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17537/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17537/events
|
https://github.com/huggingface/transformers/issues/17537
| 1,259,533,521
|
I_kwDOCUB6oc5LEvTR
| 17,537
|
Loading sharded model in `tf` from pytorch checkpoints
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Indeed, nice catch! Putting @sgugger in the loop",
"Simple reproducer:\r\n```py\r\nfrom transformers import TFBertModel\r\n\r\nmodel = TFBertModel.from_pretrained(\"sgugger/bert-sharded\")\r\n```",
"Putting it on my TODO (might take a few weeks as I have more urgent items, and we don't have a good solution on the TF side for large models right now anyway).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Fixing this 😄 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,659
| 1,659
|
COLLABORATOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
```
### Who can help?
@LysandreJik I am not sure who to ping on that 😅
Loading a big model from the Hub in TensorFlow is impossible if the model is sharded.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
>>> tf_model = TFOPTModel.from_pretrained("facebook/opt-13b",from_pt = True)
```
```python
Traceback (most recent call last):
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 1789, in from_pretrained
resolved_archive_file = cached_path(
File "/home/arthur_huggingface_co/transformers/src/transformers/utils/hub.py", line 282, in cached_path
output_path = get_from_cache(
File "/home/arthur_huggingface_co/transformers/src/transformers/utils/hub.py", line 486, in get_from_cache
_raise_for_status(r)
File "/home/arthur_huggingface_co/transformers/src/transformers/utils/hub.py", line 409, in _raise_for_status
raise EntryNotFoundError(f"404 Client Error: Entry Not Found for url: {request.url}")
transformers.utils.hub.EntryNotFoundError: 404 Client Error: Entry Not Found for url: https://huggingface.co/facebook/opt-13b/resolve/main/pytorch_model.bin
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 1833, in from_pretrained
raise EnvironmentError(
OSError: facebook/opt-13b does not appear to have a file named pytorch_model.bin.
```
The following script has to be used in order to convert the weights:
```python
from transformers import OPTModel, TFOPTModel

path = "facebook/opt-13b"
pt_model = OPTModel.from_pretrained(path)
pt_model.save_pretrained(path, max_shard_size="1000GB")
tf_model = TFOPTModel.from_pretrained(path, from_pt=True)
tf_model.save_pretrained(path, save_config=False)
```
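For context, a sharded PyTorch save produces a `pytorch_model.bin.index.json` whose `weight_map` points each parameter at its shard, instead of the single `pytorch_model.bin` the TF loader looks for. A toy illustration (parameter keys are made up):

```python
# Toy index mimicking pytorch_model.bin.index.json from a sharded save
# (keys are made up; a real index lists every parameter in the model).
index = {
    "metadata": {"total_size": 123},
    "weight_map": {
        "decoder.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
        "decoder.layers.0.fc1.weight": "pytorch_model-00002-of-00002.bin",
    },
}

# A sharding-aware loader would resolve the shard list from the index
# and fetch each shard file instead of pytorch_model.bin:
shards = sorted(set(index["weight_map"].values()))
print(shards)
```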
### Expected behavior
```shell
Automatically do this in the background?
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17537/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17537/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17536
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17536/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17536/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17536/events
|
https://github.com/huggingface/transformers/pull/17536
| 1,259,477,608
|
PR_kwDOCUB6oc45C-dS
| 17,536
|
[WIP] Add ResNets in TF
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Should have checked before. \r\n\r\nClosing in spirit of https://github.com/huggingface/transformers/pull/17427"
] | 1,654
| 1,654
| 1,654
|
MEMBER
| null |
This PR adds the OG of the OGs when it comes to computer vision models: ResNet. There are a few rough edges we need to figure out in the cross-loading of weights.
When I ran `RUN_SLOW=1 python -m pytest tests/models/resnet/test_modeling_resnet.py`, the integration test complained about weight mismatches for all the layers.
So, naturally, when I ran a small standalone integration test locally with the TF model, the same issue surfaced and the logit assertion failed. FYI, it fails for the PT model too, as mentioned earlier. Here's my integration test for the TF model:
```py
from PIL import Image
import numpy as np
from src.transformers.models.resnet.modeling_tf_resnet import TFResNetForImageClassification
from transformers import AutoFeatureExtractor
def prepare_img():
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
return image
feature_extractor = AutoFeatureExtractor.from_pretrained(
"microsoft/resnet-50"
)
model = TFResNetForImageClassification.from_pretrained(
"microsoft/resnet-50", from_pt=True
)
image = prepare_img()
inputs = feature_extractor(images=image, return_tensors="tf")
outputs = model(**inputs)
expected_shape = [1, 1000]
assert outputs.logits.shape == expected_shape
expected_slice = np.array([-11.1069, -9.7877, -8.3777])
np.testing.assert_allclose(outputs.logits[0, :3].numpy(), expected_slice, atol=1e-4)
```
@amyeroberts @FrancescoSaverioZuppichini please advise here. After this issue is resolved I will start working on the test cases.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17536/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17536",
"html_url": "https://github.com/huggingface/transformers/pull/17536",
"diff_url": "https://github.com/huggingface/transformers/pull/17536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17536.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17535
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17535/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17535/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17535/events
|
https://github.com/huggingface/transformers/issues/17535
| 1,259,474,655
|
I_kwDOCUB6oc5LEg7f
| 17,535
|
require_accelerate wrapper missing?
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"As seen offline, cannot reproduce this but let's keep it open in case someone runs into the same issue/we identify a reproducible example.",
"Okay found the reproducing script :) (you need to have a good internet connection) Every sharded model would work I think\r\n\r\n```python \r\n>>> from transformers import OPTModel\r\n>>> model = OPTModel.from_pretrained(\"facebook/opt-13b\")\r\n```\r\n```python \r\nDownloading: 100%|███████████████████████████████████████████████████████| 611/611 [00:00<00:00, 197kB/s]\r\nDownloading: 100%|███████████████████████████████████████████████████| 51.0k/51.0k [00:00<00:00, 309kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████| 9.29G/9.29G [08:48<00:00, 18.9MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████| 9.18G/9.18G [10:26<00:00, 15.7MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████| 5.47G/5.47G [07:18<00:00, 13.4MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/modeling_utils.py\", line 2166, in from_pretrained\r\n model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/modeling_utils.py\", line 2397, in _load_pretrained_model\r\n save_offload_index(offload_index, offload_folder)\r\nNameError: name 'save_offload_index' is not defined\r\n```\r\n",
"This is normally fixed on main, are you sure you have the latest?",
"You are right sorry about this 👍🏻 "
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@LysandreJik Sorry again, I am not really sure who is responsible for that 😅
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If the `accelerate` library is not installed, loading a model fails with the following error:
```python
pt_model = OPTModel.from_pretrained(path)
```
```python
Traceback (most recent call last):
File "src/transformers/convert_opt.py", line 4, in <module>
pt_model = OPTModel.from_pretrained(path)
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_utils.py", line 2166, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
```
Maybe a `@require_accelerate` is missing in `/home/arthur_huggingface_co/transformers/src/transformers/modeling_utils.py`, but my understanding is very limited.
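A minimal sketch of the kind of guard meant here (the helper name `is_accelerate_available` mirrors transformers' import utilities; the error message is made up):

```python
import importlib.util


def is_accelerate_available() -> bool:
    # True if the accelerate package can be imported in this environment.
    return importlib.util.find_spec("accelerate") is not None


def check_accelerate(needs_offload: bool) -> None:
    # Guarding the offload path up front would turn the NameError on
    # save_offload_index into a clear, actionable error instead.
    if needs_offload and not is_accelerate_available():
        raise ImportError("Offloading weights requires `pip install accelerate`.")


check_accelerate(needs_offload=False)  # no-op when offloading is not requested
```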
### Expected behavior
```shell
The model should load, or `accelerate` should be listed in requirements.txt
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17535/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17534
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17534/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17534/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17534/events
|
https://github.com/huggingface/transformers/issues/17534
| 1,259,405,939
|
I_kwDOCUB6oc5LEQJz
| 17,534
|
How to use finetuner.py to train t5-large model
|
{
"login": "ZeyiLiao",
"id": 97815464,
"node_id": "U_kgDOBdSLqA",
"avatar_url": "https://avatars.githubusercontent.com/u/97815464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeyiLiao",
"html_url": "https://github.com/ZeyiLiao",
"followers_url": "https://api.github.com/users/ZeyiLiao/followers",
"following_url": "https://api.github.com/users/ZeyiLiao/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeyiLiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeyiLiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeyiLiao/subscriptions",
"organizations_url": "https://api.github.com/users/ZeyiLiao/orgs",
"repos_url": "https://api.github.com/users/ZeyiLiao/repos",
"events_url": "https://api.github.com/users/ZeyiLiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeyiLiao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This approach you tried is very old and is not supported any longer.\r\n\r\nPlease switch to modern tools and it should just work. \r\n\r\nHere are a few current examples:\r\n\r\nstraight DDP:\r\n```\r\nrm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python \\\r\nexamples/pytorch/translation/run_translation.py --model_name_or_path t5-small \\\r\n--output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 \\\r\n--learning_rate 3e-5 --logging_first_step --logging_steps 500 \\\r\n--max_source_length 128 --max_target_length 128 --val_max_target_length 128 \\\r\n--num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 2 \\\r\n--predict_with_generate --sortish_sampler --source_lang en --target_lang ro \\\r\n--dataset_name wmt16 --dataset_config ro-en --source_prefix \\\r\n'translate English to Romanian: ' --warmup_steps 50 --max_train_samples 50 \r\n```\r\n\r\nsame with deepspeed\r\n```\r\nrm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 \\\r\nexamples/pytorch/translation/run_translation.py --model_name_or_path t5-small \\\r\n--output_dir output_dir --overwrite_output_dir --max_source_length 128 \\\r\n--max_target_length 128 --val_max_target_length 128 --do_train \\\r\n--num_train_epochs 1 --per_device_train_batch_size 2 --learning_rate 3e-3 \\\r\n--dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix 'translate English to Romanian: ' --max_train_samples 50 \\\r\n--deepspeed tests/deepspeed/ds_config_zero3.json --save_steps 1 \r\n```\r\n\r\nmake sure it works, adapt to your data, and then replace with the large model size.\r\n\r\nPlease let me know if this unblocked you and please share the link where you found the old info so that we could update that thread with the new information.\r\n\r\nThank you\r\n",
"Hi @stas00 , the order info comes from [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).\r\n\r\nI run the following scripts to install required package:\r\n\r\n> pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html\r\n> \r\n> git clone https://github.com/huggingface/transformers\r\n> pip install .\r\n> \r\n> pip install fairscale, deepspeed\r\n> \r\n> pip install -r /exmaples/pytorch/translation/requirement.txt\r\n\r\n>os.environment['CUDA_VISIBLE_DEVICES\"] = \"0,1,2,3\"\r\n\r\nI tried the straight DPP and deepspeed scripts, all says the following error though I add \"--per_device_train_batch_size 2\":\r\n\r\n> run_translation.py: error: argument --per_device_train_batch_size: expected one argument.\r\n\r\n\r\n\r\nWhat's more, I want to run language inference task with t5 model and do you have any recommendation which example script should I use?",
"> run_translation.py: error: argument --per_device_train_batch_size: expected one argument.\r\n\r\noops, my bad - I fixed the examples in my reply https://github.com/huggingface/transformers/issues/17534#issuecomment-1146249686\r\n\r\n> What's more, I want to run language inference task with t5 model and do you have any recommendation which example script should I use?\r\n\r\nsame script, you just tell it to eval instead of train, here is a few ways for one gpu:\r\n\r\n```\r\n\r\n# non-distributed 1-gpu fp32 eval only\r\n\r\nrm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 2 --predict_with_generate --eval_steps 2500 --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \" --val_max_target_length 128 --warmup_steps 50 --max_eval_samples 50 \r\n\r\n# non-distributed 1-gpu --fp16_full_eval eval only\r\n\r\nrm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 2 --predict_with_generate --eval_steps 2500 --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \" --val_max_target_length 128 --warmup_steps 50 --max_eval_samples 50 
--fp16_full_eval\r\n```\r\n\r\nand you can adapt those to multi-gpu and/or deepspeed based on the first examples I shared.\r\n\r\nbut basically I removed the training args and replaced those with eval-only args.\r\n\r\nThe 2nd (last) example shows how to do it in half-precision which may not work well (depending on the model), so start with the normal fp32 eval (i.e. w/o `--fp16_full_eval`)\r\n\r\nOf course, play with the values of the args to fit your environment.\r\n\r\n> I just wonder how to download this dataset as the following script:\r\n\r\nyou don't download it directly - `load_dataset` does it automatically for you at runtime (should have Internet).\r\n",
"Thanks for your detailed reply @stas00 ! I tried the t5-small model and they works so I changed it to t5-11b with 3 questions here.\r\n\r\n1.\r\nIn my case, I could not use straight DDP otherwise CUDA will run out of memory.\r\n\r\nWhen I use deepspeed script\r\n```\r\nexport MASTER_PORT=9999; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 4 \r\nexamples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir --\r\noverwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --\r\nnum_train_epochs 4 --per_device_train_batch_size 8 --learning_rate 1e-4 --source_lang prompt --target_lang completion \r\n--train_file=/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_filtered/csv_file/train/train.json \r\n--test_file=/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_filtered/csv_file/test/test.json \r\n--validation_file=/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_filtered/csv_file/dev/dev.json \r\n--max_train_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --save_strategy epoch\r\n```\r\nIt said that \r\n\r\n> Traceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 652, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 261, in main\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n File \"/home/zeyi/transformers/src/transformers/hf_argparser.py\", line 214, in parse_args_into_dataclasses\r\n obj = dtype(**inputs)\r\n File \"<string>\", line 102, in __init__\r\n File \"/home/zeyi/transformers/src/transformers/training_args.py\", line 1012, in __post_init__\r\n and (self.device.type != \"cuda\")\r\n File \"/home/zeyi/transformers/src/transformers/utils/import_utils.py\", line 802, in wrapper\r\n return func(*args, **kwargs)\r\n File 
\"/home/zeyi/transformers/src/transformers/training_args.py\", line 1264, in device\r\n return self._setup_devices\r\n File \"/home/zeyi/transformers/src/transformers/utils/generic.py\", line 49, in __get__\r\n cached = self.fget(obj)\r\n File \"/home/zeyi/transformers/src/transformers/utils/import_utils.py\", line 802, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/zeyi/transformers/src/transformers/training_args.py\", line 1225, in _setup_devices\r\n deepspeed.init_distributed()\r\n File \"/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/deepspeed/utils/distributed.py\", line 51, in init_distributed\r\n torch.distributed.init_process_group(backend=dist_backend,\r\n File \"/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 500, in init_process_group\r\n store, rank, world_size = next(rendezvous_iterator)\r\n File \"/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/rendezvous.py\", line 190, in _env_rendezvous_handler\r\n store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)\r\nRuntimeError: Address already in use\r\n\r\nBut I find some info and do add `export MASTER_PORT=9999` at the beginning of scripts.\r\nI also use `netstat -nltp` but can not find which jobs is the zombie task.\r\nWhat should I do to delete those zombie running process.\r\n\r\n2.\r\nAnd can I add a parameter like [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) i.e. 
--sharded_ddp to use sharded_ddp instead of straight ddp?(I am not sure I totally understand the definition of straight ddp and sharded ddp)\r\n\r\n\r\n3.\r\nIn my previous code, I will pass some generator option to t5 model \r\n```\r\nself.generator_options = {'min_length': 1, 'max_length': 128, 'num_beams': 1, 'num_return_sequences': 1, 'do_sample': False, 'top_k': 50, 'top_p': 1.0,\r\n'temperature': 1.0, 'length_penalty': 1.0, 'repetition_penalty': 1.0}\r\n\r\noutput_ids = self.reasoner.generate(batch['all_inps'], **self.generator_options)\r\n```\r\nSo how can I do the same thing here?\r\n\r\n",
"> > RuntimeError: Address already in use\r\n\r\n> But I find some info and do add `export MASTER_PORT=9999` at the beginning of scripts. I also use `netstat -nltp` but can not find which jobs is the zombie task. What should I do to delete those zombie running process.\r\n\r\nNormally you just kill them manually. Upgrade your `deepspeed`, the zombies should get killed automatically.\r\n\r\nYou should pass an explicit argument to `deepspeed` with the desired setting if you don't want the default port.\r\n\r\n```\r\n --master_port MASTER_PORT\r\n (optional) Port used by PyTorch distributed for communication during training.\r\n --master_addr MASTER_ADDR\r\n (optional) IP address of node 0, will be inferred via 'hostname -I' if not specified.\r\n```\r\n\r\n> And can I add a parameter like [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) i.e. --sharded_ddp to use sharded_ddp instead of straight ddp?(I am not sure I totally understand the definition of straight ddp and sharded ddp)\r\n\r\nThat's another implementation of ZeRO protocol. You don't need it.\r\n\r\n> In my previous code, I will pass some generator option to t5 model\r\n> \r\n> ```\r\n> self.generator_options = {'min_length': 1, 'max_length': 128, 'num_beams': 1, 'num_return_sequences': 1, 'do_sample': False, 'top_k': 50, 'top_p': 1.0,\r\n> 'temperature': 1.0, 'length_penalty': 1.0, 'repetition_penalty': 1.0}\r\n> \r\n> output_ids = self.reasoner.generate(batch['all_inps'], **self.generator_options)\r\n> ```\r\n> \r\n> So how can I do the same thing here?\r\n\r\nPlease run:\r\n```\r\npython examples/pytorch/translation/run_translation.py --help \r\n```\r\nyou will see the existing options there (e.g. . 
`--num_beams`)\r\n\r\nIf you want to customize the example script, these `generate` args are passed here (`num_beams`)\r\n\r\nhttps://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/examples/pytorch/translation/run_translation.py#L589-L593\r\n\r\nall the `generate` options are here:\r\nhttps://github.com/huggingface/transformers/blob/26e5e129b43760138aed2dfc1cc3c75b481a95e6/src/transformers/generation_utils.py#L844-L887",
"Hi @stas00 , thank you for your reply! I trained it wity your updated scripts but the job was stopped accidently.\r\n\r\nSo I tried to resume from checkpoints by this scripts(without --overwrite_output_dir, and output_dir_1 is the folder with checkpoints)\r\n```\r\ndeepspeed examples/pytorch/translation/run_translation.py --model_name_or_path t5-11b --output_dir output_dir_1 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 4 --per_device_train_batch_size 16 --learning_rate 1e-4 --source_lang prompt --target_lang completion \r\n--train_file=\r\n/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_trim_filtered/json_file_t5_11b/train/train.json\r\n--test_file=\r\n/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_trim_filtered/json_file_t5_11b/test/test.json \r\n--validation_file=\r\n/home/zeyi/lr_dataset/data/processed/logic_comp1_nt_v0_infer1.0_balance_seed42_trim_filtered/json_file_t5_11b/dev/dev.json \r\n--deepspeed tests/deepspeed/ds_config_zero3.json --save_strategy epoch --evaluation_strategy epoch --load_best_model_at_end\r\n```\r\nBut it said that\r\n```\r\nUsing /home/zeyi/.cache/torch_extensions as PyTorch extensions root...\r\nNo modifications detected for re-loaded extension module utils, skipping build step...\r\nLoading extension module utils...\r\nTime to load utils op: 0.0007061958312988281 seconds\r\n[INFO|deepspeed.py:444] 2022-06-09 15:28:40,179 >> Attempting to resume from output_dir_1/checkpoint-3126\r\n[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 40204\r\n[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 40205\r\n[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 40206\r\n[2022-06-09 15:31:04,178] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 4020\r\n```\r\n",
"this usually means that you didn't have enough cpu memory to resume\r\n\r\nUnfortunately it's a bug in deepspeed, where instead of loading the checkpoint directly to gpu it first loads it to cpu.\r\nI filed a bug report here https://github.com/microsoft/DeepSpeed/issues/1971\r\nPlease voice your need in this issue so that it's seen that it needs higher priority.\r\n\r\nI can offer you a hack that may help. Basically you need to stagger the checkpoint loading so that not all 4 processes try to load it to cpu memory at once. \r\n\r\n\r\n\r\n",
"something like this should work to stagger the checkpoint loading:\r\n\r\n```\r\ndiff --git a/src/transformers/deepspeed.py b/src/transformers/deepspeed.py\r\nindex 9fa22d462..ce2f39cc5 100644\r\n--- a/src/transformers/deepspeed.py\r\n+++ b/src/transformers/deepspeed.py\r\n@@ -447,6 +447,12 @@ def deepspeed_init(trainer, num_training_steps, resume_from_checkpoint=None, inf\r\n deepspeed_checkpoint_dirs = sorted(glob.glob(f\"{resume_from_checkpoint}/global_step*\"))\r\n\r\n if len(deepspeed_checkpoint_dirs) > 0:\r\n+\r\n+ # hack to stagger checkpoint loading so that they don't all try to use cpu at the same time\r\n+ rank = trainer.args.local_rank\r\n+ from time import sleep\r\n+ sleep(rank*20)\r\n+\r\n logger.info(f\"Attempting to resume from {resume_from_checkpoint}\")\r\n # this magically updates self.optimizer and self.lr_scheduler\r\n load_path, _ = deepspeed_engine.load_checkpoint(\r\n```\r\n\r\nadjust 20 to perhaps smaller or longer wait in secs.\r\n\r\nso here the following happens:\r\n\r\nprocess 0 sleeps for 0 secs, process 1 for 20 secs, 2 for 40 secs, etc. so each process gets full use of CPU memory alone.\r\n\r\nyou can apply the patch manually or with:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\ngit apply patch.txt\r\npip install -e .\r\n```\r\n\r\nassuming you saved my code as patch.txt (attached it to this comment as well so you can just download it)\r\n\r\n[patch.txt](https://github.com/huggingface/transformers/files/8874389/patch.txt)\r\n",
"@stas00 ,Thank you! I have sucessfully trained the t5-11b.\r\n\r\nAnd here, I want to do the inference in my setup code. Since it's hard to load t5-11b on one GPU, I use model.parallelize to do the inference part.\r\n```\r\nmodel = T5ForConditionalGeneration.from_pretrained('./checkpoint)\r\ndevice_map = {\r\n0: [0, 1, 2],\r\n1: [3, 4, 5, 6, 7, 8, 9],\r\n2: [10, 11, 12, 13, 14, 15, 16],\r\n3: [17, 18, 19, 20, 21, 22, 23],\r\n}\r\nmodel.parallelize(device_map)\r\nmodel.predict()\r\n```\r\nBut the errors said:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/zeyi/lr_dataset/src/main.py\", line 294, in <module>\r\n trainer.test(model=model_ckpt, test_dataloaders=loader)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 907, in test\r\n return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 683, in _call_and_handle_interrupt\r\n return trainer_fn(*args, **kwargs)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 950, in _test_impl\r\n results = self._run(model, ckpt_path=self.tested_ckpt_path)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1195, in _run\r\n self._dispatch()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1271, in _dispatch\r\n self.training_type_plugin.start_evaluating(self)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py\", line 178, in start_evaluating\r\n self.spawn(self.new_process, trainer, self.mp_queue, return_result=False)\r\n File 
\"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py\", line 201, in spawn\r\n mp.spawn(self._wrapped_function, args=(function, args, kwargs, return_queue), nprocs=self.num_processes)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 230, in spawn\r\n return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 188, in start_processes\r\n while not context.join():\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 136, in join\r\n signal_name=name\r\ntorch.multiprocessing.spawn.ProcessExitedException: process 2 terminated with signal SIGABRT\r\nwandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.\r\nwandb: \r\nwandb: Synced lrgenerative_logic_comp1_v7_1.0_new_seed42_trim_filtered_t5_11b_13_06_2022_45964ce7: https://wandb.ai/soumya_research/lr_dataset/runs/snh11aqq\r\nwandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)\r\nwandb: Find logs at: ./wandb/run-20220613_132827-snh11aqq/logs\r\n[W CudaIPCTypes.cpp:21] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]\r\n```\r\n\r\nI have find some solution said that set `num_workers = 0`, but it still doesn't work.\r\n",
"> @stas00 ,Thank you! I have sucessfully trained the t5-11b.\r\n\r\nSuper!\r\n\r\n> And here, I want to do the inference in my setup code. Since it's hard to load t5-11b on one GPU, I use model.parallelize to do the inference part.\r\n\r\n`parallelize` is about to be deprecated and as such is no longer supported. Please use deepspeed instead, it is many times superior to the naive parallelization.\r\n\r\n",
"@stas00 ,Thanks a lot!\r\n\r\nIn my case, we use pytorch-lightning and what I want to do is\r\n`model = T5ForConditionalGeneration.from_pretrained('./checkpoint)`\r\nAnd follow the [doc ](https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#deepspeed-zero-stage-3-tips)here to set\r\n\r\n```\r\ntrainer = Trainer(accelerator=\"gpu\", devices=4, strategy=\"deepspeed_stage_3_offload\")\r\ntrainer.predict()\r\n```\r\nBut although I am just doing prediction, why it will still call the `def configure_optimizers(self)` function.\r\n\r\nIn addition to that, it gave an error although I do have ninja package.\r\n\r\n```\r\n[2022-06-13 16:55:48,399] [WARNING] [engine.py:1122:_configure_optimizer] **** You are using ZeRO with an untested optimizer, proceed with caution *****\r\n[2022-06-13 16:55:48,405] [WARNING] [coalesced_collectives.py:26:<module>] unable to find torch.distributed._reduce_scatter_base. will fall back to torch.distributed.reduce_scatter which will result in suboptimal performance. 
please consider upgrading your pytorch installation.\r\nUsing /home/zeyi/.cache/torch_extensions as PyTorch extensions root...\r\nTraceback (most recent call last):\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 683, in _call_and_handle_interrupt\r\n return trainer_fn(*args, **kwargs)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 950, in _test_impl\r\n results = self._run(model, ckpt_path=self.tested_ckpt_path)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1184, in _run\r\n self._pre_dispatch()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1219, in _pre_dispatch\r\n self.accelerator.pre_dispatch(self)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 136, in pre_dispatch\r\n self.training_type_plugin.pre_dispatch()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py\", line 389, in pre_dispatch\r\n self.init_deepspeed()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py\", line 461, in init_deepspeed\r\n self._initialize_deepspeed_inference(model)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py\", line 563, in _initialize_deepspeed_inference\r\n dist_init_required=False,\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/__init__.py\", line 130, in initialize\r\n config_params=config_params)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/engine.py\", line 294, in __init__\r\n 
self._configure_optimizer(optimizer, model_parameters)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/engine.py\", line 1124, in _configure_optimizer\r\n self.optimizer = self._configure_zero_optimizer(basic_optimizer)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/engine.py\", line 1439, in _configure_zero_optimizer\r\n communication_data_type=self.communication_data_type)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/runtime/zero/stage3.py\", line 292, in __init__\r\n util_ops = UtilsBuilder().load()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/ops/op_builder/builder.py\", line 463, in load\r\n return self.jit_load(verbose)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/deepspeed/ops/op_builder/builder.py\", line 512, in jit_load\r\n verbose=verbose)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py\", line 1091, in load\r\n keep_intermediates=keep_intermediates)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py\", line 1302, in _jit_compile\r\n is_standalone=is_standalone)\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py\", line 1373, in _write_ninja_file_and_build_library\r\n verify_ninja_availability()\r\n File \"/home/zeyi/.conda/envs/lr_dataset/lib/python3.7/site-packages/torch/utils/cpp_extension.py\", line 1429, in verify_ninja_availability\r\n raise RuntimeError(\"Ninja is required to load C++ extensions\")\r\nRuntimeError: Ninja is required to load C++ extensions\r\npython-BaseException\r\n```\r\n\r\nI am just worried about is it reasonable to work like this? \r\n1. Trained the t5-11b by transformer.Trainer. \r\n2. 
Just load the checkpoint saved before and use PyTorch Lightning to do the prediction.\r\n3. Since I cannot load t5-11b on one GPU, I set the strategy to `deepspeed_stage_3_offload` for the trainer.",
"wrt to the traceback you shared, `pip install ninja` should do the trick, even though it should have already been installed. Perhaps your `$PATH` env var is missing the bin dir where pip installs to; check with:\r\n\r\n```\r\nwhich ninja\r\n```\r\n\r\nit should give you the path to the binary. Don't try to run deepspeed again until the above returns the path. If it returns nothing it means that your python env's `bin` dir is not in your `$PATH` env var.\r\n\r\nwrt PL-specific issues please ask at PL Issues as I'm not a PL user.",
"there is another workaround that requires no ninja and it's to prebuild deepspeed https://huggingface.co/docs/transformers/main/main_classes/deepspeed#installation (local install where you clone deepspeed and then build it)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> \r\n\r\nThe git apply patch.txt throws an error of \r\nerror: corrupt patch at line 17\r\n\r\nAm I missing something in the application of it, or missing an argument?\r\n",
"bad copy-n-paste? Just insert it manually - it's just a few lines of code and you can tell where to insert by the context around it.",
"Hi @stas00 , hope you are all good! Would DeepSpeed be compatible with an auto-regressive model [here](https://github.com/hpcaitech/ColossalAI-Examples/tree/f743872c2089d6bb5e593db6a8a48d427e6b2b1e/language/opt)? I need to fine-tune a large OPT model. (BTW: I tried hard with the PL trainer but it always misses some layer weights.) Thanks!",
"I haven't tried it, but I don't see any reason why it shouldn't work. OPT has been out for quite a few months now so surely if it didn't work we would have heard by now and fixed it. Give it a try and if you run into problems please start a new Issue. Thank you."
] | 1,654
| 1,670
| 1,658
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-177-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <Yes>
- Using distributed or parallel set-up in script?: <Yes>
```
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Follow the steps [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400)
git clone https://github.com/huggingface/transformers
cd transformers
git checkout 7e662e6a3be0ece4
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
pip install -r requirements.txt
cd ../..
pip install .
cd examples/seq2seq
pip install fairscale, deepspeed==[0.3.10](https://github.com/huggingface/transformers/issues/9996#issuecomment-773725303)
#[run script 1](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400)
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500
# Error trace1
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 152, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 52, in __init__
self._add_dataclass_arguments(dtype)
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 93, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/home/zeyi/.conda/envs/test/lib/python3.8/typing.py", line 774, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 152, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 52, in __init__
self._add_dataclass_arguments(dtype)
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 93, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/home/zeyi/.conda/envs/test/lib/python3.8/typing.py", line 774, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
Killing subprocess 69967
Killing subprocess 69968
Traceback (most recent call last):
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/zeyi/.conda/envs/test/bin/python', '-u', './finetune_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-large', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--data_dir', 'wmt_en_ro', '--do_eval', '--do_train', '--evaluation_strategy=steps', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_eval_batch_size', '16', '--per_device_train_batch_size', '16', '--predict_with_generate', '--eval_steps', '25000', '--sortish_sampler', '--task', 'translation_en_to_ro', '--test_max_target_length', '128', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '2000', '--n_val', '500']' returned non-zero exit status 1.
```
#[run script 2](https://github.com/huggingface/transformers/issues/10036#issue-802491462)
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./run_seq2seq.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --dataset_name wmt16 --dataset_config "ro-en" --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500
#Error trace2
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 499, in <module>
main()
File "./run_seq2seq.py", line 212, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 166, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--freeze_embeds', '--test_max_target_length', '128', '--n_train', '2000', '--n_val', '500']
Traceback (most recent call last):
File "./run_seq2seq.py", line 499, in <module>
main()
File "./run_seq2seq.py", line 212, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/zeyi/transformers/src/transformers/hf_argparser.py", line 166, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--freeze_embeds', '--test_max_target_length', '128', '--n_train', '2000', '--n_val', '500']
Killing subprocess 72522
Killing subprocess 72523
Traceback (most recent call last):
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/zeyi/.conda/envs/test/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/zeyi/.conda/envs/test/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/zeyi/.conda/envs/test/bin/python', '-u', './run_seq2seq.py', '--local_rank=1', '--model_name_or_path', 't5-large', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--do_eval', '--do_train', '--evaluation_strategy=steps', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_eval_batch_size', '16', '--per_device_train_batch_size', '16', '--predict_with_generate', '--eval_steps', '25000', '--sortish_sampler', '--task', 'translation_en_to_ro', '--test_max_target_length', '128', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '2000', '--n_val', '500']' returned non-zero exit status 1.
```
### Expected behavior
I hope that it will run the model with DeepSpeed or sharded techniques. Actually I want to train the t5-11b model and change the dataset dir to my own dataset, but I cannot even reproduce what @stas00 shared before.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17534/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17533
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17533/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17533/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17533/events
|
https://github.com/huggingface/transformers/pull/17533
| 1,258,708,162
|
PR_kwDOCUB6oc45AN9O
| 17,533
|
Fix all offload and MP tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
This should fix all GPU tests for model parallelism and offload. I stopped trying to make the model bigger since it was causing so many failures (unrelated to the currently failing tests). Then there was a tiny bug in `from_pretrained` when tied weights are present, as highlighted by the failure of the GPT-Neo offload tests. Tying the weights before computing the auto device map is necessary to know which weights are tied when auto-generating this device map.
The OPT model for testing was too tiny for the MP/offload tests (that's why there was some logic to make tiny models bigger there) so I just adjusted its size (speed is not really affected).
Finally, the T5 tests started failing after the tied-weights fix because the decoder has to be tied with the shared layer, which requires bigger models than the tiny one used for testing. Here I didn't make the model bigger since the tests already take some time, so I adjusted the percentages used for the total model size in those tests. I added a new class variable for that, but I'm happy to override the tests in the T5 modeling test if you prefer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17533/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17533",
"html_url": "https://github.com/huggingface/transformers/pull/17533",
"diff_url": "https://github.com/huggingface/transformers/pull/17533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17533.patch",
"merged_at": 1654264754000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17532
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17532/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17532/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17532/events
|
https://github.com/huggingface/transformers/pull/17532
| 1,258,628,334
|
PR_kwDOCUB6oc44_8ZW
| 17,532
|
Update URL for Hub PR docs
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
MEMBER
| null |
# What does this PR do?
Now that the new Hub docs have been deployed, we can point users to the rendered version on Hub PRs instead of the raw Markdown.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17532/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17532",
"html_url": "https://github.com/huggingface/transformers/pull/17532",
"diff_url": "https://github.com/huggingface/transformers/pull/17532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17532.patch",
"merged_at": 1654199550000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17531
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17531/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17531/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17531/events
|
https://github.com/huggingface/transformers/pull/17531
| 1,258,617,835
|
PR_kwDOCUB6oc44_6rh
| 17,531
|
Clean imports to fix test_fetcher
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failure is flaky, so merging :-)"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
We noticed that a lot of tests are being picked from some modeling files; there are two reasons for that.
1. The circular import between `modeling_tf_utils` and `modelcard` makes `modeling_tf_utils` a file impacted by `modelcard`. `modeling_tf_utils` triggers pretty much all tests as it's tightly linked to `modeling_utils` via `modeling_tf_pytorch_utils`.
2. ONNX also has some circular thing going on as:
- it's imported in models that define an ONNX Config
- and in return the module imports all models having an ONNX Config in `onnx.features`
This was discovered by developing a tool that prints the tree of depending modules for a given module/test, which is included in this PR. To use it, in the root of the repo, just do:
```
python utils/tests_fetcher.py --print_dependencies_of src/transformers/models/bert/modeling_bert.py
```
The fix for 1 is to use the special comment that will make the `test_fetcher` ignore the circular import.
The fix for 2 is to remove all config imports in `onnx.features` and rely on the config names instead, importing them dynamically at the creation of `FeaturesManager` (in a 100% backward-compatible manner).
cc @ydshieh since you reported the slowdown.
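The dynamic import approach in fix 2 can be sketched roughly as follows (`get_config_class` is a hypothetical helper for illustration, not the actual `FeaturesManager` code):

```python
import importlib


def get_config_class(name, module="transformers"):
    # Resolve a config class from its name at call time instead of
    # importing it at module load, which avoids the circular dependency
    # between onnx.features and the model modules that define ONNX configs.
    return getattr(importlib.import_module(module), name)
```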
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17531/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17531/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17531",
"html_url": "https://github.com/huggingface/transformers/pull/17531",
"diff_url": "https://github.com/huggingface/transformers/pull/17531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17531.patch",
"merged_at": 1654274082000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17530
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17530/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17530/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17530/events
|
https://github.com/huggingface/transformers/pull/17530
| 1,258,566,822
|
PR_kwDOCUB6oc44_xaf
| 17,530
|
Add installation.mdx Italian translation
|
{
"login": "mfumanelli",
"id": 53374883,
"node_id": "MDQ6VXNlcjUzMzc0ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53374883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfumanelli",
"html_url": "https://github.com/mfumanelli",
"followers_url": "https://api.github.com/users/mfumanelli/followers",
"following_url": "https://api.github.com/users/mfumanelli/following{/other_user}",
"gists_url": "https://api.github.com/users/mfumanelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfumanelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfumanelli/subscriptions",
"organizations_url": "https://api.github.com/users/mfumanelli/orgs",
"repos_url": "https://api.github.com/users/mfumanelli/repos",
"events_url": "https://api.github.com/users/mfumanelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfumanelli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Grazie @mfumanelli! \r\n\r\nLGTM @sgugger :)"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Italian translation of doc related to the installation of 🤗 Transformers.
See issue: #17459
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@omarespejel
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17530/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17530",
"html_url": "https://github.com/huggingface/transformers/pull/17530",
"diff_url": "https://github.com/huggingface/transformers/pull/17530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17530.patch",
"merged_at": 1654516088000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17529
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17529/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17529/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17529/events
|
https://github.com/huggingface/transformers/issues/17529
| 1,258,328,788
|
I_kwDOCUB6oc5LAJLU
| 17,529
|
MarianMT Doesn't export to ONNX correctly
|
{
"login": "calebdkofahl",
"id": 95653486,
"node_id": "U_kgDOBbOObg",
"avatar_url": "https://avatars.githubusercontent.com/u/95653486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calebdkofahl",
"html_url": "https://github.com/calebdkofahl",
"followers_url": "https://api.github.com/users/calebdkofahl/followers",
"following_url": "https://api.github.com/users/calebdkofahl/following{/other_user}",
"gists_url": "https://api.github.com/users/calebdkofahl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calebdkofahl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calebdkofahl/subscriptions",
"organizations_url": "https://api.github.com/users/calebdkofahl/orgs",
"repos_url": "https://api.github.com/users/calebdkofahl/repos",
"events_url": "https://api.github.com/users/calebdkofahl/events{/privacy}",
"received_events_url": "https://api.github.com/users/calebdkofahl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @calebdkofahl, thanks for sharing this issue!\r\n\r\nThe reason you see the error\r\n\r\n```\r\nValueError: Model requires 4 inputs. Input Feed contains 2\r\n```\r\n\r\nis because MarianMT is a seq2seq model, so we need to pass the decoder input IDs and attention masks in addition to the encoder's ones. \r\n\r\nThe simplest way to generate inference with ONNX Runtime would be to use the new inference pipelines in our `optimum` library: https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForSeq2SeqLM.forward.example\r\n\r\nThis will allows you to skip the annoying step of needing to manually create the decoder inputs - hope that helps!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,663
| 1,663
|
NONE
| null |
I am running Python FastAPI in a docker container via VSCode's devcontainer setup. I am using the pre-trained MarianMT model for translation of chinese_simple to english and running into an error when trying to use the exported model from the transformers.onnx module. This is the error I am receiving:
```sh
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 366, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 208, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 580, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 241, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 52, in app
response = await func(request)
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 226, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 159, in run_endpoint_function
return await dependant.call(**values)
File "./app/main.py", line 300, in run_onnx_inference
onnx_output = onnx_session.run(output_names=["last_hidden_state"], input_feed=dict(onnx_inputs))
File "/usr/local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run
raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs))
ValueError: Model requires 4 inputs. Input Feed contains 2
```
Dockerfile:
```Docker
FROM python:3.8
# Install required binaries and data
RUN apt-get update
RUN apt install wget tesseract-ocr -y
RUN apt install tesseract-ocr-chi-sim -y
RUN apt install python3-opencv -y
# Install python requirements
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt
# Copy all source code and data
COPY ./README.md /README.md
COPY ./app /app
COPY ./data /data
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```
Package Versions:
onnx 1.11.0
onnxmltools 1.11.0
onnxruntime 1.11.1
transformers 4.19.2
Code causing the error:
```python
import onnx
import onnxruntime as onnxrt # pylint: disable=import-error
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
@app.post("/onnx")
async def run_onnx_inference(text="不要把乐趣搞得一团糟"):
onnx_model_filename = f"{MODELS_DIR}/model.onnx"
mt_tokenizer = AutoTokenizer.from_pretrained(f"{MODELS_DIR}/marian")
onnx_session = onnxrt.InferenceSession(onnx_model_filename)
onnx_inputs = mt_tokenizer(str(text), return_tensors="np")
onnx_output = onnx_session.run(output_names=["last_hidden_state"], input_feed=dict(onnx_inputs))
return onnx_output
```
Command used to export the model to onnx:
```sh
python -m transformers.onnx --model=app/models/marian --atol=2e-04 --feature=seq2seq-lm app/models
```
I have also tried exporting with the following, but I receive the same error:
```sh
python -m transformers.onnx --model=app/models/marian --atol=2e-04 app/models
```
It appears that, even though https://huggingface.co/docs/transformers/serialization clearly lists Marian as a valid model to export to ONNX, it doesn't export the model in a way that allows ONNX to run inference on the model.
Any help would be greatly appreciated.
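For reference, a minimal sketch of building the four-input feed a seq2seq ONNX export typically expects (the input names here are assumptions based on the `seq2seq-lm` export convention — check `onnx_session.get_inputs()` for the actual names of your export):

```python
import numpy as np


def build_seq2seq_feed(encoder_inputs, decoder_start_token_id):
    # A seq2seq export needs decoder inputs in addition to the encoder's
    # input_ids/attention_mask; decoding starts from the model's
    # decoder_start_token_id.
    batch_size = encoder_inputs["input_ids"].shape[0]
    decoder_input_ids = np.full(
        (batch_size, 1), decoder_start_token_id, dtype=np.int64
    )
    return {
        "input_ids": encoder_inputs["input_ids"],
        "attention_mask": encoder_inputs["attention_mask"],
        "decoder_input_ids": decoder_input_ids,
        "decoder_attention_mask": np.ones_like(decoder_input_ids),
    }
```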
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17529/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17528
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17528/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17528/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17528/events
|
https://github.com/huggingface/transformers/issues/17528
| 1,258,306,260
|
I_kwDOCUB6oc5LADrU
| 17,528
|
Issues with mypy when using Transformers
|
{
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is not a bug in Transformers, so let's remove that label please. MyPy is not part of the Python standard library and we never say anywhere we support it.\r\n\r\nIf you want to work on a PR that has some fixes that don't make the code unreadable, we'll be happy to have a look, but no one in the team is going to actively work on this.",
"Ok. Closing it then."
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.18.1-arch1-1-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
For some weeks there have been issues when using Transformers as a third-party library
with mypy as a type checker.
Example:
```
hpoflow/optuna_transformers.py:83: error: Item "TFPreTrainedModel" of "Union[PreTrainedModel, TFPreTrainedModel]" has no attribute "config"
hpoflow/optuna_transformers.py:85: error: Item "PreTrainedModel" of "Union[PreTrainedModel, TFPreTrainedModel]" has no attribute "config"
hpoflow/optuna_transformers.py:85: error: Item "TFPreTrainedModel" of "Union[PreTrainedModel, TFPreTrainedModel]" has no attribute "config"
tests/test_optuna_transformers.py:135: error: "Trainer" has no attribute "train"
tests/test_optuna_transformers.py:139: error: "Trainer" has no attribute "state"
```
see https://github.com/telekom/HPOflow/runs/6664387914?check_suite_focus=true
It might be because of some lazy loading / import magic.
The Optuna project is making some extra efforts to avoid this. See here:
https://github.com/telekom/lazy-imports#usage--example-for-lazyimporter
```
# Direct imports for type-checking
if TYPE_CHECKING:
from hpoflow.mlflow import ( # noqa: F401
check_repo_is_dirty,
normalize_mlflow_entry_name,
normalize_mlflow_entry_names_in_dict,
)
from hpoflow.optuna import SignificanceRepeatedTrainingPruner # noqa: F401
from hpoflow.optuna_mlflow import OptunaMLflow # noqa: F401
from hpoflow.optuna_transformers import OptunaMLflowCallback # noqa: F401
from hpoflow.utils import func_no_exception_caller # noqa: F401
else:
sys.modules[__name__] = LazyImporter(
__name__,
globals()["__file__"],
_import_structure,
extra_objects={"__version__": __version__},
)
```
### Expected behavior
```shell
see above
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17528/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17527
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17527/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17527/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17527/events
|
https://github.com/huggingface/transformers/pull/17527
| 1,258,227,828
|
PR_kwDOCUB6oc44-plN
| 17,527
|
Update configuration_auto.py
|
{
"login": "kamalkraj",
"id": 17096858,
"node_id": "MDQ6VXNlcjE3MDk2ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamalkraj",
"html_url": "https://github.com/kamalkraj",
"followers_url": "https://api.github.com/users/kamalkraj/followers",
"following_url": "https://api.github.com/users/kamalkraj/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions",
"organizations_url": "https://api.github.com/users/kamalkraj/orgs",
"repos_url": "https://api.github.com/users/kamalkraj/repos",
"events_url": "https://api.github.com/users/kamalkraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamalkraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Documentation fix for `AutoConfig`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17527/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17527",
"html_url": "https://github.com/huggingface/transformers/pull/17527",
"diff_url": "https://github.com/huggingface/transformers/pull/17527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17527.patch",
"merged_at": 1654180620000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17526
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17526/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17526/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17526/events
|
https://github.com/huggingface/transformers/issues/17526
| 1,258,129,566
|
I_kwDOCUB6oc5K_Yie
| 17,526
|
center_crop in image_utils.py is broken for inputs that are not PIL Images
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 4235521865,
"node_id": "LA_kwDOCUB6oc78dO9J",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20extractors",
"name": "Feature extractors",
"color": "c2e0c6",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"You tagged the wrong Niels ;) also cc'ing @sgugger ",
"LOL, sorry Niels and other Niels.",
"That's because we don't convert images back to PIL in `center_crop` (which always happens in `resize`), but happy to look at a PR adding support for this.",
"I can add it to my to-do list, as it would be better to have this logic only in image_utils. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.7
- JaxLib version: 0.3.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@Niels
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The feature extractor for MobileViT (see the PR at https://github.com/huggingface/transformers/pull/17354) implements its own version of the `center_crop` method, instead of using the one from image_utils. The CLIP model also does this. The reason for this custom cropping method is that the "official" one is broken for certain inputs.
In normal usage, the feature extractor first resizes the image and then performs a center crop. Resizing always turns the input into a PIL Image, and then `center_crop` correctly works on the PIL Image.
However, the feature extractor can be configured to not perform the resize with the option `do_resize=False`. Now the input is passed directly into `center_crop`. When the input is a PIL Image this works fine, but when it is a numpy array or torch tensor, `center_crop` may calculate the wrong thing.
To reproduce:
```python
# load an image
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
# get the base class from image_utils
from transformers.image_utils import ImageFeatureExtractionMixin
mixin = ImageFeatureExtractionMixin()
```
The following outputs a PIL Image, which is correct and as expected:
```python
cropped = mixin.center_crop(image, size=256)
```
But when the input is a NumPy array, the output is a `(480, 256, 255)` tensor:
```python
import numpy as np
image_np = np.array(image)
inputs = mixin.center_crop(image_np, size=256)
inputs.shape
```
This happens because `center_crop` assumes that the tensor is already in the shape (channels, H, W) but it isn't.
When you transpose the image before cropping, the output is correct again:
```python
import numpy as np
image_np = np.array(image).transpose(2, 0, 1)
inputs = mixin.center_crop(image_np, size=256)
inputs.shape
```
However, the `resize` method does accept arrays and tensors in the shape (H, W, channels), and so it would be reasonable to assume that `center_crop` also should do this.
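A possible workaround, as a sketch: detect the array layout before cropping and transpose as needed. The helper names below (`infer_channel_first`, `center_crop_any`) are hypothetical and not part of the transformers API — they only illustrate the shape handling described above:

```python
import numpy as np

def infer_channel_first(image):
    # Heuristic: a channel axis is usually of size 1 or 3. Assume
    # channel-first only if the first axis looks like a channel count
    # and the last one does not.
    return image.shape[0] in (1, 3) and image.shape[-1] not in (1, 3)

def center_crop_any(image, size):
    """Center-crop a numpy array in either (C, H, W) or (H, W, C) layout.

    Hypothetical helper, not the transformers API. Assumes the image is
    at least `size` pixels in both spatial dimensions (no padding).
    """
    channel_first = infer_channel_first(image)
    if not channel_first:
        image = image.transpose(2, 0, 1)  # normalize to (C, H, W)
    _, h, w = image.shape
    top = (h - size) // 2
    left = (w - size) // 2
    cropped = image[:, top : top + size, left : left + size]
    if not channel_first:
        cropped = cropped.transpose(1, 2, 0)  # restore (H, W, C)
    return cropped

img_hwc = np.zeros((480, 640, 3), dtype=np.uint8)
print(center_crop_any(img_hwc, 256).shape)  # (256, 256, 3)
```

With this kind of layout check, an (H, W, C) array would be cropped to (256, 256, 3) instead of being misread as a 480-channel image.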
### Expected behavior
```shell
Any input that works correctly for `resize` should also work correctly for `center_crop`.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17526/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17525
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17525/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17525/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17525/events
|
https://github.com/huggingface/transformers/issues/17525
| 1,258,035,475
|
I_kwDOCUB6oc5K_BkT
| 17,525
|
Is the addition of the 'OPTforSequenceClassification' class scheduled?
|
{
"login": "penpaperkeycode",
"id": 45441848,
"node_id": "MDQ6VXNlcjQ1NDQxODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/45441848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penpaperkeycode",
"html_url": "https://github.com/penpaperkeycode",
"followers_url": "https://api.github.com/users/penpaperkeycode/followers",
"following_url": "https://api.github.com/users/penpaperkeycode/following{/other_user}",
"gists_url": "https://api.github.com/users/penpaperkeycode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penpaperkeycode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penpaperkeycode/subscriptions",
"organizations_url": "https://api.github.com/users/penpaperkeycode/orgs",
"repos_url": "https://api.github.com/users/penpaperkeycode/repos",
"events_url": "https://api.github.com/users/penpaperkeycode/events{/privacy}",
"received_events_url": "https://api.github.com/users/penpaperkeycode/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Don't think anyone is working on this yet (cc @younesbelkada), so feel free to contribute",
"Hey! We are not working on that as Niels mentioned 😉 \r\nOpening a PR for each framework (`pytorch`, `tensorflow` and `flax`) is recommended. If you have any question about adding these models, feel free to reach out! ",
"@ArthurZucker , @NielsRogge I do not see anyone working on this so far. I want to start contributing to huggingface with this good first issue. Can I go ahead and open a WIP PR for this issue ? ",
"Hi, yes you can definitely go ahead and we'll be happy to review it ! ;)\nPlease also double check with @penpaperkeycode if it is not already a WIP on their side ",
"@penpaperkeycode as @younesbelkada mentioned , can you please let me know if you are already working on the issue ? ",
"@VijayKalmath Sorry for late reply.\r\nI tested OPT's 'forSequenceClassification' class and verified accuracy in MNLI-glue task. \r\nit looks good to me. But since this is my first huggingface PR, I need to figure out how to do it.\r\n\r\nIn addition, I am working on a conditional generation class, but there is no other work other than these two classes.\r\n",
"@younesbelkada @ArthurZucker I am new to the hugging face community and am looking to contribute, I looked through the first issue list but all issues seem to have pre existing contributors assigned, can I help with this one or is there another first issue I can dive into and help out with. Thanks",
  "Hey, it seems that this was already taken care of, see #18123, but thanks a lot for wanting to contribute! I am going to look for an issue that might be interesting for you :) \r\nBTW, you can also check the closed issues as some are closed because of a lack of activity. I assigned myself to #17514 but have not had the time to really look into it! Feel free to fork `transformers` and create a branch for the fix ! "
] | 1,654
| 1,658
| 1,658
|
NONE
| null |
### Feature request
Is the addition of the 'OPTforSequenceClassification' class scheduled?
Is someone handling it?
When adding these functions, I wonder if it is possible to PR one by one, or if I have to PR all classes supported by other models.
### Motivation
Added function of OPT class, which is being actively discussed recently
### Your contribution
I personally use the forSequenceClassification class because I need it for my experiments, but I would like to inquire if I can contribute to the addition of the function.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17525/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17524
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17524/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17524/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17524/events
|
https://github.com/huggingface/transformers/pull/17524
| 1,257,958,893
|
PR_kwDOCUB6oc449vzF
| 17,524
|
Fix bug - layer names and activation from previous refactor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @amyeroberts! Could you show a snippet of where the current implementation is faulty (ideally for both resnet and maskformer)? Running a model load with either current `main` or this PR seems to result in the same layers being loaded/not loaded.",
"@LysandreJik - Sure. Here's how I found the issues (and checked it was working). \r\n\r\nThe script I was running:\r\n```\r\nfrom transformers import VanForImageClassification, ResNetForImageClassification, RegNetForImageClassification, MaskFormerForInstanceSegmentation\r\n\r\nprint(\"\\nLoading ResNet\")\r\nmodel = ResNetForImageClassification.from_pretrained(\"microsoft/resnet-50\")\r\n\r\nprint(\"\\nLoading VAN\")\r\nmodel = VanForImageClassification.from_pretrained(\"Visual-Attention-Network/van-base\")\r\n\r\nprint(\"\\nLoading RegNet\")\r\nmodel = RegNetForImageClassification.from_pretrained(\"facebook/regnet-y-040\")\r\n\r\nprint(\"\\nLoading MaskFormer\")\r\nmodel = MaskFormerForInstanceSegmentation.from_pretrained(\"facebook/maskformer-swin-base-ade\")\r\n```\r\n\r\nRunning on `main` I get the following output:\r\n```\r\n(tenv) aroberts:transformers $ git checkout main\r\nAlready on 'main'\r\nYour branch is ahead of 'origin/main' by 75 commits.\r\n (use \"git push\" to publish your local commits)\r\n(tenv) aroberts:transformers $ python ../test_model_weight_loading.py\r\n\r\nLoading ResNet\r\nSome weights of the model checkpoint at microsoft/resnet-50 were not used when initializing ResNetForImageClassification: ['resnet.encoder.stages.1.layers.3.layer.0.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.running_mean', 
'resnet.encoder.stages.2.layers.4.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.bias', 
'resnet.encoder.stages.3.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.weight', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.3.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.running_mean', 
'resnet.encoder.stages.0.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.0.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.bias', 
'resnet.encoder.stages.2.layers.4.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.bias', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.1.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.running_var', 
'resnet.encoder.stages.1.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.5.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.3.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.weight', 'resnet.encoder.stages.1.layers.3.layer.1.convolution.weight', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.5.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.running_mean', 
'resnet.encoder.stages.1.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.2.layer.1.convolution.weight', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.0.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.weight', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.3.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.0.convolution.weight', 
'resnet.encoder.stages.2.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.0.layer.2.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.bias', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.3.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.3.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.running_mean', 
'resnet.encoder.stages.2.layers.5.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.3.layers.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.0.convolution.weight', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.0.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.2.layers.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.layers.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.4.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.0.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.bias', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.1.layers.0.layer.0.convolution.weight', 
'resnet.encoder.stages.1.layers.0.shortcut.normalization.weight', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.layers.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.2.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.3.layer.1.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.5.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.4.layer.2.normalization.running_var', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.layers.1.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.2.layers.3.layer.0.normalization.running_var', 'resnet.encoder.stages.0.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.bias', 'resnet.encoder.stages.0.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.0.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.3.layers.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.weight', 
'resnet.encoder.stages.1.layers.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.layers.0.layer.0.convolution.weight', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.layers.1.layer.0.convolution.weight', 'resnet.encoder.stages.0.layers.2.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.weight', 'resnet.encoder.stages.2.layers.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.0.normalization.running_var', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.weight', 'resnet.encoder.stages.2.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.running_var', 'resnet.encoder.stages.0.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.2.layers.4.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.5.layer.2.normalization.running_var', 'resnet.encoder.stages.3.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.layers.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.layers.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.layers.1.layer.2.normalization.bias', 'resnet.encoder.stages.3.layers.1.layer.2.convolution.weight', 'resnet.encoder.stages.2.layers.5.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.bias', 'resnet.encoder.stages.1.layers.3.layer.2.normalization.weight', 
'resnet.encoder.stages.2.layers.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.layers.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.layers.2.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.layers.2.layer.2.normalization.running_var', 'resnet.encoder.stages.1.layers.2.layer.2.convolution.weight', 'resnet.encoder.stages.3.layers.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.layers.0.layer.1.normalization.running_var', 'resnet.encoder.stages.1.layers.0.layer.2.normalization.weight', 'resnet.encoder.stages.0.layers.0.layer.0.convolution.weight', 'resnet.encoder.stages.3.layers.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.layers.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.layers.0.layer.1.convolution.weight', 'resnet.encoder.stages.0.layers.0.layer.0.normalization.num_batches_tracked']\r\n- This IS expected if you are initializing ResNetForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ResNetForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of ResNetForImageClassification were not initialized from the model checkpoint at microsoft/resnet-50 and are newly initialized: ['resnet.encoder.stages.3.1.layer.1.normalization.bias', 'resnet.encoder.stages.3.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.0.layer.2.normalization.running_var', 'resnet.encoder.stages.1.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.normalization.weight', 'resnet.encoder.stages.2.4.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.1.convolution.weight', 'resnet.encoder.stages.3.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.layer.0.convolution.weight', 'resnet.encoder.stages.0.0.shortcut.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.3.layer.0.normalization.weight', 'resnet.encoder.stages.1.1.layer.2.convolution.weight', 'resnet.encoder.stages.3.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.1.normalization.running_var', 'resnet.encoder.stages.0.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.3.layer.1.convolution.weight', 'resnet.encoder.stages.2.4.layer.0.normalization.bias', 'resnet.encoder.stages.1.2.layer.2.normalization.bias', 'resnet.encoder.stages.3.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.2.layer.2.normalization.bias', 'resnet.encoder.stages.0.1.layer.0.normalization.num_batches_tracked', 
'resnet.encoder.stages.3.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.5.layer.0.normalization.bias', 'resnet.encoder.stages.2.0.layer.0.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.normalization.weight', 'resnet.encoder.stages.3.2.layer.2.normalization.bias', 'resnet.encoder.stages.3.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.2.5.layer.2.convolution.weight', 'resnet.encoder.stages.2.1.layer.1.normalization.running_var', 'resnet.encoder.stages.2.3.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.layer.1.normalization.running_var', 'resnet.encoder.stages.2.4.layer.0.normalization.running_var', 'resnet.encoder.stages.3.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.1.layer.2.normalization.weight', 'resnet.encoder.stages.1.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.normalization.running_var', 'resnet.encoder.stages.1.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.5.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.2.convolution.weight', 'resnet.encoder.stages.1.3.layer.0.normalization.weight', 'resnet.encoder.stages.3.0.layer.2.normalization.weight', 'resnet.encoder.stages.3.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.0.layer.0.normalization.bias', 'resnet.encoder.stages.2.4.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.5.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.1.convolution.weight', 
'resnet.encoder.stages.2.0.layer.2.normalization.bias', 'resnet.encoder.stages.0.0.layer.0.normalization.running_var', 'resnet.encoder.stages.3.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.1.normalization.running_var', 'resnet.encoder.stages.0.1.layer.1.normalization.running_var', 'resnet.encoder.stages.3.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.normalization.weight', 'resnet.encoder.stages.0.0.layer.2.normalization.bias', 'resnet.encoder.stages.0.2.layer.2.convolution.weight', 'resnet.encoder.stages.1.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.2.layer.0.normalization.weight', 'resnet.encoder.stages.3.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.shortcut.convolution.weight', 'resnet.encoder.stages.2.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.5.layer.1.normalization.bias', 'resnet.encoder.stages.3.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.0.normalization.bias', 'resnet.encoder.stages.2.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.0.normalization.bias', 'resnet.encoder.stages.3.0.layer.1.convolution.weight', 'resnet.encoder.stages.3.2.layer.1.normalization.weight', 'resnet.encoder.stages.2.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.0.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.1.layer.0.normalization.bias', 'resnet.encoder.stages.2.2.layer.1.normalization.bias', 'resnet.encoder.stages.1.1.layer.0.normalization.running_var', 'resnet.encoder.stages.0.1.layer.1.normalization.num_batches_tracked', 
'resnet.encoder.stages.2.1.layer.1.normalization.bias', 'resnet.encoder.stages.3.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.convolution.weight', 'resnet.encoder.stages.3.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.0.normalization.running_var', 'resnet.encoder.stages.1.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.4.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.1.normalization.weight', 'resnet.encoder.stages.1.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.layer.0.normalization.weight', 'resnet.encoder.stages.3.2.layer.0.normalization.bias', 'resnet.encoder.stages.1.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.2.convolution.weight', 'resnet.encoder.stages.3.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.1.convolution.weight', 'resnet.encoder.stages.2.0.layer.2.normalization.running_var', 'resnet.encoder.stages.0.2.layer.2.normalization.weight', 'resnet.encoder.stages.3.0.shortcut.normalization.running_var', 'resnet.encoder.stages.1.3.layer.2.convolution.weight', 'resnet.encoder.stages.0.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.1.layer.1.normalization.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.running_var', 'resnet.encoder.stages.2.1.layer.0.convolution.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.1.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.normalization.bias', 'resnet.encoder.stages.0.0.layer.0.convolution.weight', 
'resnet.encoder.stages.3.0.layer.1.normalization.weight', 'resnet.encoder.stages.1.0.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.5.layer.2.normalization.running_var', 'resnet.encoder.stages.3.1.layer.0.normalization.weight', 'resnet.encoder.stages.1.3.layer.0.normalization.running_var', 'resnet.encoder.stages.2.0.layer.1.convolution.weight', 'resnet.encoder.stages.1.1.layer.1.convolution.weight', 'resnet.encoder.stages.0.0.layer.2.normalization.weight', 'resnet.encoder.stages.0.0.layer.1.normalization.running_var', 'resnet.encoder.stages.1.1.layer.2.normalization.weight', 'resnet.encoder.stages.3.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.1.normalization.running_var', 'resnet.encoder.stages.1.2.layer.1.normalization.weight', 'resnet.encoder.stages.0.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.2.normalization.weight', 'resnet.encoder.stages.2.4.layer.2.normalization.bias', 'resnet.encoder.stages.2.5.layer.1.convolution.weight', 'resnet.encoder.stages.0.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.4.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.4.layer.1.convolution.weight', 'resnet.encoder.stages.2.4.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.0.convolution.weight', 'resnet.encoder.stages.1.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.1.convolution.weight', 
'resnet.encoder.stages.2.5.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.0.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.2.normalization.bias', 'resnet.encoder.stages.1.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.1.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.1.normalization.running_var', 'resnet.encoder.stages.1.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.0.layer.0.normalization.weight', 'resnet.encoder.stages.1.1.layer.0.normalization.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.0.normalization.running_var', 'resnet.encoder.stages.2.5.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.3.layer.0.normalization.bias', 'resnet.encoder.stages.2.1.layer.2.normalization.bias', 'resnet.encoder.stages.3.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.2.layer.1.normalization.running_var', 'resnet.encoder.stages.2.5.layer.1.normalization.weight', 'resnet.encoder.stages.3.1.layer.2.convolution.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.weight', 'resnet.encoder.stages.2.4.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.5.layer.2.normalization.weight', 'resnet.encoder.stages.2.5.layer.0.normalization.weight', 'resnet.encoder.stages.1.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.0.normalization.running_var', 'resnet.encoder.stages.0.1.layer.0.convolution.weight', 
'resnet.encoder.stages.1.2.layer.0.normalization.weight', 'resnet.encoder.stages.2.5.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.2.normalization.running_var', 'resnet.encoder.stages.2.4.layer.2.normalization.weight', 'resnet.encoder.stages.2.1.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.1.2.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.0.0.shortcut.normalization.bias', 'resnet.encoder.stages.2.4.layer.1.normalization.running_var', 'resnet.encoder.stages.3.0.layer.2.normalization.bias', 'resnet.encoder.stages.2.1.layer.2.normalization.running_var', 'resnet.encoder.stages.1.0.layer.2.normalization.weight', 'resnet.encoder.stages.1.0.layer.0.normalization.bias', 'resnet.encoder.stages.2.3.layer.0.normalization.running_var', 'resnet.encoder.stages.2.4.layer.1.normalization.bias', 'resnet.encoder.stages.0.1.layer.1.convolution.weight', 'resnet.encoder.stages.2.5.layer.0.convolution.weight', 'resnet.encoder.stages.0.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.0.convolution.weight', 'resnet.encoder.stages.0.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.0.layer.1.normalization.weight', 'resnet.encoder.stages.2.3.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.2.4.layer.2.normalization.running_var', 'resnet.encoder.stages.2.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.3.layer.1.normalization.weight', 'resnet.encoder.stages.2.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.1.normalization.running_var', 'resnet.encoder.stages.3.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.3.layer.1.normalization.weight', 
'resnet.encoder.stages.0.2.layer.0.normalization.running_var', 'resnet.encoder.stages.2.3.layer.2.normalization.weight', 'resnet.encoder.stages.3.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.3.2.layer.0.normalization.running_var', 'resnet.encoder.stages.0.1.layer.0.normalization.running_var', 'resnet.encoder.stages.3.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.2.layer.1.normalization.bias', 'resnet.encoder.stages.3.2.layer.0.convolution.weight', 'resnet.encoder.stages.1.2.layer.2.convolution.weight', 'resnet.encoder.stages.2.3.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.1.layer.0.normalization.bias', 'resnet.encoder.stages.3.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.2.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.3.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.0.layer.2.normalization.running_var', 'resnet.encoder.stages.1.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.0.convolution.weight', 'resnet.encoder.stages.1.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.0.shortcut.normalization.weight', 'resnet.encoder.stages.3.2.layer.1.normalization.bias', 'resnet.encoder.stages.2.3.layer.1.normalization.running_var', 'resnet.encoder.stages.2.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.1.normalization.running_mean', 'resnet.encoder.stages.0.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.4.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.3.0.layer.0.convolution.weight', 'resnet.encoder.stages.1.0.layer.0.normalization.running_var', 
'resnet.encoder.stages.1.0.layer.0.normalization.running_mean', 'resnet.encoder.stages.2.5.layer.0.normalization.running_var', 'resnet.encoder.stages.2.1.layer.0.normalization.running_var', 'resnet.encoder.stages.2.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.3.layer.0.normalization.bias', 'resnet.encoder.stages.2.5.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.4.layer.0.convolution.weight', 'resnet.encoder.stages.1.0.layer.1.normalization.bias', 'resnet.encoder.stages.1.0.layer.1.convolution.weight', 'resnet.encoder.stages.0.1.layer.0.normalization.weight', 'resnet.encoder.stages.2.2.layer.2.convolution.weight', 'resnet.encoder.stages.3.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.2.normalization.weight', 'resnet.encoder.stages.0.0.shortcut.normalization.running_mean', 'resnet.encoder.stages.1.1.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.4.layer.2.convolution.weight', 'resnet.encoder.stages.2.5.layer.2.normalization.bias', 'resnet.encoder.stages.2.3.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.2.5.layer.1.normalization.running_var', 'resnet.encoder.stages.0.0.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.3.layer.0.convolution.weight', 'resnet.encoder.stages.3.0.shortcut.normalization.weight', 'resnet.encoder.stages.0.0.layer.1.convolution.weight', 'resnet.encoder.stages.0.1.layer.2.convolution.weight', 'resnet.encoder.stages.1.1.layer.0.convolution.weight', 'resnet.encoder.stages.3.0.layer.2.convolution.weight', 'resnet.encoder.stages.1.3.layer.0.convolution.weight', 'resnet.encoder.stages.1.3.layer.1.normalization.bias', 'resnet.encoder.stages.0.1.layer.2.normalization.bias', 'resnet.encoder.stages.2.2.layer.2.normalization.bias', 
'resnet.encoder.stages.1.3.layer.2.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.1.layer.1.normalization.num_batches_tracked', 'resnet.encoder.stages.1.2.layer.2.normalization.weight', 'resnet.encoder.stages.0.0.layer.2.convolution.weight', 'resnet.encoder.stages.2.3.layer.1.normalization.bias', 'resnet.encoder.stages.2.2.layer.0.convolution.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.bias', 'resnet.encoder.stages.1.1.layer.1.normalization.bias', 'resnet.encoder.stages.2.2.layer.2.normalization.running_var', 'resnet.encoder.stages.2.0.shortcut.convolution.weight', 'resnet.encoder.stages.0.1.layer.2.normalization.running_var', 'resnet.encoder.stages.3.0.layer.1.normalization.bias', 'resnet.encoder.stages.0.2.layer.2.normalization.running_var', 'resnet.encoder.stages.0.2.layer.2.normalization.num_batches_tracked', 'resnet.encoder.stages.2.0.shortcut.normalization.bias', 'resnet.encoder.stages.1.2.layer.1.normalization.running_mean', 'resnet.encoder.stages.2.2.layer.2.normalization.weight', 'resnet.encoder.stages.1.0.shortcut.normalization.running_var', 'resnet.encoder.stages.2.0.layer.0.normalization.bias', 'resnet.encoder.stages.0.0.shortcut.normalization.num_batches_tracked', 'resnet.encoder.stages.0.1.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.0.layer.1.normalization.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.running_mean', 'resnet.encoder.stages.1.3.layer.2.normalization.running_var', 'resnet.encoder.stages.1.2.layer.0.convolution.weight', 'resnet.encoder.stages.2.4.layer.0.normalization.running_mean', 'resnet.encoder.stages.3.1.layer.1.convolution.weight', 'resnet.encoder.stages.0.2.layer.0.normalization.num_batches_tracked', 'resnet.encoder.stages.1.3.layer.0.normalization.num_batches_tracked']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n\r\nLoading VAN\r\n\r\nLoading 
RegNet\r\n\r\nLoading MaskFormer\r\n/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)\r\n return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\r\nSome weights of the model checkpoint at facebook/maskformer-swin-base-ade were not used when initializing MaskFormerForInstanceSegmentation: ['mask_embedder.2.0.weight', 'mask_embedder.2.0.bias']\r\n- This IS expected if you are initializing MaskFormerForInstanceSegmentation from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing MaskFormerForInstanceSegmentation from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nRunning on this branch - `fix-weight-naming`\r\n\r\n```\r\n(tenv) aroberts:transformers $ git checkout fix-weight-naming\r\nSwitched to branch 'fix-weight-naming'\r\n(tenv) aroberts:transformers $ python ../test_model_weight_loading.py\r\n\r\nLoading ResNet\r\n\r\nLoading VAN\r\n\r\nLoading RegNet\r\n\r\nLoading MaskFormer\r\n/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)\r\n return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\r\n(tenv) aroberts:transformers $ \r\n```\r\n\r\nLet me know if there's anything else you'd like me to run or check. "
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Fixes two issues introduced by the PR that removed `nn.Sequential` subclasses: https://github.com/huggingface/transformers/commit/bdc01711d67161ef5c2097b6d2d885645e0a0f08
1. An `nn.Sequential` block removed within an `__init__` caused differences in layer naming, resulting in checkpoint weights not being loaded
2. Fixes a logic issue where a final linear weight was missing in the MaskFormer model
All the models affected by the previous PR (MaskFormer, ResNet, RegNet, VAN) have been loaded with their default checkpoint weights to double-check that all layers have their weights initialised and that all weights from the checkpoint are used.
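The naming mismatch behind issue 1 can be illustrated with a minimal, hypothetical example in plain PyTorch (not the actual transformers models): wrapping layers in `nn.Sequential` yields indexed parameter keys, while named attributes yield named keys, so a checkpoint saved with one structure silently fails to match the other.

```python
import torch.nn as nn

# Variant A: layers wrapped in nn.Sequential -> keys like "block.0.weight"
class WithSequential(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(4, 4), nn.ReLU())

# Variant B: the same layers as named attributes -> keys like "linear.weight"
class WithoutSequential(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.act = nn.ReLU()

ckpt = WithSequential().state_dict()  # keys: block.0.weight, block.0.bias
result = WithoutSequential().load_state_dict(ckpt, strict=False)

# The renamed layers are reported rather than loaded, which is exactly the
# "weights not initialized from the checkpoint" warning seen in the logs above:
print(result.missing_keys)     # ['linear.weight', 'linear.bias']
print(result.unexpected_keys)  # ['block.0.weight', 'block.0.bias']
```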
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17524/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17524",
"html_url": "https://github.com/huggingface/transformers/pull/17524",
"diff_url": "https://github.com/huggingface/transformers/pull/17524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17524.patch",
"merged_at": 1654263071000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17523
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17523/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17523/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17523/events
|
https://github.com/huggingface/transformers/issues/17523
| 1,257,947,939
|
I_kwDOCUB6oc5K-sMj
| 17,523
|
Not able to import tensorflow OPT
|
{
"login": "Leli1024",
"id": 33652168,
"node_id": "MDQ6VXNlcjMzNjUyMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/33652168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leli1024",
"html_url": "https://github.com/Leli1024",
"followers_url": "https://api.github.com/users/Leli1024/followers",
"following_url": "https://api.github.com/users/Leli1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Leli1024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leli1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leli1024/subscriptions",
"organizations_url": "https://api.github.com/users/Leli1024/orgs",
"repos_url": "https://api.github.com/users/Leli1024/repos",
"events_url": "https://api.github.com/users/Leli1024/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leli1024/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I just tried this with latest source install and it's working for me. Make sure you have TF installed to be able to import TF models.",
"> I just tried this with latest source install and it's working for me. Make sure you have TF installed to be able to import TF models.\r\n\r\nI have tensorflow installed, even checked it by running a quick test",
"Could you share details about your script and maybe your environment? ",
"> Could you share details about your script and maybe your environment?\r\n\r\nI was working on an Azure compute machine. I found the problem, apparently I needed to install tensorflow GPU. as well as the regular distro"
] | 1,654
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
Using the latest git repo as a source install, when I try to import TFOPTForCasualLM I will get the following error.
ImportError: cannot import name 'TFOPTForCasualLM' from 'transformers' (/anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/__init__.py)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import TFOPTForCausalLM, GPT2Tokenizer
model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
```
### Expected behavior
```shell
Was expecting model class to import normally
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17523/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17522
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17522/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17522/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17522/events
|
https://github.com/huggingface/transformers/pull/17522
| 1,257,892,518
|
PR_kwDOCUB6oc449hoZ
| 17,522
|
[WIP] Adding support for `clip` in `feature-extraction`.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17522). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds support for the feature extraction pipeline for clip.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17522/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17522",
"html_url": "https://github.com/huggingface/transformers/pull/17522",
"diff_url": "https://github.com/huggingface/transformers/pull/17522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17522.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17521
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17521/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17521/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17521/events
|
https://github.com/huggingface/transformers/issues/17521
| 1,257,881,647
|
I_kwDOCUB6oc5K-cAv
| 17,521
|
Support returning raw logits in `generate`
|
{
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patil-suraj @gante as well",
"I'm personally fine with adding a `output_logits` flag to `generate` since it already has 50+ flags it won't make a difference and it's a useful feature indeed. What do you think @patil-suraj @gante ?",
"I'm cool with it 👍 (and it might be interesting to use as part of PT-TF cross tests)",
"@patil-suraj what do you think? Do you want to open a PR to work on it? ",
"> @patil-suraj what do you think? Do you want to open a PR to work on it?\r\n\r\n@shijie-wu seems willing to open a PR, as mentioned at the end of the issue description.",
"I could open a PR for this. ",
"I'm okay with this, let me know if you need any help @shijie-wu :) ",
"Cool thanks for taking care of it @shijie-wu ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"So, is there any work on it? I did not find a new feature about getting the raw logits.",
"I don't think so -- gently pinging @shijie-wu, who manifested interest in opening a PR :)",
"sorry about the delay! i will resume working on it in the coming week.",
"gently ping @shijie-wu --- any updates on this?",
"@gante should I open a PR? I think the change is fairly minor.",
"@xkianteb sounds good 👍 ",
"is there any update on this...?",
"None that I know of. Open to contributors :)",
"Hey for folks running into this issue: I have a snippet already getting the raw logits. Prob related to your quest as well @xkianteb . It's for RLHF PPO so you don't have to do another forward pass to get the logprobs.\r\n\r\n\r\n```python\r\nimport torch\r\nimport transformers\r\nimport torch.nn.functional as F\r\n\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"gpt2\", padding_side=\"right\")\r\ntokenizer.add_special_tokens({\"pad_token\": \"[PAD]\"})\r\npad_id = tokenizer.pad_token_id\r\npolicy = transformers.AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\npolicy.generation_config.pad_token_id = policy.generation_config.eos_token_id\r\n\r\nquery = torch.tensor([\r\n [pad_id, pad_id, 23073],\r\n [pad_id, pad_id, 234],\r\n])\r\ntemperature = 0.7\r\ncontext_length = query.shape[1]\r\n\r\ndef forward(model, query_responses, tokenizer):\r\n attention_mask = query_responses != tokenizer.pad_token_id\r\n position_ids = attention_mask.cumsum(1) - attention_mask.long()\r\n input_ids = torch.masked_fill(query_responses, ~attention_mask, 0)\r\n return model(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n position_ids=position_ids,\r\n return_dict=True,\r\n output_hidden_states=True,\r\n )\r\n\r\ndef generate_and_return_logits(lm_backbone, queries, tokenizer, generation_config):\r\n \"\"\"generate in a way that does not affect padding tokens\"\"\"\r\n context_length = queries.shape[1]\r\n attention_mask = queries != tokenizer.pad_token_id\r\n input_ids = torch.masked_fill(queries, ~attention_mask, 0)\r\n output = lm_backbone.generate(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n # position_ids=attention_mask.cumsum(1) - attention_mask.long(), # already handled in generation\r\n generation_config=generation_config,\r\n return_dict_in_generate=True,\r\n output_scores=True\r\n )\r\n logits = torch.stack(output.scores, 1)\r\n return torch.cat((queries, output.sequences[:, context_length:]), dim=1), 
logits\r\n\r\ngeneration_config = transformers.GenerationConfig(\r\n max_new_tokens=5,\r\n min_new_tokens=5,\r\n temperature=temperature,\r\n top_k=0.0,\r\n top_p=1.0,\r\n do_sample=True,\r\n)\r\nquery_response, logits = generate_and_return_logits(policy, query, tokenizer, generation_config)\r\nresponse = query_response[:, context_length:]\r\nall_logprob = F.log_softmax(logits, dim=-1)\r\nlogprob = torch.gather(all_logprob, 2, response.unsqueeze(-1)).squeeze(-1)\r\nprint(f\"{response=}\")\r\nprint(f\"{logprob=}\")\r\n\r\noutput = forward(policy, query_response, tokenizer)\r\nlogits = output.logits[:, context_length - 1 : -1]\r\nlogits /= temperature\r\nall_logprob = F.log_softmax(logits, dim=-1)\r\nlogprob = torch.gather(all_logprob, 2, response.unsqueeze(-1)).squeeze(-1)\r\nprint(f\"{logprob=}\")\r\n```\r\n```\r\nresponse=tensor([[ 198, 198, 3, 399, 532],\r\n [ 198, 198, 48412, 4803, 19321]])\r\nlogprob=tensor([[-3.2519e+00, -5.9604e-06, -5.2666e+00, -7.8440e+00, -2.6367e+00],\r\n [-1.5943e+00, -5.6028e-06, -9.8833e+00, -2.3764e+00, -4.8006e+00]])\r\nlogprob=tensor([[-3.2519e+00, -5.9604e-06, -5.2666e+00, -7.8440e+00, -2.6367e+00],\r\n [-1.5943e+00, -5.6028e-06, -9.8833e+00, -2.3764e+00, -4.8006e+00]],\r\n grad_fn=<SqueezeBackward1>)\r\n```",
"(see #28667)"
] | 1,654
| 1,707
| 1,659
|
CONTRIBUTOR
| null |
### Feature request
Support returning raw logits in `generate` by either:
1. creating a new arg that enables return of raw logits
2. or supporting a callback that allows users to collect the raw logits
### Motivation
* Raw logits "would be the most understandable & consistent across generation methods" (@patrickvonplaten)
* For testing, returning raw logits would help "identify which parts get wrong if any test failure occurs" (@ydshieh)
* There's concern about "rampant too many options" (@Narsil), thus I would prefer the second option to support this feature.
* However, the second option still needs a code change to support it, as the user-provided `logits_processor` is appended to a new instance of `LogitsProcessorList`. As a result, users cannot get the raw logits with the current implementation, even with a custom `LogitsProcessor`.
See further discussion in https://github.com/huggingface/transformers/issues/17424
### Your contribution
I could open a PR to reorder how `logits_processor` is merged with the predefined list of `LogitsProcessorList`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17521/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17521/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17520
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17520/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17520/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17520/events
|
https://github.com/huggingface/transformers/issues/17520
| 1,257,422,816
|
I_kwDOCUB6oc5K8r_g
| 17,520
|
Loading BertModel from BertForMaskedLM without randomly initializing weights
|
{
"login": "jmeadows17",
"id": 85583107,
"node_id": "MDQ6VXNlcjg1NTgzMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/85583107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmeadows17",
"html_url": "https://github.com/jmeadows17",
"followers_url": "https://api.github.com/users/jmeadows17/followers",
"following_url": "https://api.github.com/users/jmeadows17/following{/other_user}",
"gists_url": "https://api.github.com/users/jmeadows17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmeadows17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmeadows17/subscriptions",
"organizations_url": "https://api.github.com/users/jmeadows17/orgs",
"repos_url": "https://api.github.com/users/jmeadows17/repos",
"events_url": "https://api.github.com/users/jmeadows17/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmeadows17/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! The model returns several outputs:\r\n\r\n```\r\n last_hidden_state: torch.FloatTensor = None\r\n pooler_output: torch.FloatTensor = None\r\n hidden_states: Optional[Tuple[torch.FloatTensor]] = None\r\n past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None\r\n attentions: Optional[Tuple[torch.FloatTensor]] = None\r\n cross_attentions: Optional[Tuple[torch.FloatTensor]] = None\r\n```\r\n\r\nThe `pooler_output` is going to be random as the pooler weights will be randomly generated, but the rest will not be! In order to compare the two, you could compare the `hidden_states` values, which contain all intermediary values from the word embeddings up until the last transformer layer's output.",
"I suppose more specifically I'm after the following output:\r\n\r\n```\r\nmodel = BertModel('path/to/new_BertForMaskedLM_model')\r\noutput = model(**encoded_input)[1]\r\n```\r\n\r\nI've got other baselines with embeddings extracted like this. \r\n\r\nDo you mind explaining the difference between `hidden_states` output and `model(**encoded_input)[1]` please?",
"`[1]` will take the second value of the output. It may differ across models, while specifying `output.hidden_states` will always return the hidden states.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,657
| 1,657
|
NONE
| null |
I've trained a `BertForMaskedLM` model for a few days, and I've `save_pretrained()` it.
I want to compare encoder vectors from a `BertModel`, with the embeddings from
this trained `BertForMaskedLM`.
However, simply using `BertModel.from_pretrained("path/to/new_BertForMaskedLM_model")`
warns me that "You should probably TRAIN this model....".
It seems that `bert.pooler.dense.weight` and `bert.pooler.dense.bias` have been added and randomly initialized after the LM head was removed. I assume this means the output of this current model will be useless.
What do I need to do to compare `BertForMaskedLM` encoder embeddings (after removing the LM head) with `BertModel` encoder embeddings?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17520/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17519
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17519/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17519/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17519/events
|
https://github.com/huggingface/transformers/pull/17519
| 1,257,363,731
|
PR_kwDOCUB6oc447wUH
| 17,519
|
Clean README in post release job as well.
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
The README cleanup (removing the main in the links to the doc) hasn't been done in the past releases because we do them on branches (so it has been done, but not on main). This PR redoes the cleanup in the `post-release` job (which is always done on main) and updates the instruction in the setup (no need to run `make post-patch` anymore but the post-release job will change the README so `make fix-copies` is necessary).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17519/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17519",
"html_url": "https://github.com/huggingface/transformers/pull/17519",
"diff_url": "https://github.com/huggingface/transformers/pull/17519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17519.patch",
"merged_at": 1654170243000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17518
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17518/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17518/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17518/events
|
https://github.com/huggingface/transformers/pull/17518
| 1,257,356,605
|
PR_kwDOCUB6oc447uqN
| 17,518
|
Fix when Accelerate is not installed
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
As pointed out in #17516, the current `from_pretrained` on the main branch tries to use a function in Accelerate even if it's not necessary. This PR adds a check on the offloading (which requires Accelerate and is enforced at [this line](https://github.com/huggingface/transformers/blob/58fb3c9f98877bf76efb03e376a5c92cf80f7952/src/transformers/modeling_utils.py#L1847) since it requires a `device_map`) before using that function.
Fixes #17516
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17518/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17518",
"html_url": "https://github.com/huggingface/transformers/pull/17518",
"diff_url": "https://github.com/huggingface/transformers/pull/17518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17518.patch",
"merged_at": 1654170341000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17517
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17517/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17517/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17517/events
|
https://github.com/huggingface/transformers/pull/17517
| 1,257,347,689
|
PR_kwDOCUB6oc447sld
| 17,517
|
Check list of models in the main README and sort it
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
In our effort to have the repo consistency check catch all potential mistakes a contributor can make when adding a new model, this PR adds a new script to make sure all added models are also put in the main README. As a bonus, it also enforces that said README is sorted in alphabetical order cause that's prettier.
Some models are not supposed to be in the main README, so there is a new list for those black sheep. Some models have different names in the main README and in the lib; there is a map for that.
MobileBERT and RAG were not in the README and I think they should be, so I added them; the other absent ones do not have a paper from what I checked. I fixed typos (usually in casing) as well.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17517/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17517",
"html_url": "https://github.com/huggingface/transformers/pull/17517",
"diff_url": "https://github.com/huggingface/transformers/pull/17517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17517.patch",
"merged_at": 1654171809000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17516
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17516/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17516/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17516/events
|
https://github.com/huggingface/transformers/issues/17516
| 1,257,035,206
|
I_kwDOCUB6oc5K7NXG
| 17,516
|
NameError: name 'save_offload_index' is not defined when use --model_revision sharded
|
{
"login": "edchengg",
"id": 20430102,
"node_id": "MDQ6VXNlcjIwNDMwMTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/20430102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edchengg",
"html_url": "https://github.com/edchengg",
"followers_url": "https://api.github.com/users/edchengg/followers",
"following_url": "https://api.github.com/users/edchengg/following{/other_user}",
"gists_url": "https://api.github.com/users/edchengg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edchengg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edchengg/subscriptions",
"organizations_url": "https://api.github.com/users/edchengg/orgs",
"repos_url": "https://api.github.com/users/edchengg/repos",
"events_url": "https://api.github.com/users/edchengg/events{/privacy}",
"received_events_url": "https://api.github.com/users/edchengg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@sgugger, would you please kindly look at this - this looks related to the accelerate import - probably not checking if it was imported and using it anyway? \r\n\r\nhttps://github.com/huggingface/transformers/blob/58fb3c9f98877bf76efb03e376a5c92cf80f7952/src/transformers/modeling_utils.py#L75-L80\r\n\r\nhttps://github.com/huggingface/transformers/blob/58fb3c9f98877bf76efb03e376a5c92cf80f7952/src/transformers/modeling_utils.py#L2397\r\n\r\nThank you!",
"Indeed, this needs some gating. Will send a fix shortly, thanks for flagging!"
] | 1,654
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.7
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`export BS=8; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus=4 run_translation.py --model_name_or_path google/mt5-xxl --output_dir output_dir --adam_eps 1e-06 --evaluation_strategy=steps --do_train --do_eval --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_train_samples 500 --max_eval_samples 50 --deepspeed ds_zero3.json --fp16 --model_revision sharded`
error:
```
[INFO|modeling_utils.py:2115] 2022-06-01 16:46:52,833 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
[2022-06-01 16:47:17,303] [INFO] [partition_parameters.py:464:__exit__] finished initializing model with 12.92B parameters
Traceback (most recent call last):
File "run_translation.py", line 654, in <module>
Traceback (most recent call last):
File "run_translation.py", line 654, in <module>
main()
File "run_translation.py", line 377, in main
main()
File "run_translation.py", line 377, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
Traceback (most recent call last):
Traceback (most recent call last):
File "run_translation.py", line 654, in <module>
File "run_translation.py", line 654, in <module>
main()main()
File "run_translation.py", line 377, in main
File "run_translation.py", line 377, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
use_auth_token=True if model_args.use_auth_token else None,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2179, in from_pretrained
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
dtype=torch_dtype,
File "/srv/scratch/ychen3411/anaconda3/envs/unify_srl/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2397, in _load_pretrained_model
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
save_offload_index(offload_index, offload_folder)
NameError: name 'save_offload_index' is not defined
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81406
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81407
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81408
[2022-06-01 16:53:23,841] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 81409
[2022-06-01 16:53:23,841] [ERROR] [launch.py:184:sigkill_handler] ['/srv/scratch/ychen3411/anaconda3/envs/unify_srl/bin/python', '-u', 'run_translation.py', '--local_rank=3', '--model_name_or_path', 'google/mt5-xxl', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--evaluation_strategy=steps', '--do_train', '--do_eval', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '500', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--predict_with_generate', '--sortish_sampler', '--source_lang', 'en', '--target_lang', 'ro', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_prefix', 'translate English to Romanian: ', '--val_max_target_length', '128', '--warmup_steps', '50', '--max_train_samples', '500', '--max_eval_samples', '50', '--deepspeed', 'ds_zero3.json', '--fp16', '--model_revision', 'sharded'] exits with return code = 1`
```
### Expected behavior
```shell
I tried to run mt5-xxl (12b) on 4 gpus with deepspeed zero3 and sharded.
But I got the following error:
NameError: name 'save_offload_index' is not defined
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17516/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17515
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17515/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17515/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17515/events
|
https://github.com/huggingface/transformers/pull/17515
| 1,256,614,044
|
PR_kwDOCUB6oc445CWZ
| 17,515
|
Fix flakey no-trainer test
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes an occasional failure in `test_accelerate_examples::test_run_squad_no_trainer` due to multi-GPU runs sometimes showing a slight drop in accuracy.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17515/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17515",
"html_url": "https://github.com/huggingface/transformers/pull/17515",
"diff_url": "https://github.com/huggingface/transformers/pull/17515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17515.patch",
"merged_at": 1654105249000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17514
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17514/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17514/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17514/events
|
https://github.com/huggingface/transformers/issues/17514
| 1,256,568,556
|
I_kwDOCUB6oc5K5bbs
| 17,514
|
Flax OPT batch generation test
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ArthurZucker I was going to start digging into this, please confirm if thats ok per your. earlier advice, it seems strange because the issue itself is closed so its a bit confusing :)",
"Hi @skanjila & @ArthurZucker. I had commented on this issue recently as I ran into the same problem with the PyTorch version of OPT when performing batch generation with half precision. However, later I found out it's also related to issue #17433 and solved in the PR #17437. I installed the latest version of the library from the `main` branch and I can confirm the issue was fixed.",
"Nice sorry I didn't know it was already fixed! Good job ☺️ thanks both "
] | 1,654
| 1,658
| 1,657
|
COLLABORATOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patil-suraj
The `test_batch_generation` test for Flax OPT (currently commented out) fails.
This is due to improper handling of the padding.
The failing output is the following:
```python
def test_batch_generation(self):
model_id = "facebook/opt-350m"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = FlaxOPTForCausalLM.from_pretrained(model_id)
tokenizer.padding_side = "left"
# use different length sentences to test batching
sentences = [
"Hello, my dog is a little",
"Today, I",
]
inputs = tokenizer(sentences, return_tensors="jax", padding=True)
input_ids = inputs["input_ids"]
outputs = model.generate(input_ids=input_ids, attention_mask=inputs["attention_mask"], trace=False)
inputs_non_padded = tokenizer(sentences[0], return_tensors="jax").input_ids
output_non_padded = model.generate(input_ids=inputs_non_padded)
num_paddings = inputs_non_padded.shape[-1] - inputs["attention_mask"][-1].sum()
inputs_padded = tokenizer(sentences[1], return_tensors="jax").input_ids
output_padded = model.generate(input_ids=inputs_padded, max_length=model.config.max_length - num_paddings)
batch_out_sentence = tokenizer.batch_decode(outputs[0], skip_special_tokens=True)
non_padded_sentence = tokenizer.decode(output_non_padded[0][0], skip_special_tokens=True)
padded_sentence = tokenizer.decode(output_padded[0][0], skip_special_tokens=True)
expected_output_sentence = [
"Hello, my dog is a little bit of a dork.\nI'm a little bit",
"Today, I<s><s><s><s><s><s><s><s><s><s><s><s>"
# TODO fix this test in next PR
# "Today, I was in the middle of a conversation with a friend about the",
]
print(batch_out_sentence, [non_padded_sentence, padded_sentence])
self.assertListEqual(expected_output_sentence, batch_out_sentence)
# TODO outputs will be similar, fix in next PR
self.assertListEqual(batch_out_sentence, [non_padded_sentence, padded_sentence])
```
```python
["Hello, my dog is a little bit of a dork.\nI'm a little bit", 'Today, I<s><s><s><s><s><s><s><s><s><s><s><s>'] ["Hello, my dog is a little bit of a dork.\nI'm a little bit", 'Today, I was in the middle of a conversation with a friend about the']
E AssertionError: Lists differ: ["Hel[62 chars]ay, I<s><s><s><s><s><s><s><s><s><s><s><s>'] != ["Hel[62 chars]ay, I was in the middle of a conversation with[16 chars]the']
E
E First differing element 1:
E 'Today, I<s><s><s><s><s><s><s><s><s><s><s><s>'
E 'Today, I was in the middle of a conversation with a friend about the'
E
E ["Hello, my dog is a little bit of a dork.\nI'm a little bit",
E - 'Today, I<s><s><s><s><s><s><s><s><s><s><s><s>']
E + 'Today, I was in the middle of a conversation with a friend about the']
tests/models/opt/test_modeling_flax_opt.py:406: AssertionError
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just un-comment the test in the main branch
### Expected behavior
```shell
["Hello, my dog is a little bit of a dork.\nI'm a little bit","Today, I was in the middle of a conversation with a friend about the"]
```
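The padding handling at the heart of this bug can be illustrated framework-free: decoder-only models must be padded on the left so the last position of every row is a real token, and generation must honor the matching attention mask. The helper below is purely illustrative (pad id 1 follows the OPT convention; it is not `transformers` code):

```python
# Illustrative sketch (not transformers code): left-pad a batch of token-id
# sequences and build the attention mask that generation must respect.

def left_pad_batch(sequences, pad_id=1):
    """Left-pad variable-length id lists to a common length.

    Returns (input_ids, attention_mask); the mask is 0 on padding and 1 on
    real tokens. Decoder-only models are padded on the LEFT so that the
    final position of every row is a real token when generation continues.
    """
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        n_pad = max_len - len(seq)
        input_ids.append([pad_id] * n_pad + list(seq))
        attention_mask.append([0] * n_pad + [1] * len(seq))
    return input_ids, attention_mask

ids, mask = left_pad_batch([[15496, 11, 616], [8888]])
# ids  == [[15496, 11, 616], [1, 1, 8888]]
# mask == [[1, 1, 1], [0, 0, 1]]
```

If the model ignores the mask over the padded positions, the short row decodes into `<s>` tokens instead of continuing its prompt, which is exactly the failure the test above shows.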
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17514/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17513
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17513/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17513/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17513/events
|
https://github.com/huggingface/transformers/pull/17513
| 1,256,524,326
|
PR_kwDOCUB6oc444tRi
| 17,513
|
Implemented loss for training AudioFrameClassification
|
{
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @MorenoLaQuatra, thank you for the contribution! :hugs: \r\n\r\nAs you may have seen, the `check_repository_consistency` test doesn't pass. That's because `WavLMForAudioClassification` is auto-copied from `Wav2Vec2ForAudioClassification` using this line: https://github.com/huggingface/transformers/blob/80bb27abb4710267a99443736bde44fe64724615/src/transformers/models/wavlm/modeling_wavlm.py#L1534\r\nCould you please move the loss inside `Wav2Vec2ForAudioFrameClassification` and then run `make fix-copies` from the root of your `transformers` directory? Then your implementation will propagate to all of the models that support audio frame classification :slightly_smiling_face: ",
"Thank you @anton-l for pointing it out. I was not aware of the copy mechanism (sorry!). I think now it should be good, I modified `modeling_wav2vec2.py` and run `make fix-copies`. Let me know if something is missing."
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17509 on WavLMForAudioFrameClassification model
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
I think @anton-l or @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17513/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17513",
"html_url": "https://github.com/huggingface/transformers/pull/17513",
"diff_url": "https://github.com/huggingface/transformers/pull/17513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17513.patch",
"merged_at": 1654184402000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17512
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17512/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17512/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17512/events
|
https://github.com/huggingface/transformers/pull/17512
| 1,256,464,580
|
PR_kwDOCUB6oc444flP
| 17,512
|
fix OPT-Flax CI tests
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Fixes the OPT Flax tests.
A `require_flax` decorator was missing. The test is also `slow` so it will not be run.
@lysandre
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17512/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17512",
"html_url": "https://github.com/huggingface/transformers/pull/17512",
"diff_url": "https://github.com/huggingface/transformers/pull/17512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17512.patch",
"merged_at": 1654188766000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17511
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17511/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17511/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17511/events
|
https://github.com/huggingface/transformers/pull/17511
| 1,256,463,508
|
PR_kwDOCUB6oc444fVl
| 17,511
|
Fix `TFRemBertModelTest.test_resize_token_embeddings`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"A quick look on \r\nhttps://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_utils.py#L1315\r\n\r\n~~I couldn't find equivalent logic about `is_input_output_equals` that was in TF.~~\r\n\r\nGuess it is because we use `self.config.tie_word_embeddings` to check in PyTorch.",
"> # What does this PR do?\r\n> Fix `TFRemBertModelTest.test_resize_token_embeddings`.\r\n> \r\n> This method\r\n> \r\n> https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_tf_utils.py#L1449\r\n> \r\n> assumes that `word_embedding_weight` has the same shape as `old_lm_head_decoder`, but this is not the case for `TFRemBertModel`, as it has `input_embedding_size` and `output_embedding_size` in config.\r\n> \r\n> This PR checks the shape before checking the values. If shape is not equal, it means the input/output are not equal.\r\n> \r\n> Fix the CI failure [here](https://github.com/huggingface/transformers/runs/6682139350?check_suite_focus=true)\r\n\r\nSounds like an exception to me! I'm not super well versed in TF word embeddings. Think we don't have the prettiest logic there... \r\n\r\n@Rocketknight1 @gante what would you suggest here? \r\n\r\nAlso cc @sgugger ",
"I confess I don't know much about TF word embeddings, so I'll have to dive deeper to review.\r\n\r\nThere is one thing I know, though -- it uses TF1 code, and it is on my update list 😅 ",
"The embeddings in TF are using very very dark magic. I would refrain from any change in `modeling_tf_utils` (so let RemBERT fail for now) until it has been cleaned up by the TF team :-)",
"OK, so I will close this PR today without merge, if everyone is OK",
"@ydshieh it'd be great if you could open an issue (update TF embeddings) and link this closed PR to it",
"Closed for now with this issue #17540 opened."
] | 1,654
| 1,662
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Fix `TFRemBertModelTest.test_resize_token_embeddings`.
This method
https://github.com/huggingface/transformers/blob/028d4b7c8be2c2fc1146fcc1e9bd253c1a7ea346/src/transformers/modeling_tf_utils.py#L1449
assumes that `word_embedding_weight` has the same shape as `old_lm_head_decoder`, but this is not the case for `TFRemBertModel`, as it has `input_embedding_size` and `output_embedding_size` in config.
This PR checks the shape before checking the values. If shape is not equal, it means the input/output are not equal.
Fix the CI failure [here](https://github.com/huggingface/transformers/runs/6682139350?check_suite_focus=true)
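The fix described above — compare shapes before comparing values — can be sketched in plain Python. This is illustrative only; the real check lives in `modeling_tf_utils.py` and operates on TF tensors:

```python
def embeddings_are_tied(word_embeddings, lm_head_decoder):
    """Return True only if the two weight matrices could be the same tensor.

    Comparing values element-wise blows up when shapes differ (RemBERT has
    distinct input_embedding_size / output_embedding_size in its config),
    so the shape check must come first.
    """
    def shape(matrix):
        return (len(matrix), len(matrix[0]) if matrix else 0)

    if shape(word_embeddings) != shape(lm_head_decoder):
        return False  # different shapes can never be tied weights
    return all(
        a == b
        for row_a, row_b in zip(word_embeddings, lm_head_decoder)
        for a, b in zip(row_a, row_b)
    )

# Input embedding (1 x 2) vs. a (2 x 2) LM-head decoder: not tied.
tied = embeddings_are_tied([[1.0, 2.0]], [[1.0, 2.0], [3.0, 4.0]])
```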
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17511/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17511",
"html_url": "https://github.com/huggingface/transformers/pull/17511",
"diff_url": "https://github.com/huggingface/transformers/pull/17511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17511.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17510
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17510/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17510/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17510/events
|
https://github.com/huggingface/transformers/pull/17510
| 1,256,201,823
|
PR_kwDOCUB6oc443kxS
| 17,510
|
Fix Tapas tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Time for (the fix of) Tapas.
Need to add a few `require_tensorflow_probability` decorators.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17510/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17510",
"html_url": "https://github.com/huggingface/transformers/pull/17510",
"diff_url": "https://github.com/huggingface/transformers/pull/17510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17510.patch",
"merged_at": 1654110092000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17509
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17509/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17509/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17509/events
|
https://github.com/huggingface/transformers/issues/17509
| 1,256,121,625
|
I_kwDOCUB6oc5K3uUZ
| 17,509
|
Finetuning AudioFrameClassification model
|
{
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, if possible could you share your code for how to fine-tune WavLMForAudioFrameClassification for custom dataset.\r\nThank you very much.",
"I can share with you some code I actually use to train an AudioFrameClassification model. It is not intended for Speaker Diarization but the concept behind is the same.\r\n\r\nHere you can find the training loop using HF Trainer: https://github.com/MorenoLaQuatra/ComParE2022_MED/blob/master/train.py \r\n\r\nHere instead you can find the dataset class: https://github.com/MorenoLaQuatra/ComParE2022_MED/blob/master/MosTimestampDataset.py - in this specific case, the `__get_item__` function is much more complex than what you need for the simple diarization case. What you should consider is the return value. For the AudioFrameClassification case, you should use as labels something like the following:\r\n```python\r\nlabels = [\r\n [0, 1, 0, 0], # for each frame it contains the one-hot encoded class, 2nd speaker in this case\r\n [0, 0, 1, 0], # 3rd speaker in this case\r\n […],\r\n […],\r\n]\r\n```\r\n\r\nLet me know if you have any issues."
] | 1,654
| 1,662
| 1,654
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.10.0-051000-generic-x86_64-with-glibc2.32
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten @anton-l
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to finetune a WavLMForAudioFrameClassification model using Trainer and a custom dataset.
It is not my first project with transformers.
When I tried running the training of the model I got this very **strange** warning:
`The following columns in the training set don't have a corresponding argument in WavLMForAudioFrameClassification.forward and have been ignored: labels. If labels are not expected by WavLMForAudioFrameClassification.forward, you can safely ignore this message.`
and then the following error:
```python
File "XXX/lib/python3.9/site-packages/transformers/utils/generic.py", line 220, in __getitem__
return inner_dict[k]
KeyError: 'loss'
```
Looking at the code [here](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/wavlm/modeling_wavlm.py#L1648), it seems that `labels` is not used and the loss is not computed. Is it possible to fine-tune an AudioFrameClassification model? Is `labels` the wrong keyword?
### Expected behavior
```shell
Standard finetuning.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17509/timeline
|
completed
| null | null |
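The one-hot frame-label format shown in the comment on this issue can be sketched in plain Python. The helper below is hypothetical (its name, the `(speaker, start_s, end_s)` segment format, and the `frame_rate` value are all assumptions, not part of any transformers API); it only illustrates turning speaker segments into the per-frame one-hot rows an AudioFrameClassification model expects.

```python
def frame_labels(segments, n_frames, n_speakers, frame_rate=50):
    """Build one-hot frame labels from (speaker, start_s, end_s) segments.

    `frame_rate` is the number of model output frames per second; the real
    value depends on the model's feature extractor, so treat it as an
    assumption here.
    """
    labels = [[0] * n_speakers for _ in range(n_frames)]
    for speaker, start, end in segments:
        # Mark every output frame covered by this segment with the speaker's class.
        for f in range(int(start * frame_rate), min(int(end * frame_rate), n_frames)):
            labels[f][speaker] = 1
    return labels

# Two 40 ms segments, speakers 1 and 2, over 4 output frames:
print(frame_labels([(1, 0.0, 0.04), (2, 0.04, 0.08)], n_frames=4, n_speakers=4))
# → [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0]]
```

This matches the `[0, 1, 0, 0]`-style rows in the comment: one one-hot vector per output frame.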
https://api.github.com/repos/huggingface/transformers/issues/17508
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17508/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17508/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17508/events
|
https://github.com/huggingface/transformers/pull/17508
| 1,255,939,808
|
PR_kwDOCUB6oc442o4d
| 17,508
|
Fix CTRL tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I verified the fix on GCP VM.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Fix CTRL tests that failed due to a GPU memory issue. The fix is the same as in #16881.
Job page (failed test)
https://github.com/huggingface/transformers/runs/6682129553?check_suite_focus=true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17508/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17508",
"html_url": "https://github.com/huggingface/transformers/pull/17508",
"diff_url": "https://github.com/huggingface/transformers/pull/17508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17508.patch",
"merged_at": 1654093643000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17507
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17507/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17507/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17507/events
|
https://github.com/huggingface/transformers/pull/17507
| 1,255,517,470
|
PR_kwDOCUB6oc441IKr
| 17,507
|
Translation/italian: added pipeline_tutorial.mdx [Issue: #17459]
|
{
"login": "nickprock",
"id": 11136646,
"node_id": "MDQ6VXNlcjExMTM2NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickprock",
"html_url": "https://github.com/nickprock",
"followers_url": "https://api.github.com/users/nickprock/followers",
"following_url": "https://api.github.com/users/nickprock/following{/other_user}",
"gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickprock/subscriptions",
"organizations_url": "https://api.github.com/users/nickprock/orgs",
"repos_url": "https://api.github.com/users/nickprock/repos",
"events_url": "https://api.github.com/users/nickprock/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickprock/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @nickprock! Thank you very much for your contribution to the 🤗 Italian documentation! 🌈\r\nCan I ask you if you can fix a couple of things?\r\n\r\n- \"Usare uno specifico tokenizer or modello\" --> \"Usare uno specifico tokenizer o modello\"\r\n- \"La mansione text-generation ha un [~generation_utils.GenerationMixin.generate] metodo\" --> La mansione text-generation ha un metodo [~generation_utils.GenerationMixin.generate]\r\n- Here seeing the formatting I think there are missing quotation marks \"[AutoTokenizer']. Ad esempio, carica la classe [AutoModelForCausalLM`]\", I think the problem is also there in the English doc, if you can fix it for the ita one and then I will fix it for the eng version\r\n- \"Trova un [audio classification](https://huggingface.co/models?pipeline_tag=audio-classification) modello per eseguire emotion recognition\" --> \"Trova un modello per la [classificazione audio](https://huggingface.co/models?pipeline_tag=audio-classification) per eseguire un compito di riconoscimento automatico delle emozioni\"\r\n\r\nThanks! 🚀",
"Thanks @mfumanelli,\r\nI'll correct and resubmit.\r\nOne question: how do you translate \"task\"? I used \"compito\", \"mansione\", \"attività\".",
"Thanks for the great PR @nickprock! And @mfumanelli for the amazing review 🚀",
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes @nickprock, I agree with you, I also use \"compito.\" It would be more natural for me to leave the English word \"task\" directly, so I also had this doubt. In my opinion, \"compito\", \"mansione\" and \"attività\" are actually the best words with which to translate the word \"task\". 🤗\r\n\r\nbtw, looks perfect to me now! \r\n",
"@nickprock grazie for the great PR! @mfumanelli grazie for the detailed review! \r\n\r\nI am learning Italian with your work! \r\n\r\n@sgugger LGTM :)\r\n\r\n*PR related to #17459.",
"Sorry, there was a little problem. I absentmindedly pushed my updates to the same branch. I undid it, but a check fails.",
"Thanks a lot for this new translation (the failure in Build PR doc is spurious here)."
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
- added italian translation of pipeline_tutorial.mdx
- updated _toctree.yml
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17507/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17507",
"html_url": "https://github.com/huggingface/transformers/pull/17507",
"diff_url": "https://github.com/huggingface/transformers/pull/17507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17507.patch",
"merged_at": 1654526121000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17506
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17506/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17506/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17506/events
|
https://github.com/huggingface/transformers/pull/17506
| 1,255,457,591
|
PR_kwDOCUB6oc4406kl
| 17,506
|
Fix LayoutXLMProcessorTest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Fix `LayoutXLMProcessorTest` test failure found in
https://github.com/huggingface/transformers/runs/6663596212?check_suite_focus=true
```
E ValueError: Calling LayoutXLMTokenizerFast.from_pretrained() with the path to a single file or url is not supported for this tokenizer. Use a model identifier or the path to a directory instead.
```
I just used a tiny model's Hub name to fix it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17506/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17506",
"html_url": "https://github.com/huggingface/transformers/pull/17506",
"diff_url": "https://github.com/huggingface/transformers/pull/17506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17506.patch",
"merged_at": 1654093597000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17505
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17505/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17505/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17505/events
|
https://github.com/huggingface/transformers/issues/17505
| 1,255,205,189
|
I_kwDOCUB6oc5K0OlF
| 17,505
|
Training large huggingface models on Azure with CUDA? [OPT]
|
{
"login": "Leli1024",
"id": 33652168,
"node_id": "MDQ6VXNlcjMzNjUyMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/33652168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leli1024",
"html_url": "https://github.com/Leli1024",
"followers_url": "https://api.github.com/users/Leli1024/followers",
"following_url": "https://api.github.com/users/Leli1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Leli1024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leli1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leli1024/subscriptions",
"organizations_url": "https://api.github.com/users/Leli1024/orgs",
"repos_url": "https://api.github.com/users/Leli1024/repos",
"events_url": "https://api.github.com/users/Leli1024/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leli1024/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, for training such large models, a Tesla K80 won't suffice. You typically need # of parameters * 18 in terms of bytes of RAM when fine-tuning. So for the 30 billion parameter model, it's 30 billion * 18 = 540 GB. It's because you need 18 bytes per parameter to store not only the parameter itself, but also its gradient and optimizer states.\r\n\r\nHowever there are tricks like mixed precision and frameworks like DeepSpeed to fit giant models, you can read more in our guide here: https://huggingface.co/docs/transformers/performance"
] | 1,654
| 1,654
| 1,654
|
NONE
| null |
I am trying to fine-tune the 1.3B and 30B OPT variants, but each time I try to train them I get the following error:
```
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.17 GiB total capacity; 1.95 GiB already allocated; 8.50 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I have tried reducing the batch size to 1, but short of running the model on the CPU, which would take far too much time, it crashes every time. If it's relevant, I am using an Azure environment with a Tesla K80 accelerator. Has anyone managed to train or use these models on a GPU?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17505/timeline
|
completed
| null | null |
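The rule of thumb from the reply on this issue (~18 bytes of RAM per parameter when fine-tuning, covering the parameter plus its gradient and optimizer states) can be turned into a quick back-of-the-envelope estimator. This is only a sketch of that heuristic, not a precise model: actual usage also depends on precision, optimizer, batch size, and activation memory.

```python
def finetune_memory_gb(n_params, bytes_per_param=18):
    """Rough fine-tuning memory estimate: parameter + gradient + Adam states."""
    return n_params * bytes_per_param / 1e9

# OPT variants from the issue; a Tesla K80 (~12 GB) cannot hold either.
for n_params in (1.3e9, 30e9):
    print(f"{n_params / 1e9:g}B params -> ~{finetune_memory_gb(n_params):.0f} GB")
```

This prints roughly 23 GB for the 1.3B model and 540 GB for the 30B model, consistent with the 540 GB figure in the reply.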
https://api.github.com/repos/huggingface/transformers/issues/17504
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17504/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17504/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17504/events
|
https://github.com/huggingface/transformers/issues/17504
| 1,254,691,304
|
I_kwDOCUB6oc5KyRHo
| 17,504
|
bad_words_ids not working
|
{
"login": "Jack000",
"id": 2636509,
"node_id": "MDQ6VXNlcjI2MzY1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2636509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jack000",
"html_url": "https://github.com/Jack000",
"followers_url": "https://api.github.com/users/Jack000/followers",
"following_url": "https://api.github.com/users/Jack000/following{/other_user}",
"gists_url": "https://api.github.com/users/Jack000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jack000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jack000/subscriptions",
"organizations_url": "https://api.github.com/users/Jack000/orgs",
"repos_url": "https://api.github.com/users/Jack000/repos",
"events_url": "https://api.github.com/users/Jack000/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jack000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I wrote a function to enumerate all possible permutations of \" Badword\", but it quickly blows up with hundreds of permutations like [\" B\",\"a\",\"d\",\"w\",\"o\",\"r\",\"d\"]. Limiting the token length works ok, but still doesn't prevent generation of variations like [\" Bad\",\"words\"]\r\n\r\nI think this overall approach just doesn't really work for preventing the generation of bad_words. Don't know if there's a better solution than generate + filter.\r\n\r\n```\r\ndef get_bad_words_ids(tokenizer, bad_words, min_strlen=2):\r\n vocab_tokens = tokenizer.get_vocab()\r\n vocab = {}\r\n\r\n for token in vocab_tokens:\r\n vocab[tokenizer.convert_tokens_to_string([token])] = token\r\n\r\n results = []\r\n\r\n for bad_word in bad_words:\r\n confirmed_tokens = []\r\n possible_tokens = []\r\n for token in vocab:\r\n if bad_word == token:\r\n confirmed_tokens.append([token])\r\n elif bad_word.startswith(token):\r\n possible_tokens.append([token])\r\n while len(possible_tokens) > 0:\r\n new_possible_tokens = []\r\n for prefixes in possible_tokens:\r\n prefix = ''.join(prefixes)\r\n for token in vocab:\r\n if len(token) < min_strlen:\r\n continue\r\n if bad_word == prefix + token:\r\n found_prefix = prefixes.copy()\r\n found_prefix.append(token)\r\n confirmed_tokens.append(found_prefix)\r\n elif bad_word.startswith(prefix + token):\r\n found_prefix = prefixes.copy()\r\n found_prefix.append(token)\r\n new_possible_tokens.append(found_prefix)\r\n possible_tokens = new_possible_tokens\r\n results += confirmed_tokens\r\n\r\n ids = []\r\n for tokens in results:\r\n gtokens = []\r\n for token in tokens:\r\n gtokens.append(vocab[token])\r\n ids.append(tokenizer.convert_tokens_to_ids(gtokens))\r\n return ids\r\n```",
"Hey @Jack000 👋 It is not clear from your description -- have you tried using the tokenizer with the instructions given in the `NoBadWordsLogitsProcessor` [docs](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.NoBadWordsLogitsProcessor.bad_words_ids)?\r\n\r\n[\"...in order to get the token ids of the words that should not appear in the generated text, use `tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids`.\"]",
"That's what I did. This will consistently tokenize [\" Badword\"] as [11908] but during inference the model will generate [7286, 1754] which is [\" Bad\", \"word\"]\r\n\r\nas I mentioned above I wrote a function to enumerate all possible ways of combining tokens to form \"Badword\", but the problem is that it doesn't work for variations like \"Badwords\" and \"Badwordo\". Extending the permutations to include these variations results in thousands of permutations per bad_word and doesn't really scale.",
"Okay, I think I got your issue :) When you add a word to `bad_word_ids`, you would like to have its sub-words and/or related words banned as well, correct? \r\n\r\nThere are a few things worth mentioning here:\r\n1. It is intentional that sub-words do NOT get banned. Think about the word \"doctorate\", which is very different from two of its subwords (\"doctor\" and \"ate\"). Banning a word doesn't imply banning the subwords in most scenarios, and our implementation has to be flexible in that regard.\r\n2. When a long word gets broken into more than a token, the first token has a prefix space and will be different from the corresponding token without the space. This is to avoid banning valid sequences that would contain the same characters. Example: if you ban \"doctorate\", \"doctor ate\" is a valid sequence. This is because the banned tokens will be \" doctor\" and \"ate\", not \" doctor\" and \" ate\" (notice the spaces).\r\n3. Banned tokens resulting from a long word are never considered in isolation. Example: if you ban \"doctorate\", you can still generate \" doctor\" and \"ate\" in isolation, \"the doctor wants to dictate\" is a valid sequence.\r\n4. I've tried running the \"Badword\" example you mentioned, and I do get two tokens (one for \" Bad\", the other for \"word\"). \r\n\r\nYou can see an example for a few cases mentioned above [here](https://colab.research.google.com/drive/1ECYuKjDt76vw7uQ-5nRaUPjU2oG-eFBt#scrollTo=RdMoVNcbwhvZ).\r\n\r\nThe solution for banning subwords is to explicitly add them to the list of `bad_word_ids`. @patrickvonplaten have you seen tools to generate sub-words and/or derived words from a list of candidate words?",
"ah the actual bad word I was trying to ban was [\" Hitler\"].\r\n\r\nI do understand how the bad_words_ids feature works, but I guess my issue is that I don't want the word \"Hitler\" generated under any circumstances subwords or otherwise. As you can see I did implement a function to enumerate all possible ways tokens can be combined to form \"Hitler\" to add to bad_words_ids, but if I include \"Hitlers\" and other such variations the possible permutations will number in the thousands.\r\n\r\nanyways, I don't see a simple solution to this but the function I wrote in addition to filtering afterwards works ok for now.",
"> I do understand how the bad_words_ids feature works\r\n\r\nMy apologies :D Better safe than sorry, in case there was some confusion about the intended behavior.",
"@patil-suraj could you maybe also take a look here? Otherwise happy to dive deeper if necessary",
"Sorry could I ping @ArthurZucker or @gante on this one maybe? :-) ",
"Hey! I looked at the problem a bit, and as you mentioned, the permutations would be a bit too problematic. \n\nWe can probably work this out by banning a normalized string instead. Rather than checking if [Bad_id,Word_id] was generated, we can decode to a string, normalize it, and remove the bad word. This is more efficient but might not have its place in the generate function, as the tokenizer is not available. But it probably makes sense to have a custom logits processor that is initialized with the tokenizer. Let me ask around 🤗\n\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,670
| 1,670
|
NONE
| null |
### Feature request
I'm using gpt2 for text generation with a word blacklist and noticed that some words on the blacklist were still being generated.
I found that even though the word ["badword"] would not be generated, it would still generate ["bad", "word"] in two tokens.
An example of this is [11908] vs. [7286, 1754].
This seems to be a different issue from the leading-space issue and the padding issue. I think I could get around it by adding the split tokens to the blacklist, but I can't seem to get the tokenizer to split the string to produce [7286, 1754]. Is there a way to get all possible permutations of a string to add to the blacklist?
### Motivation
Without this feature, bad_words_ids doesn't work reliably most of the time.
### Your contribution
Not familiar with the tokenizer code unfortunately
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17504/timeline
|
completed
| null | null |
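The enumeration approach discussed in this thread can be sketched with a toy vocabulary. The function below is a hypothetical, simplified version of the enumeration function in the first comment (no real tokenizer involved); it also shows why the number of results blows up as the vocabulary grows, which is the scaling problem the thread runs into.

```python
def all_tokenizations(word, vocab):
    """Return every way `word` can be spelled as a sequence of vocab tokens.

    `vocab` maps token string -> token id. With a real ~50k-token vocabulary
    the number of results explodes combinatorially.
    """
    results = []

    def extend(prefix_tokens, rest):
        if not rest:
            # The prefix tokens exactly spell the word: record their ids.
            results.append([vocab[t] for t in prefix_tokens])
            return
        for token in vocab:
            if rest.startswith(token):
                extend(prefix_tokens + [token], rest[len(token):])

    extend([], word)
    return results

toy_vocab = {"bad": 0, "word": 1, "badword": 2, "b": 3, "ad": 4}
print(sorted(all_tokenizations("badword", toy_vocab)))
# → [[0, 1], [2], [3, 4, 1]]
```

Each result is one banned sequence that would have to be added to `bad_words_ids`; variants like "badwords" multiply the count further, as noted above.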
https://api.github.com/repos/huggingface/transformers/issues/17503
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17503/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17503/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17503/events
|
https://github.com/huggingface/transformers/pull/17503
| 1,254,537,464
|
PR_kwDOCUB6oc44xnOa
| 17,503
|
Fix MP and CPU offload tests for Funnel and GPT-Neo
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
This PR should fix the last failing GPU/multi-GPU tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17503/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17503",
"html_url": "https://github.com/huggingface/transformers/pull/17503",
"diff_url": "https://github.com/huggingface/transformers/pull/17503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17503.patch",
"merged_at": 1654091980000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17502
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17502/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17502/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17502/events
|
https://github.com/huggingface/transformers/pull/17502
| 1,254,315,616
|
PR_kwDOCUB6oc44w6Ul
| 17,502
|
support ONNX export of XDropout in deberta{,_v2} and sew_d
|
{
"login": "garymm",
"id": 421339,
"node_id": "MDQ6VXNlcjQyMTMzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/421339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garymm",
"html_url": "https://github.com/garymm",
"followers_url": "https://api.github.com/users/garymm/followers",
"following_url": "https://api.github.com/users/garymm/following{/other_user}",
"gists_url": "https://api.github.com/users/garymm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garymm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garymm/subscriptions",
"organizations_url": "https://api.github.com/users/garymm/orgs",
"repos_url": "https://api.github.com/users/garymm/repos",
"events_url": "https://api.github.com/users/garymm/events{/privacy}",
"received_events_url": "https://api.github.com/users/garymm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@michaelbenayoun thanks for taking a look. I added a test.\r\nUnfortunately I couldn't use the existing testing style because none of the affected models have an ONNX config (that's tracked by https://github.com/huggingface/transformers/issues/16308).",
"@michaelbenayoun bump, PTAL",
"@michaelbenayoun bump",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@garymm any update on this? ",
"@michaelbenayoun @lewtun do you have any updates on this PR?",
"Thank you for your contribution"
] | 1,654
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Enables `torch.onnx.export` of the `StableDropout` module in training mode.
In training mode, the `XDropout` `torch.autograd.Function` is used. This change
adds a `symbolic` function to `XDropout` that produces an ONNX graph
equivalent to its `forward` function.
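For context, the math the `symbolic` graph has to reproduce is inverted dropout: each activation is dropped with probability p and survivors are scaled by 1/(1 - p) so the expected output equals the input. A minimal pure-Python sketch of those semantics (a hypothetical helper for illustration, not the library code):

```python
import random

def inverted_dropout(values, p, rng=None):
    """Drop each value with probability p; scale survivors by 1/(1-p)
    so the expected output equals the input."""
    if p == 0.0:
        return list(values)  # nothing to drop: the graph reduces to identity
    rng = rng or random.Random()
    scale = 1.0 / (1.0 - p)
    return [v * scale if rng.random() >= p else 0.0 for v in values]

print(inverted_dropout([1.0, 2.0], 0.0))  # [1.0, 2.0]
```

The exported ONNX graph has to encode exactly this mask-and-scale computation so that training-mode traces match `XDropout.forward`.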
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
No. LMK if you want me to open an issue.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
No, LMK if there's some doc that I should update.
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17502/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17502/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17502",
"html_url": "https://github.com/huggingface/transformers/pull/17502",
"diff_url": "https://github.com/huggingface/transformers/pull/17502.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17502.patch",
"merged_at": 1659522825000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17501
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17501/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17501/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17501/events
|
https://github.com/huggingface/transformers/pull/17501
| 1,254,198,363
|
PR_kwDOCUB6oc44wkvL
| 17,501
|
Refactor to inherit from nn.Module instead of nn.ModuleList
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Let us know when you'd like for us to review! :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,654
| 1,656
| 1,656
|
COLLABORATOR
| null |
# What does this PR do?
Refactors classes inheriting from `nn.ModuleList` to inherit from `nn.Module` instead. This makes debugging and inspecting layer outputs easier.
See also: https://github.com/huggingface/transformers/pull/17493
The following was run to check the weight loading in:
```
from transformers import BeitForImageClassification, Data2VecVisionForImageClassification
print("\nLoading in Data2VecVision model...")
model_checkpoint = "facebook/data2vec-vision-base"
model = Data2VecVisionForImageClassification.from_pretrained(model_checkpoint)
print("\nLoading in BeiT model...")
model_checkpoint = "microsoft/beit-base-patch16-224-pt22k"
model = BeitForImageClassification.from_pretrained(model_checkpoint)
```
Output:
```
Loading in Data2VecVision model...
/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of Data2VecVisionForImageClassification were not initialized from the model checkpoint at facebook/data2vec-vision-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading in BeiT model...
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k were not used when initializing BeitForImageClassification: ['layernorm.bias', 'layernorm.weight', 'lm_head.weight', 'lm_head.bias']
- This IS expected if you are initializing BeitForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BeitForImageClassification were not initialized from the model checkpoint at microsoft/beit-base-patch16-224-pt22k and are newly initialized: ['beit.pooler.layernorm.bias', 'classifier.bias', 'classifier.weight', 'beit.pooler.layernorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Running on `main` we see the same weights are newly initialized:
```
Loading in Data2VecVision model...
/Users/aroberts/.virtualenvs/tenv/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/distiller/project/pytorch/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of Data2VecVisionForImageClassification were not initialized from the model checkpoint at facebook/data2vec-vision-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading in BeiT model...
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k were not used when initializing BeitForImageClassification: ['layernorm.bias', 'lm_head.bias', 'lm_head.weight', 'layernorm.weight']
- This IS expected if you are initializing BeitForImageClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitForImageClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BeitForImageClassification were not initialized from the model checkpoint at microsoft/beit-base-patch16-224-pt22k and are newly initialized: ['beit.pooler.layernorm.weight', 'classifier.bias', 'classifier.weight', 'beit.pooler.layernorm.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17501/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17501",
"html_url": "https://github.com/huggingface/transformers/pull/17501",
"diff_url": "https://github.com/huggingface/transformers/pull/17501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17501.patch",
"merged_at": 1656929022000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17500
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17500/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17500/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17500/events
|
https://github.com/huggingface/transformers/pull/17500
| 1,254,134,398
|
PR_kwDOCUB6oc44wXKQ
| 17,500
|
Fix `tokenizer` type annotation in `pipeline(...)`
|
{
"login": "willfrey",
"id": 13784361,
"node_id": "MDQ6VXNlcjEzNzg0MzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13784361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willfrey",
"html_url": "https://github.com/willfrey",
"followers_url": "https://api.github.com/users/willfrey/followers",
"following_url": "https://api.github.com/users/willfrey/following{/other_user}",
"gists_url": "https://api.github.com/users/willfrey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willfrey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willfrey/subscriptions",
"organizations_url": "https://api.github.com/users/willfrey/orgs",
"repos_url": "https://api.github.com/users/willfrey/repos",
"events_url": "https://api.github.com/users/willfrey/events{/privacy}",
"received_events_url": "https://api.github.com/users/willfrey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
I think you mean to accept either an instance of `PreTrainedTokenizer` or `PreTrainedTokenizerFast` inside of the `pipeline(...)` factory function, if the `tokenizer` argument isn't a `str`.
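The change amounts to widening the annotation to a `Union`. A minimal sketch of the pattern with stand-in classes (the class bodies and return values are illustrative only, not the `transformers` implementation):

```python
from typing import Optional, Union

class PreTrainedTokenizer:        # stand-in for the slow base class
    pass

class PreTrainedTokenizerFast:    # stand-in for the fast base class
    pass

def pipeline(
    tokenizer: Optional[Union[str, PreTrainedTokenizer, PreTrainedTokenizerFast]] = None,
) -> str:
    """Accept a checkpoint name or an already-instantiated tokenizer of either flavour."""
    if isinstance(tokenizer, str):
        return f"load-from-hub:{tokenizer}"
    if tokenizer is not None:
        return type(tokenizer).__name__
    return "default"

print(pipeline("gpt2"))  # load-from-hub:gpt2
```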
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17500/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17500",
"html_url": "https://github.com/huggingface/transformers/pull/17500",
"diff_url": "https://github.com/huggingface/transformers/pull/17500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17500.patch",
"merged_at": 1654087408000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17499
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17499/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17499/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17499/events
|
https://github.com/huggingface/transformers/pull/17499
| 1,254,126,904
|
PR_kwDOCUB6oc44wVip
| 17,499
|
Debug LukeForMaskedLM
|
{
"login": "ryokan0123",
"id": 17979572,
"node_id": "MDQ6VXNlcjE3OTc5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17979572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryokan0123",
"html_url": "https://github.com/ryokan0123",
"followers_url": "https://api.github.com/users/ryokan0123/followers",
"following_url": "https://api.github.com/users/ryokan0123/following{/other_user}",
"gists_url": "https://api.github.com/users/ryokan0123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryokan0123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryokan0123/subscriptions",
"organizations_url": "https://api.github.com/users/ryokan0123/orgs",
"repos_url": "https://api.github.com/users/ryokan0123/repos",
"events_url": "https://api.github.com/users/ryokan0123/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryokan0123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix some undesirable behaviors of `LukeForMaskedLM`.
#### 1. Make `LukeForMaskedLM` accept inputs without entity ids
**Before❌:**
```
>>> from transformers import LukeForMaskedLM, MLukeTokenizer
>>> import torch
>>> text = "test string"
>>> model = LukeForMaskedLM.from_pretrained('studio-ousia/luke-base')
>>> tokenizer = MLukeTokenizer.from_pretrained('studio-ousia/luke-base')
>>> encoding = tokenizer(text, return_tensors="pt")
>>> outputs = model(**encoding)
TypeError: linear(): argument 'input' (position 1) must be Tensor, not NoneType
```
I have fixed this by making entity inputs optional in the forward function.
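In plain terms, the fix guards the entity branch on `None`. A toy sketch of the control flow (plain Python with hypothetical names, not the actual layer code):

```python
from typing import Optional, Sequence

def forward(input_ids: Sequence[int],
            entity_ids: Optional[Sequence[int]] = None) -> dict:
    """Always run the word branch; only run the entity branch when ids are given."""
    out = {"word_len": len(input_ids)}
    # Previously this branch ran unconditionally and crashed on None inputs.
    if entity_ids is not None:
        out["entity_len"] = len(entity_ids)
    return out

print(forward([0, 1, 2]))    # {'word_len': 3}
print(forward([0, 1], [7]))  # {'word_len': 2, 'entity_len': 1}
```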
#### 2. Make `LukeForMaskedLM` instantiable from `AutoModelForMaskedLM`
**Before❌:**
```
>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained("studio-ousia/luke-base")
ValueError: Unrecognized configuration class <class 'transformers.models.luke.configuration_luke.LukeConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
```
I have fixed this by adding `("luke", "LukeForMaskedLM")` to `MODEL_FOR_MASKED_LM_MAPPING_NAMES`.
## Who can review?
@NielsRogge, could you check this PR? Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17499/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17499",
"html_url": "https://github.com/huggingface/transformers/pull/17499",
"diff_url": "https://github.com/huggingface/transformers/pull/17499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17499.patch",
"merged_at": 1654092186000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17498
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17498/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17498/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17498/events
|
https://github.com/huggingface/transformers/pull/17498
| 1,254,115,387
|
PR_kwDOCUB6oc44wTjK
| 17,498
|
[GPT2Tokenizer] Raise ValueError for Fast GPT2Tokenizer with bos token for now
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @thomasw21 @SaulLu @mishig25 @ArthurZucker ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @patrickvonplaten ! It seems really good to have this error raised while figuring out how to do the processor.",
"> Also make it's worth doing a patch release here actually (not sure maybe not super important though)\r\n\r\nWe're likely going to release tomorrow or as soon as BLOOM is merged, so this will be included in it!",
"Hello! when using the GPT2TokenizerFast for the OPT model, I get a warning that the fast version of the tokenizer is working incorrectly. I'm wondering what is the status of this problem, or is it okay to use the fast tokenizer now?\r\n",
"I think the fast OPTTokenizer should work now after it has been merged to `tokenizers` no? cc @Narsil ",
"Actually no need for any modifications or `tokenizers` specific version. Everything should have been fixed within `transformers`. \r\n\r\nBut I am not sure about every single OPT model on the hub, I didn't make modifications there (nor am I sure how I should do that).\r\nSo if we are using an old incorrect tokenizer, the warning might still be valid."
] | 1,654
| 1,667
| 1,654
|
MEMBER
| null |
# What does this PR do?
@sgugger - As discussed offline, the best fix here is to make sure GPT2TokenizerFast works correctly, but that depends on https://github.com/huggingface/tokenizers/pull/1005 and will probably take some time. I think it's important that we raise a ValueError in the meantime though, as otherwise users would silently end up with no BOS token added for OPT, which I'd like to avoid.
The error message should be clear enough for the user to understand how to change.
@LysandreJik @sgugger also, maybe it's worth doing a patch release here (not sure, maybe not super important though)
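The guard itself is simple: detect the unsupported combination and fail loudly rather than silently dropping the BOS token. A schematic sketch (the condition and message are illustrative, not the exact code added in this PR):

```python
def check_bos_supported(is_fast: bool, add_bos_token: bool) -> None:
    """Refuse the combination the fast tokenizer cannot handle yet."""
    if is_fast and add_bos_token:
        raise ValueError(
            "The fast GPT2 tokenizer does not support add_bos_token yet; "
            "load the slow tokenizer with use_fast=False instead."
        )

check_bos_supported(is_fast=True, add_bos_token=False)  # fine: no BOS requested
```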
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17498/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17498",
"html_url": "https://github.com/huggingface/transformers/pull/17498",
"diff_url": "https://github.com/huggingface/transformers/pull/17498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17498.patch",
"merged_at": 1654020409000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17497
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17497/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17497/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17497/events
|
https://github.com/huggingface/transformers/pull/17497
| 1,254,104,613
|
PR_kwDOCUB6oc44wRTn
| 17,497
|
CLI: tool to convert PT into TF weights and open hub PR
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
MEMBER
| null |
# What does this PR do?
This PR adds a CLI to convert PT weights into TF weights, validate them, and (optionally) open a PR. The open PR part depends on https://github.com/huggingface/huggingface_hub/pull/884, and is wrapped in a try/except for the time being.
Here are 3 PRs open with the tool (`transformers-cli pt-to-tf --model-name [model-name] --local-dir [local-dir] --open-pr`):
1. Text modality: https://huggingface.co/joaogante/test_text/discussions/1
2. Audio modality: https://huggingface.co/joaogante/test_audio/discussions/1
3. Image modality: https://huggingface.co/joaogante/test_img/discussions/1
This tool can also be used to check existing weights. Sadly, there is no programmatic way to check the weights in existing hub PRs; they have to be downloaded manually.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17497/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17497",
"html_url": "https://github.com/huggingface/transformers/pull/17497",
"diff_url": "https://github.com/huggingface/transformers/pull/17497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17497.patch",
"merged_at": 1654105928000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17496
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17496/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17496/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17496/events
|
https://github.com/huggingface/transformers/pull/17496
| 1,254,090,612
|
PR_kwDOCUB6oc44wOR_
| 17,496
|
Exclude Databricks from notebook env
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
This PR makes sure `is_in_notebook()` returns `False` when running inside Databricks.
Fixes #17406
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17496/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17496",
"html_url": "https://github.com/huggingface/transformers/pull/17496",
"diff_url": "https://github.com/huggingface/transformers/pull/17496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17496.patch",
"merged_at": 1654088411000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17495
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17495/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17495/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17495/events
|
https://github.com/huggingface/transformers/pull/17495
| 1,254,086,216
|
PR_kwDOCUB6oc44wNVs
| 17,495
|
has_attentions - consistent test skipping logic and tf tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge, @ydshieh, @patrickvonplaten adding you to cover: git blame ownership, test ownership, some git blame and general transformers ownership. I hope that's OK. Feel free to remove yourselves and/or add others you think are more suitable. ",
"Thanks a lot for cleaning this up @amyeroberts ! ",
"@NielsRogge I decide not to remove `has_attentions` in this PR, and would like to focus on making the PT/TF test consistency. If that's OK with you, I'll go ahead and merge. "
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Two linked changes regarding the control of tests being run:
**PyTorch and TF consistency**: `has_attentions` flag is used in `test_modeling_common.py` to control some of the logic in tests that are run, only applying them if the model has attention(s). This PR adds equivalent logic to tests in `test_modeling_tf_common.py`
**Skipping tests consistency**: `unittest.skip` is used to skip entire tests if they cannot/do not apply to that model e.g. for [input embedding test in ConvNext](https://github.com/huggingface/transformers/blob/f394a2a50d8729cd1ca9b368e330ec50664c3292/tests/models/convnext/test_modeling_convnext.py#L161). For `test_attention_outputs` this was controlled with an if-else statement. This was changed to be controlled with `unittest.skip` instead for two reasons: 1) consistency with the rest of the code base 2) prevent confusing pytest outputs i.e. models without attention are shown to skip `test_attention_outputs` instead of passing it.
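The difference between the two skipping styles can be sketched as follows. This is an illustrative toy test case, not the actual transformers test classes; the `has_attentions` flag and test names are stand-ins for the ones in `test_modeling_common.py`:

```python
import unittest


class AttentionFreeModelTest(unittest.TestCase):
    """Toy sketch contrasting the two skipping styles discussed above."""

    has_attentions = False  # hypothetical flag mirroring the tester mixin

    # old style: early return inside the test -> pytest reports a "pass"
    def test_attention_outputs_if_else(self):
        if not self.has_attentions:
            return  # nothing was checked, yet the test shows as passing
        self.fail("would compare attention outputs here")

    # new style: unittest.skip -> the report honestly shows a skip
    @unittest.skip("model does not output attentions")
    def test_attention_outputs_skip(self):
        self.fail("never runs")


suite = unittest.TestLoader().loadTestsFromTestCase(AttentionFreeModelTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

With the `unittest.skip` style, the run summary counts one skipped test rather than a misleading pass.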
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17495/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17495/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17495",
"html_url": "https://github.com/huggingface/transformers/pull/17495",
"diff_url": "https://github.com/huggingface/transformers/pull/17495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17495.patch",
"merged_at": 1654761003000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17494
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17494/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17494/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17494/events
|
https://github.com/huggingface/transformers/pull/17494
| 1,254,077,357
|
PR_kwDOCUB6oc44wLb8
| 17,494
|
Fix TF _generate
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,662
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
To be added
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17494/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17494",
"html_url": "https://github.com/huggingface/transformers/pull/17494",
"diff_url": "https://github.com/huggingface/transformers/pull/17494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17494.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17493
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17493/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17493/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17493/events
|
https://github.com/huggingface/transformers/pull/17493
| 1,253,972,666
|
PR_kwDOCUB6oc44v085
| 17,493
|
Refactor classes to inherit from nn.Module instead of nn.Sequential
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Refactors classes that inherit from `nn.Sequential` to inherit from `nn.Module` instead. This is to make the code easier to debug and inspect.
Changes:
* Explicit `forward` method implemented for classes
* Iterating over layers and registering them to the module using `add_module(str_ind, layer)` in `__init__`. This provides backwards compatibility, as the submodules will be named according to their position in the call stack, as in `nn.Sequential`, which is needed to load the same checkpoints.
Note: This does not include other possible `nn.Sequential` refactoring within modules e.g. [lxmert](https://github.com/huggingface/transformers/blob/6ee1474b67b088829555364a14ebfb45e661fac4/src/transformers/models/lxmert/modeling_lxmert.py#L721), or inheriting from `ModuleList` e.g. in [Beit](https://github.com/huggingface/transformers/blob/2ef09ecfb8afb6624aab87afdad9fe72030397af/src/transformers/models/beit/modeling_beit.py#L962). These are more involved changes and should be addressed in separate PRs.
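The refactoring pattern can be sketched as below. This is a minimal toy example (the class name and layer shapes are made up, not from the PR): an `nn.Module` with an explicit `forward`, whose submodules are registered with `add_module(str(idx), layer)` so its `state_dict` keys match what `nn.Sequential` would have produced, keeping old checkpoints loadable:

```python
import torch
from torch import nn


class IndexedMLP(nn.Module):
    """nn.Module with an explicit forward, using nn.Sequential-style
    string-index submodule names for checkpoint compatibility."""

    def __init__(self, num_layers: int = 3, hidden: int = 8):
        super().__init__()
        for idx in range(num_layers):
            # same names ("0", "1", ...) that nn.Sequential would assign
            self.add_module(str(idx), nn.Linear(hidden, hidden))

    def forward(self, x):
        # children() iterates submodules in registration order
        for layer in self.children():
            x = layer(x)
        return x


block = IndexedMLP()
seq = nn.Sequential(*[nn.Linear(8, 8) for _ in range(3)])
# state_dict key names are identical, so existing checkpoints still load
assert list(block.state_dict().keys()) == list(seq.state_dict().keys())
```

Because only the naming scheme matters for `state_dict` compatibility, the explicit `forward` can be freely instrumented or stepped through in a debugger, which was the motivation for the change.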
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17493/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17493",
"html_url": "https://github.com/huggingface/transformers/pull/17493",
"diff_url": "https://github.com/huggingface/transformers/pull/17493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17493.patch",
"merged_at": 1654086979000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17492
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17492/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17492/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17492/events
|
https://github.com/huggingface/transformers/pull/17492
| 1,253,874,060
|
PR_kwDOCUB6oc44vf-M
| 17,492
|
Adding the Portuguese version of the tasks/token_classification.mdx documentation
|
{
"login": "jonatasgrosman",
"id": 5097052,
"node_id": "MDQ6VXNlcjUwOTcwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonatasgrosman",
"html_url": "https://github.com/jonatasgrosman",
"followers_url": "https://api.github.com/users/jonatasgrosman/followers",
"following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}",
"gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions",
"organizations_url": "https://api.github.com/users/jonatasgrosman/orgs",
"repos_url": "https://api.github.com/users/jonatasgrosman/repos",
"events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonatasgrosman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @jonatasgrosman for the translation! 🤗\r\n\r\nLGTM @sgugger."
] | 1,654
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding the Portuguese version of the tasks/token_classification.mdx documentation
Work on #16824
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17492/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17492",
"html_url": "https://github.com/huggingface/transformers/pull/17492",
"diff_url": "https://github.com/huggingface/transformers/pull/17492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17492.patch",
"merged_at": 1654516054000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17491
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17491/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17491/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17491/events
|
https://github.com/huggingface/transformers/pull/17491
| 1,253,861,785
|
PR_kwDOCUB6oc44vdV9
| 17,491
|
[ViT_MAE] fix num of channels in `patchify` and `unpatchify`
|
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge I didn't find what is breaking the doc now",
"@NielsRogge , can you re-run the gh action? looks like something interrupted it",
"Hi,\r\n\r\nI've created PR #17710 that fixes some more things, like the variable names and docstrings."
] | 1,654
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix the hard-coded number of channels in `patchify` and `unpatchify` methods.
Fixes #17473
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17491/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17491",
"html_url": "https://github.com/huggingface/transformers/pull/17491",
"diff_url": "https://github.com/huggingface/transformers/pull/17491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17491.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17490
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17490/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17490/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17490/events
|
https://github.com/huggingface/transformers/issues/17490
| 1,253,752,917
|
I_kwDOCUB6oc5KusBV
| 17,490
|
GPT-2 Forward w/ and w/o caching of past values gives different results
|
{
"login": "rajcscw",
"id": 7319647,
"node_id": "MDQ6VXNlcjczMTk2NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7319647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajcscw",
"html_url": "https://github.com/rajcscw",
"followers_url": "https://api.github.com/users/rajcscw/followers",
"following_url": "https://api.github.com/users/rajcscw/following{/other_user}",
"gists_url": "https://api.github.com/users/rajcscw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajcscw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajcscw/subscriptions",
"organizations_url": "https://api.github.com/users/rajcscw/orgs",
"repos_url": "https://api.github.com/users/rajcscw/repos",
"events_url": "https://api.github.com/users/rajcscw/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajcscw/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @rajcscw,\r\n\r\nI'm not 100% sure what your codesnippet is doing there exactly (note that I wouldn't try to pass the position_ids, but instead let GPT2 handle that).\r\n\r\nWe have exactly that test in transformers which you can find here: https://github.com/huggingface/transformers/blob/975dd2bbbcd4e8bdaf07c275c090d218d88c7c12/tests/models/gpt2/test_modeling_gpt2.py#L288\r\n\r\nCould you take a look here and see whether you can use this code?\r\n\r\nAlso cc @patil-suraj @ArthurZucker just FYI\r\n",
"Not sure either but the inputs fed to the network might be different as : \r\n1. the `position_ids` are specified with : \r\n```python\r\nattention_mask = torch.tensor([0, 0, 0, 0, 1, 1]).reshape(1, -1)\r\nposition_ids = torch.tensor([1, 1, 1, 1, 0, 1]).reshape(1, -1)\r\n``` \r\nin the first case \r\n2. in the second case they are not specified and should be automatically created. \r\n\r\nCheck if you still have an issue when you use the same position ids vectors, or just don't input them. \r\n\r\n",
"@patrickvonplaten I will check that test case and adapt my example. @ArthurZucker In both cases, the position IDs should be identical; in the first case, it is created explicitly and in the other, it is generated automatically. I will test it without passing any position IDs.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hey, I am trying to do a forward() pass of the GPT-2 model with and without caching of past values, and observed that the logits are slightly different. Is this to be expected, or am I missing something? I would highly appreciate it if someone could help me with this.
Code snippet for an MWE below (check the last assert statement, which fails):
```python
from transformers import GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
with torch.no_grad():
#########################################################################
# with forward and no caching of past
# left padded to size of 5
# step 0
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2]).reshape(1, -1)
attention_mask = torch.tensor([0, 0, 0, 0, 1]).reshape(1, -1)
position_ids = torch.tensor([1, 1, 1, 1, 0]).reshape(1, -1)
gen_outputs = model(input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
return_dict=True)
no_cache_0_next_token_logits = gen_outputs.logits[0, -1, :].clone()
# step 1 - input grown by 1
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2, 5]).reshape(1, -1)
attention_mask = torch.tensor([0, 0, 0, 0, 1, 1]).reshape(1, -1)
position_ids = torch.tensor([1, 1, 1, 1, 0, 1]).reshape(1, -1)
gen_outputs = model(input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
return_dict=True)
no_cache_1_next_token_logits = gen_outputs.logits[0, -1, :].clone()
########################################################################
# with forward with caching
# left padded to size of 5
# step 0
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2]).reshape(1, -1)
model_kwargs = {
"attention_mask": torch.tensor([0, 0, 0, 0, 1]).reshape(1, -1)
}
model_inputs = model.prepare_inputs_for_generation(
input_ids, **model_kwargs)
gen_outputs = model(**model_inputs,
return_dict=True)
cache_0_next_token_logits = gen_outputs.logits[0, -1, :].clone()
assert torch.equal(cache_0_next_token_logits,
no_cache_0_next_token_logits) == True
model_kwargs = model._update_model_kwargs_for_generation(
gen_outputs, model_kwargs, is_encoder_decoder=model.config.is_encoder_decoder
)
# step 1 - input grown by 1
input_ids = torch.tensor([50256, 50256, 50256, 50256, 2, 5]).reshape(1, -1)
model_inputs = model.prepare_inputs_for_generation(
input_ids, **model_kwargs)
gen_outputs = model(**model_inputs,
return_dict=True)
cache_1_next_token_logits = gen_outputs.logits[0, -1, :].clone()
assert torch.equal(cache_1_next_token_logits,
no_cache_1_next_token_logits) == True
```
### Expected behavior
```shell
Expected behavior: Caching does not affect the logits and only speeds up the computation.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17490/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17489
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17489/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17489/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17489/events
|
https://github.com/huggingface/transformers/issues/17489
| 1,253,557,310
|
I_kwDOCUB6oc5Kt8Q-
| 17,489
|
`do_eval` is True when setting `do_predict`=True
|
{
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi, could you post a code-snippet to reproduce this. I just tried this command and it works as expected and does not run eval.\r\n\r\n```bash\r\npython examples/pytorch/summarization/run_summarization.py \\\r\n --model_name_or_path t5-small \\\r\n --do_predict \\\r\n --dataset_name xsum \\ \r\n --source_prefix \"summarize: \" \\\r\n --output_dir /tmp/tst-summarization \\\r\n --per_device_train_batch_size=4 \\\r\n --per_device_eval_batch_size=4 \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate\r\n```",
"The above command works great for me! I just found that usually I just modify the script for prediction by removing `--do_train` and `--do_eval` and add `--do_predict` without changing the other commands. However, seems this is causing the problem. When running the command below, this issue happens.\r\n\r\n```\r\npython examples/pytorch/summarization/run_summarization.py \\\r\n --model_name_or_path t5-small \\\r\n --do_predict \\\r\n --dataset_name xsum \\ \r\n --source_prefix \"summarize: \" \\\r\n --output_dir /tmp/tst-summarization \\\r\n --per_device_train_batch_size=4 \\\r\n --per_device_eval_batch_size=4 \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate\r\n --save_strategy epoch \\\r\n --evaluation_strategy epoch\r\n```",
"in your command you have set `evaluation_strategy` to `epoch` and `do_eval` defaults to `True` if `evaluation_strategy` is set. cf\r\nhttps://github.com/huggingface/transformers/blob/28d0048218ad7bce69510b16024510afba0daed2/src/transformers/training_args.py#L114-L118",
"Got it. Thank you very much for your fast response! Have a nice day!"
] | 1,653
| 1,653
| 1,653
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.12.3
- Platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.13
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patil-suraj @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This problem happens generally to me after I started using transformers>=4.12.3. To easily reproduce the issue, we can use the text summarization example. After training the model, we remove `--do_train` and `--do_eval` and add `--do_predict`. However, the model will run "evaluation" first before running "prediction". I checked the source of this issue and it seems to stem from the argument parsing in this line: https://github.com/huggingface/transformers/blob/main/src/transformers/hf_argparser.py#L214.
Before running this line, the value of `do_eval` is still False. However, it turns out to be True afterward.
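The behavior reported here can be reproduced in miniature. This is a toy dataclass, not the real `TrainingArguments`, mirroring the documented rule that `do_eval` defaults to True whenever an evaluation strategy other than `"no"` is set:

```python
from dataclasses import dataclass


@dataclass
class MiniTrainingArgs:
    """Toy reproduction: do_eval flips to True when an evaluation
    strategy is set, even if the flag itself was never passed."""

    do_eval: bool = False
    evaluation_strategy: str = "no"

    def __post_init__(self):
        # mirrors the rule in transformers' TrainingArguments
        if self.evaluation_strategy != "no":
            self.do_eval = True


args = MiniTrainingArgs(evaluation_strategy="epoch")
print(args.do_eval)  # True, even though do_eval was not passed
```

So leaving `--evaluation_strategy epoch` on the command line while dropping `--do_eval` still triggers evaluation, which matches the resolution in the comments below.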
### Expected behavior
```shell
When only setting `--do_predict`, the model should not parse `do_eval` to be True.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17489/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17488
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17488/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17488/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17488/events
|
https://github.com/huggingface/transformers/issues/17488
| 1,253,501,951
|
I_kwDOCUB6oc5Ktuv_
| 17,488
|
_batch_encode_plus() got an unexpected keyword argument 'is_pretokenized' using BertTokenizerFast
|
{
"login": "anitchakraborty",
"id": 24213939,
"node_id": "MDQ6VXNlcjI0MjEzOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24213939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anitchakraborty",
"html_url": "https://github.com/anitchakraborty",
"followers_url": "https://api.github.com/users/anitchakraborty/followers",
"following_url": "https://api.github.com/users/anitchakraborty/following{/other_user}",
"gists_url": "https://api.github.com/users/anitchakraborty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anitchakraborty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anitchakraborty/subscriptions",
"organizations_url": "https://api.github.com/users/anitchakraborty/orgs",
"repos_url": "https://api.github.com/users/anitchakraborty/repos",
"events_url": "https://api.github.com/users/anitchakraborty/events{/privacy}",
"received_events_url": "https://api.github.com/users/anitchakraborty/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] |
[
"Hi @anitchakraborty ,\r\n\r\nCould you share an example of `training_set[0][\"input_ids\"]`. I don't see \"input_ids\" in the columns of the kaggle dataset you shared - which are \"Sentence #\", \"Word\", \"POS\" and \"Tag\". Without a toy example, we can't reproduce your problem and it's hard for us to help you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm closing this issue due to lack of activity, but don't hesitate to come back to us with an extract of your data so that we can help you! :blush: ",
"I am encountering the same issue, suggestions?",
"Hi @ludwigwittgenstein2 , \r\n\r\nThank you for sharing that you also have this issue too. To understand what is going on, could you please share a code snippet that reproduces the error and the output of `transformers-cli env` ? Thanks in advance!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same problem\r\n\r\nhere is the output of `transformers-cli env`\r\n\r\n```\r\n- `transformers` version: 4.25.1\r\n- Platform: Linux-5.10.133+-x86_64-with-glibc2.27\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.11.1\r\n- PyTorch version (GPU?): 1.13.0+cu116 (True)\r\n- Tensorflow version (GPU?): 2.9.2 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\n\r\nyou can also find the colab notebook [here](https://drive.google.com/file/d/1HyTLxHs8S4tAsdpF4GHd34tKuVorBNOz/view?usp=sharing) ",
"Experiencing the same issue. I think it depends on the version compatibility of PyTorch or Transformers. This notebook is different from the others since the predictions are made sentence-wise.\r\n\r\nIt works very well with Python 3.7, Transformers 3.0.2. @SaulLu would appreciate your help. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"```\r\nfrom transformers import BertTokenizerFast, EncoderDecoderModel\r\nimport torch\r\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\ntokenizer = BertTokenizerFast.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization')\r\nmodel = EncoderDecoderModel.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization').to(device)\r\n\r\ndef generate_summary(text):\r\n # cut off at BERT max length 512\r\n inputs = tokenizer([text], padding=\"max_length\", truncation=True, max_new_tokens=512, return_tensors=\"pt\")\r\n input_ids = inputs.input_ids.to(device)\r\n attention_mask = inputs.attention_mask.to(device)\r\n\r\n output = model.generate(input_ids, attention_mask=attention_mask)\r\n\r\n return tokenizer.decode(output[0], skip_special_tokens=True)\r\n \r\ntext = \"your text to be summarized here...\"\r\ngenerate_summary(text)\r\n```\r\n**TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'max_new_tokens'**\r\n\r\n\r\n\r\n",
"@ArthurZucker I also have the error, see example:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nstring = \"I am a string\"\r\n\r\n# works\r\ntokens = tokenizer(string)\r\n\r\n# works\r\nnew_string = tokenizer.decode(tokens[\"input_ids\"])\r\n\r\n# works\r\nnew_string = tokenizer.decode(tokens[\"input_ids\"], invalid_kwargs_argument=True)\r\n\r\n# produces error\r\ntokens = tokenizer(string, invalid_kwargs_argument=True)\r\n\r\n# produces error\r\ntokens = tokenizer.encode(string, invalid_kwargs_argument=True)\r\n```\r\n\r\nThe passing of invalid kwargs argument does not seem to be consistent: for `encode`, it causes errors while `decode` does not care.\r\n\r\n### More\r\nTorch version: 2.0.1\r\nTransformers: 4.33.2",
"Hey! Thanks for reporting. If anyone want to open a PR for a fix (meaning most probably error out on the decode function, feel free to do so as this is low on my priority list! ",
"Same error, you can reproduce it here\r\nhttps://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb\r\nwith this dataset:\r\nhttps://www.kaggle.com/datasets/namanj27/ner-dataset\r\n\r\nThe error occurs in when running the cell 19",
"@ArthurZucker This comment is in reference to the pull request I made. One thing that I notice in the slow tokenizer part under tokenization_utils.py is that the kwargs is being propagated to other functions internally hence I am not sure if the same thing can be done there. Please clarify. Thanks",
"I am seeing similar error when i execute the line below:\r\n\r\n```\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\nmodel = AutoModel.from_pretrained('gpt2')\r\ntokenizer = AutoTokenizer.from_pretrained('gpt2')\r\n\r\ntokenizer.add_special_tokens({'pad_token': '<|pad|>'})\r\nbatch = tokenizer(ds, padding=True, truncation=True, pad_token=\"<|pad|>\", bos_token=\"<|startoftext|>\", return_tensors=\"pt\")\r\n\r\n```\r\n\r\nError: `TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'pad_token'`",
"If you want to set the pad tokens you need to specify them in the call of `from_pretrained` 😓 that's a separate issue ! ",
"> If you want to set the pad tokens you need to specify them in the call of `from_pretrained` 😓 that's a separate issue !\r\n\r\nDo you have a link for the issue where I can comment or do I need to open a new one? I was following the into guides, I can't seem to make it work for simple cases..",
"No I mean it's an issue with how you initialize it 😉 \r\n```python \r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\nmodel = AutoModel.from_pretrained('gpt2')\r\ntokenizer = AutoTokenizer.from_pretrained('gpt2', pad_token=\"<|pad|>\", bos_token=\"<|startoftext|>\")\r\nbatch = tokenizer(ds, padding=True, truncation=True, , return_tensors=\"pt\")\r\n```\r\n\r\nshould work ",
"> No I mean it's an issue with how you initialize it 😉\r\n> \r\n> ```python\r\n> from transformers import AutoModel, AutoTokenizer\r\n> \r\n> model = AutoModel.from_pretrained('gpt2')\r\n> tokenizer = AutoTokenizer.from_pretrained('gpt2', pad_token=\"<|pad|>\", bos_token=\"<|startoftext|>\")\r\n> batch = tokenizer(ds, padding=True, truncation=True, , return_tensors=\"pt\")\r\n> ```\r\n> \r\n> should work\r\n\r\nThanks, I didn't know about this. Doesnt make sense to inform user about how they should pass this? Error is not at all clear in this case.",
"[the doc here](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer) should be helpful enough for that / function's signature. But yes the unused kwargs should be handled properly I agree "
] | 1,653
| 1,706
| null |
NONE
| null |
### System Info
```shell
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
for token, label in zip(tokenizer.convert_ids_to_tokens(training_set[0]["input_ids"]), training_set[0]["labels"]):
print('{0:10} {1}'.format(token, label))
The error I am getting is:
Traceback (most recent call last):
File "C:\Users\1632613\Documents\Anit\NER_Trans\test.py", line 108, in <module>
for token, label in zip(tokenizer.convert_ids_to_tokens(training_set[0]["input_ids"]), training_set[0]["labels"]):
File "C:\Users\1632613\Documents\Anit\NER_Trans\test.py", line 66, in __getitem__
encoding = self.tokenizer(sentence,
File "C:\Users\1632613\AppData\Local\conda\conda\envs\ner\lib\site-packages\transformers\tokenization_utils_base.py", line 2477, in __call__
return self.batch_encode_plus(
File "C:\Users\1632613\AppData\Local\conda\conda\envs\ner\lib\site-packages\transformers\tokenization_utils_base.py", line 2668, in batch_encode_plus
return self._batch_encode_plus(
TypeError: _batch_encode_plus() got an unexpected keyword argument 'is_pretokenized'
```
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Download the NER Dataset from the Kaggle link (https://www.kaggle.com/datasets/namanj27/ner-dataset)
2. Use the Script with the mentioned versions of transformers and tokenizers:
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
for token, label in zip(tokenizer.convert_ids_to_tokens(training_set[0]["input_ids"]), training_set[0]["labels"]):
print('{0:10} {1}'.format(token, label))
### Expected behavior
```shell
I expect to get the token, label from the script above.
Python Version: 3.9
tokenizers-0.12.1
transformers-4.19.2
Anyone can shed some light please?
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17488/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17487
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17487/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17487/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17487/events
|
https://github.com/huggingface/transformers/issues/17487
| 1,253,496,405
|
I_kwDOCUB6oc5KttZV
| 17,487
|
How can i use bpe tokenizer in t5 pretrain from scratch
|
{
"login": "520jefferson",
"id": 5691554,
"node_id": "MDQ6VXNlcjU2OTE1NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5691554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/520jefferson",
"html_url": "https://github.com/520jefferson",
"followers_url": "https://api.github.com/users/520jefferson/followers",
"following_url": "https://api.github.com/users/520jefferson/following{/other_user}",
"gists_url": "https://api.github.com/users/520jefferson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/520jefferson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/520jefferson/subscriptions",
"organizations_url": "https://api.github.com/users/520jefferson/orgs",
"repos_url": "https://api.github.com/users/520jefferson/repos",
"events_url": "https://api.github.com/users/520jefferson/events{/privacy}",
"received_events_url": "https://api.github.com/users/520jefferson/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @520jefferson,\r\n\r\nCould you please provide us with a codesnippet that we can copy-paste directly into a terminal? We sadly don't have the time to try to guess what we should reproduce.\r\n\r\nThanks a lot!",
"Hey @patrickvonplaten \r\ni just want to make sure whether the run_t5_mlm_flax.py script provider another tokenizer which can just load bpe codes or vocab to initialize the tokenizer. \r\n\r\nOr maybe i don't want to use tokenizer, i just need vocab.txt, because i can preprocess with bpe tokenizer before training. How should i do my training without tokenizer?\r\n\r\nThanks\r\n",
"Think you have to use a tokenizer to do training (the model needs numbers not letters). You could otherwise try to use `Canine` or `ByT5`",
"@patrickvonplaten \r\n\r\nTokenizer can just split the sentence according to space like txts = text.split(\" \") , and the token txts[i] can find in vocab.txt, then it can be transfer to number, so i just need the tokenizer load vocab.txt and split according to space , then they can be transfer to numbers.\r\n\r\nThanks for reply !",
"\r\nHi!\r\n\r\nIf you're ever sure you want to do this (you probably know this, but there are an infinite number of possible words and the size of the vocabulary is computationally expensive), you can create this type of tokenizer with the tokenizers library by instantiating a particular [WordLevel](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.WordLevel) model. Then you will have to load it in a `PreTrainedTokenizerFast` of transformers. All the necessary steps are explained [here](https://huggingface.co/docs/transformers/fast_tokenizers).",
"Hi, @SaulLu \r\n\r\n1. I set like like:\r\n**>>> from transformers import PreTrainedTokenizerFast\r\n>>> from tokenizers.models import WordLevel\r\n>>> vocab = WordLevel.from_file(\"./chitchat-t5-base/vocab.json\",\"<unk>\")\r\n>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=vocab)\r\n>>> fast_tokenizer.decode(\"平时 你 干点 什么 <sep> 找 工作\",skip_special_tokens=True)\r\n**\r\nThen i got this: AttributeError: 'tokenizers.models.WordLevel' object has no attribute 'decode':\r\n\r\nand i also use tokenize, it sames i don't set the trunctation:\r\n\r\nand i use encode, i met this(**_AttributeError: 'tokenizers.models.WordLevel' object has no attribute 'truncation_**'):\r\n\r\n\r\n\r\n2. Or if i replace autotokenizer to PreTrainedTokenizerFast in run_t5_mlm_flax.py, and train a new t5, i got this:\r\n\r\n\r\n3.My vocab.json like this below, it seems the tokenizer cannot load this vocab. In order to use the tokenizer how should i load this to a proper tokenizer? \r\n{\r\n\"<unk>\": 0,\r\n\"<eod>\": 1,\r\n\"<pad>\": 2,\r\n\"<mask>\": 3,\r\n\"<sep>\": 4,\r\n\",\": 5,\r\n\"的\": 6,\r\n\"?\": 7,\r\n\"了\": 8,\r\n.....\r\n.....\r\n.....\r\n\"<s_181>\": 33786,\r\n\"<s_182>\": 33787,\r\n\"<s_183>\": 33788,\r\n\"<s_184>\": 33789,\r\n\"<s_185>\": 33790,\r\n\"<s_186>\": 33791,\r\n\"<s_187>\": 33792,\r\n\"<s_188>\": 33793,\r\n\"<s_189>\": 33794\r\n}\r\n\r\n4. i also try like this, it doesno't work:\r\n\r\nthen i uninstall transformers and sentencepiece and pip install transformers sentencepiece, the errors are same. \r\n\r\n\r\n",
"Hey @520jefferson,\r\n\r\nCould you please try to to not post screenshots here? Not that we cannot reproduce code from screenshots as it's not easily copy-pastable into the command line",
"@patrickvonplaten @SaulLu \r\n\r\nThanks for reply, codes and vocab like follows.\r\n\r\n1. The vocab.txt (vocab.json is manually constructed from vocab.txt ) and meregs.txt i upload to google drive as follows:\r\nvocab.txt:https://drive.google.com/file/d/10jC8L_-RDLRv5QkAato8nJWGU1UQQcz1/view?usp=sharing\r\nvocab.json:https://drive.google.com/file/d/1e5Ll0bAHhikhnYV5XaW3NB8aTSWdCvnC/view?usp=sharing\r\nmerges.txt:https://drive.google.com/file/d/1ifXlQaYod_kobqgNe82tmHTtHpxBYnBq/view?usp=sharing\r\n\r\n2.The sentences for training and validation and test like this (after bpe, tokens split by \" \"):\r\n你 觉得 大人 辛苦 还是 学生 辛苦 <sep> 都 很 辛苦\r\n头条 文章 没 啥 违规 , 却 被 小@@ 浪@@ 浪 屏蔽 了 , 而且 删 了 先生 的 转发 评价 , 农历 新年 将 至 , 俺 不想 发火 , 行 , 俺 再 发 一遍 ! <sep> 怎么 删 了 , 还 没 看 呢\r\n专辑 有 签名 么 ? ! … <sep> 没有 机会 去 签@@ 售@@ 会 啦 幸好 里面 的 容 和 小 卡片 有 签名\r\n你 帮 我 买 东西 吗 <sep> 你 给钱 我 , 当然 帮 你 买 耶\r\n你 说 那个 早晨 喝 那个 水有 什么 好处 <sep> 可以 提高 睡眠 质量 <sep> 养成 良好 的 睡眠 时间 和 习惯 <sep> 慢慢 养成 早睡早起 的 习惯 , 习@@ 惯@@ 成@@ 自然\r\n求个 风景 超 美的 网游 最好 是 韩国 的 <sep> 剑侠情缘 叁\r\n现在 百度 帐号 是 不能 拿 邮箱 注册 了 么 ? 只能 拿 手机号 了 么 ? 如果 可以 应该 怎么 拿 邮箱 注册 ? 谢谢 ! 
<sep> 先 用 手机 注册 , 然后 绑定 一个 邮箱 , 再@@ 解 绑 手机 即可\r\n咱们 出去 转 会儿 遛@@ 弯@@ 儿 去 呗 <sep> 我 在 工@@ 体 的 漫 咖啡 , 要 不要 来 坐 会儿\r\n我 知道 最近 做 什么 <sep> 准备 演唱会 的 事 吧\r\n\r\n3.i want to use the tokenizer to load the vocab and tokenizer to tokenizer my sentence and give it to the t5 model.\r\nload model like this(config: https://drive.google.com/file/d/1WOb-gqjkt1m6GBTFeq4wOWS3dW3Qt1oK/view?usp=sharing):\r\nfrom transformers import T5Config, T5ForConditionalGeneration\r\nconfig = T5Config.from_json_file(config_file)\r\nmodel = T5ForConditionalGeneration(config)\r\n\r\nload tokenizer:\r\nfrom tokenizers.models import WordLevel\r\nfrom transformers import PreTrainedTokenizerFast\r\nvocab = WordLevel.from_file(\"vocab.json\",\"<unk>\")\r\nfast_tokenizer=PreTrainedTokenizerFast(tokenizer_object=vocab)\r\nfast_tokenizer.encode(\"你 觉得 大人 辛苦 还是 学生 辛苦 <sep> 都 很 辛苦\")\r\n\r\nthen i met this errror:AttributeError: 'tokenizers.models.WordLevel' object has no attribute 'truncation'\r\n\r\n4. So i want to load the vocab into tokenizer and use it like this { source = tokenizer.batch_encode_plus([source_text], max_length= 75, pad_to_max_length=True, truncation=True, padding=\"max_length\", return_tensors='pt')\r\n } and return { 'source_ids': source_ids.to(dtype=torch.long), 'source_mask': source_mask.to(dtype=torch.long), 'target_ids': target_ids.to(dtype=torch.long), 'target_ids_y': target_ids.to(dtype=torch.long) } , and give the tokenizer result to model and train the model like translation task, how should i do ?\r\n\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
transformers version: 4.20.0.dev0
When I use transformers/examples/flax/language-modeling/run_t5_mlm_flax.py to pretrain T5 from scratch, I preprocess with BPE codes to split sentences instead of the original tokenizer. How can I use the BPE codes?
```
### Who can help?
@patrickvonplaten
@SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction

### Expected behavior
```shell
replace the tokenizer with one that loads BPE codes to split sentences
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17487/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17486
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17486/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17486/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17486/events
|
https://github.com/huggingface/transformers/issues/17486
| 1,253,460,850
|
I_kwDOCUB6oc5Ktkty
| 17,486
|
Transformers gets stuck in from_pretrained
|
{
"login": "luisgg98",
"id": 45603226,
"node_id": "MDQ6VXNlcjQ1NjAzMjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/45603226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisgg98",
"html_url": "https://github.com/luisgg98",
"followers_url": "https://api.github.com/users/luisgg98/followers",
"following_url": "https://api.github.com/users/luisgg98/following{/other_user}",
"gists_url": "https://api.github.com/users/luisgg98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luisgg98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luisgg98/subscriptions",
"organizations_url": "https://api.github.com/users/luisgg98/orgs",
"repos_url": "https://api.github.com/users/luisgg98/repos",
"events_url": "https://api.github.com/users/luisgg98/events{/privacy}",
"received_events_url": "https://api.github.com/users/luisgg98/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I am experiencing the same problem with the .from_pretrained calls. For me the temporary solution was to use a VPN, but of course it isn't fun. I was wondering why we need internet connection at all, since after the first call the data should be cached locally. Maybe add a flag in the call to explicitly run offline only?",
"In case of connection issues, you can try the [offline mode](https://huggingface.co/docs/transformers/v4.19.2/en/installation#offline-mode).",
"Could you share a bit regarding your location/network? Do you mange to download files when going through the UI on the hub?",
"> In case of connection issues, you can try the [offline mode]\r\n\r\nThanks! This is what I was looking for!\r\n\r\n\r\n\r\n> Could you share a bit regarding your location/network? Do you mange to download files when going through the UI on the hub?\r\n\r\nOn my side, I am connecting through the VPN of my org. VPN location is chosen automatically, but when I turn it off the code execution of the .from_pretrained(..) is normal, whereas in the last days with VPN on it takes maybe 15 minutes. Before there were no issues with this. I am capable of downloading without issues the files from the hub directly.\r\n\r\nThanks for your time!"
] | 1,653
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-4.15.0-180-generic-x86_64-with-debian-11.0
- Python version: 3.7.11
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Requirements
```
celery==4.4.7
redis==3.4.1
Flask==1.1.4
itsdangerous==1.1.0
markupsafe==1.1.1
Flask-Cors==3.0.3
flower==0.9.7
Flask-Migrate==2.5.2
flask-restplus==0.13.0
Flask-Script==2.0.6
Werkzeug==0.16.1
summa==1.2.0
pattern3==3.0.0
gensim==3.8.3
pandas==1.1.5
keybert==0.5.0
numpy==1.19.5
nltk==3.6.3
beautifulsoup4==4.10.0
requests==2.21.0
text2text==0.6.6
wikipedia-api
jinja2<3.1.0
torch==1.7.1 #conda install pytorch cudatoolkit=10.1
transformers==4.19.2 #pip install transformers #pip install transformers[sentencepiece]
streamlit==1.5.0 #Review if finally used
seaborn==0.11.2 #Review if finally used
spacy==3.2.1
segtok==1.5.11
datasets==1.18.2
wiktionaryparser==0.0.97
en-core-web-md @ https://github.com/explosion/spacy-models/releases/download/en_core_web_md-3.2.0/en_core_web_md-3.2.0-py3-none-any.whl
es-core-news-md @ https://github.com/explosion/spacy-models/releases/download/es_core_news_md-3.2.0/es_core_news_md-3.2.0-py3-none-any.whl
rouge-score
absl-py
sacrebleu
meteor
```
Python code
```python
model_name="Vamsi/T5_Paraphrase_Paws"
AutoTokenizer.from_pretrained(model_name)
print(f"1 Loading model {str(model_name)}")
AutoModelForSeq2SeqLM.from_pretrained(model_name)
print(f"2 Loading model {str(model_name)}")
```
### Expected behavior
```shell
Good evening, sorry, I'm new to working with the Hugging Face library.
I'm having an issue deploying a Flask application that I've developed in Python.
It gets stuck at AutoModelForSeq2SeqLM.from_pretrained(model_name) and the loading process lasts forever.
I'm not sure whether I've made a mistake and this is not a bug, so in that case please notify me.
Thank you so much.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17486/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17485
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17485/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17485/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17485/events
|
https://github.com/huggingface/transformers/pull/17485
| 1,253,444,788
|
PR_kwDOCUB6oc44uESA
| 17,485
|
Add HF.co for PRs / Issues regarding specific model checkpoints
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @lhoestq for datasets"
] | 1,653
| 1,654
| 1,654
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Checkpoint issues should be put up directly to the Hub
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17485/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17485/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17485",
"html_url": "https://github.com/huggingface/transformers/pull/17485",
"diff_url": "https://github.com/huggingface/transformers/pull/17485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17485.patch",
"merged_at": 1654005519000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17484
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17484/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17484/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17484/events
|
https://github.com/huggingface/transformers/pull/17484
| 1,253,420,938
|
PR_kwDOCUB6oc44t_Tk
| 17,484
|
Fix checkpoint name
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @zphang :-)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Fix some `eleutherai/gpt-neox-20b` to `EleutherAI/gpt-neox-20b`.
The casing of the checkpoint name matters; otherwise I get an error like
`is not a local folder and is not a valid model identifier listed on`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17484/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17484",
"html_url": "https://github.com/huggingface/transformers/pull/17484",
"diff_url": "https://github.com/huggingface/transformers/pull/17484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17484.patch",
"merged_at": 1654004448000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17483
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17483/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17483/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17483/events
|
https://github.com/huggingface/transformers/issues/17483
| 1,253,270,459
|
I_kwDOCUB6oc5Ks2O7
| 17,483
|
Performance (perplexity) decrease after converting megatronGPT2 to a Hugging Face model
|
{
"login": "skdirwj",
"id": 8908319,
"node_id": "MDQ6VXNlcjg5MDgzMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8908319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skdirwj",
"html_url": "https://github.com/skdirwj",
"followers_url": "https://api.github.com/users/skdirwj/followers",
"following_url": "https://api.github.com/users/skdirwj/following{/other_user}",
"gists_url": "https://api.github.com/users/skdirwj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skdirwj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skdirwj/subscriptions",
"organizations_url": "https://api.github.com/users/skdirwj/orgs",
"repos_url": "https://api.github.com/users/skdirwj/repos",
"events_url": "https://api.github.com/users/skdirwj/events{/privacy}",
"received_events_url": "https://api.github.com/users/skdirwj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
transformers==4.19.2
PyTorch: 1.11.0
CUDA: cu11.0
Train GPUs: 1node (A100 8gpus)
Test GPUs: A100 1gpu
Megatron-LM: https://github.com/NVIDIA/Megatron-LM
```
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Pretrain my own megatronGPT2 on a corpus very similar to the one used for the pre-trained megatronGPT2 released by Nvidia (https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m)
- I used the same vocab and merge files as Nvidia's megatronGPT2
- vocab: https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
- merge: https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
2. Test perplexity on the WIKITEXT103 test set and compare the performance of the above pre-trained megatronGPT2 models using the evaluation script
- https://github.com/NVIDIA/Megatron-LM/blob/main/tasks/main.py
3. Convert the above pre-trained models to Hugging Face models using the conversion script in transformers
- https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py
- Below are the config files of the converted models. `activation_function` and `vocab_size` differ.
```
- Mine
{
"activation_function": "gelu_fast",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_embd": 1024,
"n_head": 16,
"n_inner": 4096,
"n_layer": 24,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"tokenizer_class": "GPT2TokenizerFast",
"transformers_version": "4.19.2",
"use_cache": true,
"vocab_size": 50304
}
```
```
- Nvidia
{
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_embd": 1024,
"n_head": 16,
"n_inner": 4096,
"n_layer": 24,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"tokenizer_class": "GPT2TokenizerFast",
"transformers_version": "4.19.2",
"use_cache": true,
"vocab_size": 50257
}
```
4. Test perplexity on the WIKITEXT103 test set and compare the performance of the converted Hugging Face models by following the guide
- https://huggingface.co/docs/transformers/perplexity#calculating-ppl-with-fixedlength-models
5. The table below shows my test results
- Before refers to the pre-trained (Megatron) models and After to the converted (Hugging Face) models.
| | Before | After |
| -- | -- | -- |
| NVIDIA Megatron_345M | **14.77** | **17.15** |
| **My_Model_345M** | 15.73 | 23.89 |
### Expected behavior
```shell
I am wondering where the performance difference between the converted Mine and Nvidia models comes from.
In addition, I do not know why the vocab size of Mine was changed from 50,257 to 50,304.
(50,304 is the vocab size 50,257 plus dummy tokens)
I manually changed `activation_function` and `vocab_size` in Mine's config file to match Nvidia's and tested again, but the performance difference remains the same.
I expect similar perplexity from the converted Hugging Face models of both my own pre-trained model and Nvidia's.
Does anyone have a similar experience?
```
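As a toy illustration of the perplexity metric being compared above (perplexity is just the exponential of the mean per-token negative log-likelihood; the numbers below are made up and this is not the HF evaluation script):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood over evaluated tokens)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token negative log-likelihoods for two models on the
# same test set: a lower mean NLL directly means a lower perplexity.
nlls_before = [2.6, 2.8, 2.7, 2.9]
nlls_after = [3.1, 3.2, 3.0, 3.3]
print(perplexity(nlls_before) < perplexity(nlls_after))  # exp is monotonic
```

A constant per-token NLL of `log(4)` gives a perplexity of exactly 4, which is a handy sanity check when debugging an evaluation pipeline.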
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17483/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17482
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17482/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17482/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17482/events
|
https://github.com/huggingface/transformers/issues/17482
| 1,253,165,091
|
I_kwDOCUB6oc5Kscgj
| 17,482
|
model.parallelize() for OPT models
|
{
"login": "MikeWangWZHL",
"id": 44760150,
"node_id": "MDQ6VXNlcjQ0NzYwMTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/44760150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeWangWZHL",
"html_url": "https://github.com/MikeWangWZHL",
"followers_url": "https://api.github.com/users/MikeWangWZHL/followers",
"following_url": "https://api.github.com/users/MikeWangWZHL/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeWangWZHL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikeWangWZHL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeWangWZHL/subscriptions",
"organizations_url": "https://api.github.com/users/MikeWangWZHL/orgs",
"repos_url": "https://api.github.com/users/MikeWangWZHL/repos",
"events_url": "https://api.github.com/users/MikeWangWZHL/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikeWangWZHL/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger can share more on that, but we now recommend leveraging `accelerate` in order to parallelize models. See [tweet](https://twitter.com/huggingface/status/1524783489593360385) that contains a [colab example](https://colab.research.google.com/drive/14wnxMvD9zsiBQo2FtTpxn6w2cpXCcb-7#scrollTo=XUlpcU3iQNhu&uniqifier=1).",
"On Transformers main branch, you can also load the OPT models on several GPUs directly with `AutoModel.from_pretrained(checkpoint, device_map=\"auto\")` (or pass along you own device map).\r\n\r\nThe old `parallelize` API will be deprecated soon.",
"@sgugger Hi, that's very helpful for me. However I wonder what's the difference between directly using ```AutoModel.from_pretrained(checkpoint, device_map=\"auto\")``` and that using ```load_checkpoint_and_dispatch()``` and ```infer_auto_device_map()```? Does both of them support fine-tuning something like OPT-30B on multiple GPUs with model parallelism? Thanks a lot for your help!",
"No this is inference only. For training/fine-tuning we recommend the use of DeepSpeed Zero-3.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,658
| 1,658
|
NONE
| null |
### Feature request
It would be great if we could have a function for fitting OPT models on multiple GPUs using **model.parallelize()**, similar to what we already have for [GPT-J](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/gptj/modeling_gptj.py#L494)
### Motivation
It would be extremely helpful for fitting large OPT models such as opt-30b and opt-13b. An ideal solution would be similar to what is already integrated in [GPT-J](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/gptj/modeling_gptj.py#L494), where we can simply call **model.parallelize()** to load the model on multiple GPUs.
### Your contribution
glad to help on making this feature possible
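As a rough, hypothetical sketch of what an automatic device map does (the real logic lives in `accelerate`'s `infer_auto_device_map`, which also accounts for per-layer memory; `naive_device_map` and the layer names below are illustrative assumptions, not the library's algorithm):

```python
def naive_device_map(layer_names, num_gpus):
    """Illustrative only: spread decoder layers evenly across GPUs.
    Real device-map inference also weighs each layer's memory footprint."""
    per_gpu = -(-len(layer_names) // num_gpus)  # ceiling division
    return {name: i // per_gpu for i, name in enumerate(layer_names)}

layers = [f"model.decoder.layers.{i}" for i in range(8)]
print(naive_device_map(layers, 2))  # layers 0-3 -> GPU 0, layers 4-7 -> GPU 1
```

In practice, loading with `device_map="auto"` (as suggested in the comments) hands this placement problem to `accelerate` instead of a hand-written `parallelize()`.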
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17482/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17482/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17481
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17481/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17481/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17481/events
|
https://github.com/huggingface/transformers/issues/17481
| 1,253,155,173
|
I_kwDOCUB6oc5KsaFl
| 17,481
|
modeling_swin.py img_mask doesn't have the expected torch dtype
|
{
"login": "LiweiPeng",
"id": 8562078,
"node_id": "MDQ6VXNlcjg1NjIwNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8562078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiweiPeng",
"html_url": "https://github.com/LiweiPeng",
"followers_url": "https://api.github.com/users/LiweiPeng/followers",
"following_url": "https://api.github.com/users/LiweiPeng/following{/other_user}",
"gists_url": "https://api.github.com/users/LiweiPeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiweiPeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiweiPeng/subscriptions",
"organizations_url": "https://api.github.com/users/LiweiPeng/orgs",
"repos_url": "https://api.github.com/users/LiweiPeng/repos",
"events_url": "https://api.github.com/users/LiweiPeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiweiPeng/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThanks for your interest in Swin Transformer! Do you mind opening a PR to fix this?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @LiweiPeng do you mind opening a PR for this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Gently pinging @LiweiPeng here",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,661
| 1,661
|
NONE
| null |
### System Info
```shell
transformers: master branch
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
How to repro:
Run Swin Transformer with NVIDIA mixed precision apex amp opt_level=O2.
The problem is at this line: https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py#L600. It creates `img_mask` with float32 dtype, which conflicts with other dtypes (float16) when mixed-precision O2 is used.
One way to fix it is shown below:
```
- def get_attn_mask(self, height, width):
+ def get_attn_mask(self, height, width, dtype):
if self.shift_size > 0:
# calculate attention mask for SW-MSA
- img_mask = torch.zeros((1, height, width, 1))
+ img_mask = torch.zeros((1, height, width, 1), dtype=dtype)
height_slices = (
slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
@@ -641,7 +641,7 @@ class SwinLayer(nn.Module):
# partition windows
hidden_states_windows = window_partition(shifted_hidden_states, self.window_size)
hidden_states_windows = hidden_states_windows.view(-1, self.window_size * self.window_size, channels)
- attn_mask = self.get_attn_mask(height_pad, width_pad)
+ attn_mask = self.get_attn_mask(height_pad, width_pad, dtype=hidden_states.dtype)
```
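The fix above can be sketched outside torch as well; the point is simply that the mask must inherit the activation dtype instead of defaulting to float32 (numpy stands in for torch here, and `get_attn_mask` is a stripped-down illustration, not the Swin implementation):

```python
import numpy as np

def get_attn_mask(height, width, dtype):
    # Creating the mask with an explicit dtype keeps it consistent with
    # the hidden states under mixed precision (e.g. float16).
    return np.zeros((1, height, width, 1), dtype=dtype)

hidden_states = np.ones((1, 4, 4, 1), dtype=np.float16)
mask = get_attn_mask(4, 4, dtype=hidden_states.dtype)
assert mask.dtype == hidden_states.dtype  # no float32/float16 mismatch
```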
### Expected behavior
```shell
the img_mask tensor should be created with the same dtype as hidden_states.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17481/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17480
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17480/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17480/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17480/events
|
https://github.com/huggingface/transformers/issues/17480
| 1,253,126,457
|
I_kwDOCUB6oc5KsTE5
| 17,480
|
MarianMTModel no longer has postprocess_next_token_scores function
|
{
"login": "dsvilarkovic",
"id": 18049803,
"node_id": "MDQ6VXNlcjE4MDQ5ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/18049803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsvilarkovic",
"html_url": "https://github.com/dsvilarkovic",
"followers_url": "https://api.github.com/users/dsvilarkovic/followers",
"following_url": "https://api.github.com/users/dsvilarkovic/following{/other_user}",
"gists_url": "https://api.github.com/users/dsvilarkovic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsvilarkovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsvilarkovic/subscriptions",
"organizations_url": "https://api.github.com/users/dsvilarkovic/orgs",
"repos_url": "https://api.github.com/users/dsvilarkovic/repos",
"events_url": "https://api.github.com/users/dsvilarkovic/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsvilarkovic/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patil-suraj ",
"Hi @dsvilarkovic !\r\n\r\nThis functionality still exists, but the `generate` method is refactored to support this more cleanly. This is now implemented as `LogitsProcessor`. cf #6949",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
Hello, I just noticed that in https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/data/test_generation_utils.py you call the `postprocess_next_token_scores()` method of `MarianMTModel`, which no longer exists in its source file (https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/models/marian/modeling_marian.py), nor as a `PreTrainedModel` method, nor in `GenerationMixin` (where it most recently lived: https://huggingface.co/transformers/v3.3.1/_modules/transformers/generation_utils.html).
I would really appreciate it if you could tell me what the alternative to this function is in the most recent version of the library!
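For context, the replacement API mentioned in the comments shapes score post-processing as callables applied at each generation step. A minimal pure-Python sketch of that pattern (this mirrors the `LogitsProcessor.__call__(input_ids, scores)` interface but is not the transformers implementation; `BanTokenProcessor` is a made-up example):

```python
class BanTokenProcessor:
    """Toy processor in the spirit of transformers' LogitsProcessor:
    called each step with (input_ids, scores), returns modified scores."""
    def __init__(self, banned_token_id):
        self.banned_token_id = banned_token_id

    def __call__(self, input_ids, scores):
        scores = list(scores)
        scores[self.banned_token_id] = float("-inf")  # never sample this token
        return scores

processor = BanTokenProcessor(banned_token_id=2)
new_scores = processor(input_ids=[0, 1], scores=[0.1, 0.5, 0.9, 0.2])
print(new_scores)  # token 2 is now -inf
```

In transformers itself, such processors are collected in a `LogitsProcessorList` that `generate` applies to the logits at every decoding step.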
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17480/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17479
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17479/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17479/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17479/events
|
https://github.com/huggingface/transformers/pull/17479
| 1,253,045,544
|
PR_kwDOCUB6oc44swxY
| 17,479
|
TF: BART compatible with XLA generation
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh tagging you for TF review, as Matt is off and you are also familiar with generate :)",
"> @ydshieh tagging you for TF review, as Matt is off and you are also familiar with generate :)\r\n\r\nActually not very familiar, but would love to get more involved 😃. Thanks for tagging me!",
"Hey @ydshieh 👋 answering your questions:\r\n\r\n> From the change in prepare_inputs_for_generation, both in the PR for TF-GPT2 and this PR, my understanding of the main change is that: we need to use (decoder) attention mask in order to calculate the correct position_ids for both left/right padding. And this is done using tf.math.cumsum. Do I understand these PR correctly?\r\n\r\nCorrect 👍 \r\n\r\n> Why we need decoder_position_ids when past_key_values is passed?\r\n\r\nIn the original PT code and eager execution TF, the position ids can be obtained by default (i.e. when not explicitly passed) from the past length, as the past length corresponds to the next position id if there is no left padding. In FLAX and XLA TF, the past is zero-padded, so the past length is not the default position id. As such, it is dangerous to leave the default path active -- this path should only be used in generate anyways, and the updated generate passes the position ids. (The GPT2 should also get the same guard, to be safe!)",
"> \r\n\r\nOK, I might got it. The `past` sent to the model is the padded (on the right) version! (which is required by XLA to have a fixed shape during loop, right?)\r\n\r\nThank you @gante ! ",
"I didn't think it in a thorough way, but in `prepare_inputs_for_generation`, when we return the actual inputs to a model,\r\n\r\nhttps://github.com/huggingface/transformers/blob/9089b7b95a1b12f19b65872323f13f1f68a6eaa7/src/transformers/models/bart/modeling_tf_bart.py#L1428\r\n\r\nit seems to me that we could cut `past` to the actual (non-padded) version. And when the model returns `past`, in `_update_model_kwargs_for_xla_generation`, we just always pad on the right.\r\n\r\n(of course, we need to pass the current length info. to `prepare_inputs_for_generation` if we want to do so)\r\n\r\n- this will keep `model_kwargs[\"past\"]` compatible with XLA\r\n- the actual `past` to model is the same as before \r\n - especially, it won't get `max_length - 1` as length, so we no longer have overhead due to the increasing length \r\n- it might make the logic a bit easier in `_update_model_kwargs_for_xla_generation`\r\n\r\n@gante I don't want to make you too busy. I will let you judge if this is a good idea, and even if it is, if we should change it now, or we can do it later. I know we want to publish our work soon!\r\n",
"> it seems to me that we could cut past to the actual (non-padded) version. \r\n\r\nI would love to do that, and it would be a great idea to simplify the code, but sadly XLA does not allow dynamic-sized slices (i.e. cutting `past` based on the current length or based on its non-zero values). I've had the same idea too, but then I came across this limitation (documented [here](https://github.com/huggingface/transformers/pull/17378#issuecomment-1133641201))😢 Sadly, we have to keep working with the full padded array everywhere when XLA is on.",
"Think we can move towards finishing this PR here :-)",
"@patrickvonplaten it is ready to merge -- would you like to make a final review, or can I merge the PR? :)"
] | 1,653
| 1,655
| 1,655
|
MEMBER
| null |
# What does this PR do?
Adds `position_ids` to `TFBart`, so that we can do generation with a padded past -- a requirement for XLA generation.
This PR was built on top of #17426 (so it will contain its diff until it gets merged), and is a requirement for #17458.
🚨 Important notes:
1. **Review suggestion**: check the Bart file, then its test file. The other changes are either cosmetic changes (e.g. correcting comments) or the result of `make fix-copies` (several files have copies from Bart).
2. There are several failing tests, but this is intentional -- some models' `prepare_inputs_for_generation` were copied from Bart, yet those models do not have the `position_ids` input. If the PR gets a positive review, I will propagate the changes to the affected models.
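As background for the `position_ids` change, the usual trick (also discussed in the comments) is to derive positions from the cumulative sum of the attention mask, so left padding still yields positions starting at 0 for the real tokens. A small pure-Python sketch of that idea (illustrative only, not the TF code):

```python
def positions_from_mask(attention_mask):
    """position_id = (# of non-pad tokens so far) - 1, clamped at 0 for pads."""
    positions, seen = [], 0
    for m in attention_mask:
        seen += m
        positions.append(max(seen - 1, 0))
    return positions

# Left-padded sequence: two pad tokens, then three real tokens.
print(positions_from_mask([0, 0, 1, 1, 1]))  # -> [0, 0, 0, 1, 2]
```

Deriving positions this way is what lets XLA generation work with a zero-padded, fixed-shape past, since the past length alone no longer tells you the next position.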
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17479/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17479/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17479",
"html_url": "https://github.com/huggingface/transformers/pull/17479",
"diff_url": "https://github.com/huggingface/transformers/pull/17479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17479.patch",
"merged_at": 1655719667000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17478
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17478/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17478/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17478/events
|
https://github.com/huggingface/transformers/issues/17478
| 1,253,038,954
|
I_kwDOCUB6oc5Kr9tq
| 17,478
|
Training hangs at the end while calling dist.barrier()
|
{
"login": "hasansalimkanmaz",
"id": 49716619,
"node_id": "MDQ6VXNlcjQ5NzE2NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasansalimkanmaz",
"html_url": "https://github.com/hasansalimkanmaz",
"followers_url": "https://api.github.com/users/hasansalimkanmaz/followers",
"following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions",
"organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs",
"repos_url": "https://api.github.com/users/hasansalimkanmaz/repos",
"events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I do not see a `dist.barrier()` at line 1536 of the `trainer.py` file and I can't really help you without knowing which training script you are launching and how you launched it.",
"Maybe the line corresponds to the HF version that I use. The new line number is 1679 according to main branch. You can check out from [here](https://github.com/huggingface/transformers/blob/567d9c061d04513c24872b5709adc7a1384b8129/src/transformers/trainer.py#L1679)",
"I think I have found the issue, my custom model has outputs with variable lengths and I wasn't gathering all outputs with `distributed_concat` function as they are not torch tensors. This results in different metrics in each process due to different outputs without gathering. In addition, I am using `EarlyStoppingCallback` during my training. As the metrics are different for each process, one process can stop the training and enter `dist.barrier` while the others go on training. This results in hanging training. \r\n\r\nUntil now, I haven't implemented the fix yet. After the implementation, I will confirm here and close the issue. Thanks for your time anyway. ",
"I have just tested my fix and concluded that it is related to what I mentioned above. Thanks for your time @sgugger. I am closing the issue.",
"Thanks for confirming the fix worked!",
"> I have just tested my fix and concluded that it is related to what I mentioned above. Thanks for your time @sgugger. I am closing the issue.\r\n\r\nHello,\r\nCould you please elaborate the solution?",
"I have the same issue, but i ve fixed: there is a code block in main.py: `try: model.evaluate() except save_checkpoint()`. But there is something wrong with the evaluation function, which is in another evaluate.py. AND there is also `dist.barrier()` in evaluate.py. So, the first process meet the evaluation error and skip to the `dist.barrier()` in the main.py, whilst the second process is waiting at the `dist.barrier()` at evaluate.py. Check errors in your evaluation logic should make you happy again.",
"hi @hasansalimkanmaz \r\nhow did u fix the problem? could u share the solution?",
"@zimenglan-sysu-512 as the issue was related to different metrics in different workers, I fixed the issue having uniform metrics by an aggregation. However, as it was quite a long time ago and I don't have access the repo, I couldn't verify exactly how I solved it. "
] | 1,653
| 1,697
| 1,654
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.0-1073-azure-x86_64-with-glibc2.27
- Python version: 3.8.0
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: DDP
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am working on a custom TokenClassificationTask. For a specific model type, the process hangs at the end of training. After I set `TORCH_DISTRIBUTED_DEBUG=DETAIL` and added rank numbers to the logs (to do this, I overrode the `train` method of the `Trainer` class with additional logging), the training failed with the stack trace below.
```
Training completed for rank 6. Do not forget to share your model on huggingface.co/models =)
2022-05-30 15:54:58 INFO nlp_ner_layoutlm.layoutlm.trainers.re_trainer Before barrier for rank 6
2022-05-30 15:54:58 INFO nlp_ner_layoutlm.layoutlm.trainers.re_trainer Entering into barrier for rank 6
2022-05-30 15:54:59 INFO transformers.modeling_utils Model weights saved in ./data/tmpm8wxl12l/checkpoint-590/pytorch_model.bin
2022-05-30 15:55:01 INFO transformers.trainer Deleting older checkpoint [data/tmpm8wxl12l/checkpoint-585] due to args.save_total_limit
2022-05-30 15:55:01 ERROR __main__ Detected mismatch between collectives on ranks. Rank 6 is running inconsistent collective: CollectiveFingerPrint(OpType=BARRIER
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 53, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 160, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 158, in train_model
trainer.train(resume_from_checkpoint=get_last_checkpoint(checkpoint_dir))
File "/app/nlp_ner_layoutlm/layoutlm/trainers/re_trainer.py", line 698, in train
dist.barrier()
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: Detected mismatch between collectives on ranks. Rank 6 is running inconsistent collective: CollectiveFingerPrint(OpType=BARRIER
2022-05-30 15:55:01 ERROR __main__ Detected mismatch between collectives on ranks. Rank 3 is running inconsistent collective: CollectiveFingerPrint(OpType=BROADCAST, TensorShape=[514], TensorDtypes=Long, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 53, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 160, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 158, in train_model
trainer.train(resume_from_checkpoint=get_last_checkpoint(checkpoint_dir))
File "/app/nlp_ner_layoutlm/layoutlm/trainers/re_trainer.py", line 603, in train
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2011, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2043, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 878, in forward
self._sync_params()
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1379, in _sync_params
self._distributed_broadcast_coalesced(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/distributed.py", line 1334, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: Detected mismatch between collectives on ranks. Rank 3 is running inconsistent collective: CollectiveFingerPrint(OpType=BROADCAST, TensorShape=[514], TensorDtypes=Long, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
2022-05-30 15:55:01 ERROR __main__ Detected mismatch between collectives on ranks. Rank 1 is running inconsistent collective: CollectiveFingerPrint(OpType=BROADCAST, TensorShape=[514], TensorDtypes=Long, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
Traceback (most recent call last):
... (Same error for other processes)
```
According to the trace, while the process with rank 6 is running `dist.barrier()` from `trainer.py` line 1536, the other processes are running a `forward_call`. I think this is the issue: because of this desynchronization, the training hangs. When I searched for similar issues, I came across [this issue from speechbrain](https://github.com/speechbrain/speechbrain/issues/1166); it is exactly the same problem, and they fixed it with a PR. Currently, I can't understand why the processes end up at different places in the code, and I can't figure out how to fix this issue.
### Expected behavior
```shell
As far as I understand, the processes should meet at `dist.barrier()` and training should succeed. Could you help me or point me to a fix that I can work on?
```
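The "mismatch between collectives" check in the error above can be illustrated framework-free. The sketch below is a toy version of the diagnosis only (not PyTorch code; all names are hypothetical): each rank records the ordered sequence of collective ops it issues, and the hang corresponds to the first step where the fingerprints diverge, e.g. one rank sitting at a `BARRIER` while another starts the `BROADCAST` from DDP's parameter sync.

```python
def first_collective_mismatch(rank_ops):
    """Find the first step at which ranks issue different collectives.

    rank_ops maps rank -> ordered list of (op_type, tensor_shape) tuples.
    Returns (step, {rank: fingerprint}) for the first divergent step,
    or None if all ranks agree at every step.
    """
    max_len = max(len(ops) for ops in rank_ops.values())
    for step in range(max_len):
        # A rank that issued fewer collectives is reported as "missing".
        fingerprints = {
            rank: ops[step] if step < len(ops) else ("MISSING", None)
            for rank, ops in rank_ops.items()
        }
        if len(set(fingerprints.values())) > 1:
            return step, fingerprints
    return None

# Rank 6 reached dist.barrier() while rank 3 began DDP's broadcast:
trace = {
    6: [("BARRIER", None)],
    3: [("BROADCAST", (514,))],
}
print(first_collective_mismatch(trace))
```

Reading the real error this way: the fix is usually to remove any rank-conditional branch that makes some processes skip (or add) a collective call.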
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17478/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17477
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17477/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17477/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17477/events
|
https://github.com/huggingface/transformers/pull/17477
| 1,252,996,190
|
PR_kwDOCUB6oc44smhx
| 17,477
|
Allow from transformers import TypicalLogitsWarper
|
{
"login": "teticio",
"id": 44233095,
"node_id": "MDQ6VXNlcjQ0MjMzMDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/44233095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teticio",
"html_url": "https://github.com/teticio",
"followers_url": "https://api.github.com/users/teticio/followers",
"following_url": "https://api.github.com/users/teticio/following{/other_user}",
"gists_url": "https://api.github.com/users/teticio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teticio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teticio/subscriptions",
"organizations_url": "https://api.github.com/users/teticio/orgs",
"repos_url": "https://api.github.com/users/teticio/repos",
"events_url": "https://api.github.com/users/teticio/events{/privacy}",
"received_events_url": "https://api.github.com/users/teticio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently, in order to use the `TypicalLogitsWarper` outside of `generate` (with `typical_p` > 0), it can only be imported from `transformers.generation_logits_process`. I have simply added it to the relevant `__init__.py` so that it can be imported directly from `transformers`, like the other `LogitsWarper` classes.
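The underlying re-export pattern can be demonstrated in isolation. This is a hedged, self-contained sketch with a hypothetical package and module name, not the actual `transformers` layout (which additionally uses lazy imports): exposing a symbol at the top level only requires importing it in the package's `__init__.py`.

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package on disk to demonstrate the re-export.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)

# Submodule defining the class (stand-in for generation_logits_process).
with open(os.path.join(pkg, "logits_process.py"), "w") as f:
    f.write("class TypicalLogitsWarper:\n    pass\n")

# Top-level __init__.py re-exports it, enabling `from mypkg import ...`.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .logits_process import TypicalLogitsWarper\n")

sys.path.insert(0, root)
mypkg = importlib.import_module("mypkg")
print(mypkg.TypicalLogitsWarper.__name__)
```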
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@cimeister
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17477/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17477",
"html_url": "https://github.com/huggingface/transformers/pull/17477",
"diff_url": "https://github.com/huggingface/transformers/pull/17477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17477.patch",
"merged_at": 1654247316000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17476
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17476/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17476/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17476/events
|
https://github.com/huggingface/transformers/issues/17476
| 1,252,848,712
|
I_kwDOCUB6oc5KrPRI
| 17,476
|
PyTorch JIT trace on Swin Transformer pretrained checkpoint fails
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Using `check_trace=False` with `torch.jit.trace` avoids this problem. The traced model seems to work OK and gives the same results as the original (tested with several different input images).\r\n\r\nIt's probably still a good idea to make the trace work without the error (if possible) but as using `check_trace=False` solves my immediate problem, this is a very low priority issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> \r\n\r\nI have found that the traced JIT module gives mismatched results compared to the nn.Module at different batch sizes, because `window_reverse` does not support a dynamic batch [modeling_swin.py#L218](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py#L218):\r\n\r\n```python\r\ndef window_reverse(windows, window_size, height, width):\r\n \"\"\"\r\n Merges windows to produce higher resolution features.\r\n \"\"\"\r\n batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))\r\n windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)\r\n windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(batch_size, height, width, -1)\r\n return windows\r\n```\r\n\r\nIs that a possible reason? In my practice, I write `window_reverse` as follows to support a dynamic batch:\r\n\r\n```python\r\ndef window_reverse(windows, window_size, height, width):\r\n \"\"\"\r\n Merges windows to produce higher resolution features.\r\n \"\"\"\r\n channels = int(windows.shape[-1])\r\n windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, channels)\r\n windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, height, width, channels)\r\n return windows\r\n```"
] | 1,653
| 1,661
| 1,658
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.7
- JaxLib version: 0.3.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The goal is to run `torch.jit.trace` on the Swin Transformer model from a pretrained checkpoint so it can be exported to another format (e.g. Core ML, ONNX, etc).
Steps to reproduce the issue are:
```python
from transformers import SwinForImageClassification
model_checkpoint = "microsoft/swin-small-patch4-window7-224"
model = SwinForImageClassification.from_pretrained(model_checkpoint, torchscript=True).eval()
import torch
example_input = torch.rand([1, model.config.num_channels, model.config.image_size, model.config.image_size])
traced_model = torch.jit.trace(model, example_input, strict=False)
```
The trace gives a lot of warnings, which are not important, and then fails with the error:
```python
TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
```
### Expected behavior
There should be no error, and the trace should complete successfully.
The unit tests for Swin Transformer also perform a JIT trace, and they succeed. However, the model architecture used in the test case is simpler than the one from the pretrained checkpoint.
In particular, the issue seems to be the `window_size` parameter. This can be demonstrated as follows:
```python
from transformers import SwinConfig
config = SwinConfig.from_pretrained(model_checkpoint, window_size=6)
config.torchscript = True
model = SwinForImageClassification.from_pretrained(model_checkpoint, config=config, ignore_mismatched_sizes=True).eval()
traced_model = torch.jit.trace(model, example_input, strict=False)
```
By forcing the window size to be a different number, the trace now completes without errors. (But of course, this window size is not appropriate for the pretrained checkpoint.)
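For context on why `window_size` interacts with tracing: the window-partition arithmetic below is a toy check mirroring the floor division used in `window_reverse` (not the model code itself). The batch size is recovered from the total window count via a Python-int computation, which `torch.jit.trace` bakes in as a constant at trace time.

```python
import math

def num_windows(batch_size, height, width, window_size):
    # Window partitioning yields one window per (window_size x window_size) tile.
    return batch_size * (height // window_size) * (width // window_size)

def recovered_batch(total_windows, height, width, window_size):
    # Mirrors the floor division in modeling_swin's window_reverse:
    # the batch size is re-derived from the flattened window count.
    return math.floor(total_windows / (height * width / window_size / window_size))

# 224x224 input with window 7: each image yields 32 * 32 = 1024 windows.
print(num_windows(1, 224, 224, 7))   # 1024
print(recovered_batch(2048, 224, 224, 7))  # 2
```

Because `recovered_batch` returns a plain Python int, the traced graph freezes it to the example input's value, which is consistent with the dynamic-batch fix (using `-1` in `view`) proposed in the comments above.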
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17476/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17475
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17475/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17475/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17475/events
|
https://github.com/huggingface/transformers/issues/17475
| 1,252,833,154
|
I_kwDOCUB6oc5KrLeC
| 17,475
|
Add support for pruning whole layers in transformer models.
|
{
"login": "chrisdt1998",
"id": 71395013,
"node_id": "MDQ6VXNlcjcxMzk1MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/71395013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisdt1998",
"html_url": "https://github.com/chrisdt1998",
"followers_url": "https://api.github.com/users/chrisdt1998/followers",
"following_url": "https://api.github.com/users/chrisdt1998/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisdt1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisdt1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisdt1998/subscriptions",
"organizations_url": "https://api.github.com/users/chrisdt1998/orgs",
"repos_url": "https://api.github.com/users/chrisdt1998/repos",
"events_url": "https://api.github.com/users/chrisdt1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisdt1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"WDYT @NielsRogge?\r\n\r\n@chrisdt1998 do you have an example of how big of a change it would result in the code?",
"Yes, the change would be about 10 lines of code added to the prune_heads method in the ViTAttention class in modeling_vit.py. This could also be extended to other transformer models in the same corresponding functions, for example in modeling_bert.py, the change would be in the prune_heads method in the BertAttention class.\r\n\r\nThe change for the ViTAttention class would be:\r\n\r\n```\r\nclass ViTAttention(nn.Module):\r\n def __init__(self, config: ViTConfig) -> None:\r\n super().__init__()\r\n self.attention = ViTSelfAttention(config)\r\n self.output = ViTSelfOutput(config)\r\n self.pruned_heads = set()\r\n\r\n def prune_heads(self, heads: Set[int]) -> None:\r\n if len(heads) == 0:\r\n return\r\n heads, index = find_pruneable_heads_and_indices(\r\n heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads\r\n )\r\n\r\n # Prune linear layers\r\n self.attention.query = prune_linear_layer(self.attention.query, index)\r\n self.attention.key = prune_linear_layer(self.attention.key, index)\r\n self.attention.value = prune_linear_layer(self.attention.value, index)\r\n self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)\r\n\r\n # Update hyper params and store pruned heads\r\n self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads)\r\n self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads\r\n self.pruned_heads = self.pruned_heads.union(heads)\r\n```\r\n\r\nTo:\r\n\r\n```\r\nclass ViTAttention(nn.Module):\r\n def __init__(self, config: ViTConfig) -> None:\r\n super().__init__()\r\n self.attention = ViTSelfAttention(config)\r\n self.output = ViTSelfOutput(config)\r\n self.pruned_heads = set()\r\n\r\n def prune_heads(self, heads: Set[int]) -> None:\r\n if len(heads) == 0:\r\n return\r\n if self.attention is None:\r\n return\r\n\r\n all_pruned = self.pruned_heads.union(heads)\r\n if len(all_pruned) == 
self.attention.num_attention_heads:\r\n self.attention = None\r\n self.output.dense = None\r\n # Update hyper params and store pruned heads\r\n self.pruned_heads = all_pruned\r\n return\r\n\r\n heads, index = find_pruneable_heads_and_indices(\r\n heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads\r\n )\r\n\r\n # Prune linear layers\r\n self.attention.query = prune_linear_layer(self.attention.query, index)\r\n self.attention.key = prune_linear_layer(self.attention.key, index)\r\n self.attention.value = prune_linear_layer(self.attention.value, index)\r\n self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)\r\n\r\n # Update hyper params and store pruned heads\r\n self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads)\r\n self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads\r\n self.pruned_heads = self.pruned_heads.union(heads)\r\n```\r\n\r\nPlease note that all credits go to Sai Prasanna, from this link https://github.com/sai-prasanna/bert-experiments/blob/master/src/model_bert.py and corresponding paper \"When BERT Plays the Lottery, All Tickets Are Winning\".\r\n\r\nPlease let me know if you'd like me to clarify further.\r\n"
] | 1,653
| 1,655
| 1,655
|
NONE
| null |
### Feature request
Dear HuggingFace team,
In the ViT model folder (namely modeling_vit.py), there is an option to prune the attention heads of a model. However, at the moment, if I want to prune a whole layer, I get an error from the dense layer because the number of input features becomes 0, which causes an issue with 1/sqrt(in_features). Would it be possible to do something similar to https://github.com/sai-prasanna/bert-experiments/blob/master/src/model_bert.py, where they simply check whether the number of heads to prune equals the number of heads in that layer, and if so set the attention and dense layers to None?
### Motivation
The motivation for this is that I want my pruning algorithm to be able to prune whole layers if it thinks that this will give the best performance when compressing a model. I imagine that other researchers would appreciate this feature as well.
### Your contribution
I am able to take inspiration from Sai-prasanna and add it to the ViT model if you would like. Please let me know.
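The guard being requested can be sketched in isolation. This is a toy, framework-free version of the check only (not the actual modeling code): if the union of already-pruned and newly requested heads covers every head, the whole attention module should be dropped instead of pruning down to zero features.

```python
def plan_prune(pruned_heads, heads_to_prune, num_attention_heads):
    """Decide between head-level pruning and dropping the whole layer.

    Returns (all_pruned, remaining), where remaining is None when the
    entire attention layer should be removed (i.e. set attention to None),
    and otherwise the sorted list of heads that survive.
    """
    all_pruned = set(pruned_heads) | set(heads_to_prune)
    if len(all_pruned) == num_attention_heads:
        return all_pruned, None  # prune the whole layer
    remaining = sorted(set(range(num_attention_heads)) - all_pruned)
    return all_pruned, remaining

# 12-head layer: a partial prune keeps heads; covering all heads drops the layer.
print(plan_prune({0, 1}, {2, 3}, 12))
print(plan_prune(set(range(10)), {10, 11}, 12))
```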
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17475/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17474
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17474/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17474/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17474/events
|
https://github.com/huggingface/transformers/pull/17474
| 1,252,689,218
|
PR_kwDOCUB6oc44rlUY
| 17,474
|
BLOOM
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The commit's history should be better now..! @patrickvonplaten ",
"_The documentation is not available anymore as the PR was closed or merged._",
"There are several unresolved discussions in the original PR:\r\n\r\n- https://github.com/huggingface/transformers/pull/17202#discussion_r883151955\r\n- https://github.com/huggingface/transformers/pull/17202#discussion_r882980948\r\n\r\n\r\nIn the future please let's not rush to open new PRs but fix the existing ones. as it makes incomplete discussions unresolved and also reading the PR's history for why a certain decision was made will be more difficult to do.",
"I propose to continue the discussions on #17202 until all the conversations are resolved. Once they're resolved, I'll ping here for a final review! \r\nThere is only one unresolved discussion left for now",
"The discussions on the old PR are now resolved, \r\nWould love to have a final review (@sgugger and @patrickvonplaten already approved the previous PR + the test that is failing seems to be unrelated to Bloom)\r\n\r\ncc @LysandreJik @stas00 ! Hope we can merge it soon 🤞",
"Should have fixed the nits, will quickly test the slow tests now",
"[comment moved elsewhere]",
"Hey @justheuristic !\r\nthanks for the nit! Just tested in and the tests passed - I think that we should add an extra test for this sanity check :-) \r\nEDIT: I just completely removed the token_emb since we do not need it at all",
"Thank you @patrickvonplaten !\r\nI have benchmarked the performance of fused vs unfused version of the `bias_add` function and I am observing the same performance (with a slightly higher speed with unfused operation). So I will remove that, the bias term will not be passed in a weird manner anymore now\r\n\r\nHere is the script to reproduce the benchmarking in case you are interested:\r\n\r\n```\r\nimport torch, timeit\r\nfrom transformers import BloomForCausalLM\r\n\r\nmodel = BloomForCausalLM.from_pretrained(\"bigscience/bigscience-small-testing\")\r\nmodel = model.eval()\r\n\r\nmodel_unfused = BloomForCausalLM.from_pretrained(\"bigscience/bigscience-small-testing\", bias_dropout_fusion=False)\r\nmodel_unfused = model_unfused.eval()\r\n\r\ninput_ids = torch.LongTensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\r\nn_cycles, n_repeats = 100, 100\r\n\r\ndef model_fused():\r\n for _ in range(n_cycles): _ = model(input_ids)\r\n\r\ndef model_unfused_test():\r\n for _ in range(n_cycles): _ = model_unfused(input_ids)\r\n\r\nprint(f'model_fused={timeit.Timer(\"model_fused()\", globals=globals()).timeit(number=n_repeats)}')\r\nprint(f'model_unfused_test={timeit.Timer(\"model_unfused_test()\", globals=globals()).timeit(number=n_repeats)}')\r\n```",
"[...]\r\n> input_ids = torch.LongTensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\r\n\r\nJust be careful when you benchmark to use realistic shapes of tensors - when tiny tensors are used it's unlikely you are benchmarking the real situation. As the framework dominates the compute rather than data processing and you're not benchmarking the performance of the gpu ops.\r\n\r\nactually, this benchmark is run on cpu, you probably want to switch it to `.cuda`",
"I updated the test with the following snippet:\r\n```\r\nimport torch, timeit\r\nfrom transformers import BloomForCausalLM, BloomTokenizerFast\r\n\r\ntokenizer = BloomTokenizerFast.from_pretrained(\"bigscience/tokenizer\")\r\n\r\nmodel = BloomForCausalLM.from_pretrained(\"bigscience/bigscience-small-testing\").cuda()\r\nmodel = model.eval()\r\n\r\nmodel_unfused = BloomForCausalLM.from_pretrained(\"bigscience/bigscience-small-testing\", bias_dropout_fusion=False).cuda()\r\nmodel_unfused = model_unfused.eval()\r\n\r\nbatch_size = 16\r\ninput_sequences = [\"test sequence\"*50 for _ in range(batch_size)]\r\n\r\ninput_ids = tokenizer(input_sequences, return_tensors='pt')[\"input_ids\"]\r\n\r\nn_cycles, n_repeats = 100, 100\r\n\r\ndef model_fused():\r\n for _ in range(n_cycles): _ = model(input_ids.cuda())\r\n torch.cuda.synchronize()\r\n\r\ndef model_unfused_test():\r\n for _ in range(n_cycles): _ = model_unfused(input_ids.cuda())\r\n torch.cuda.synchronize()\r\n\r\nprint(f'model_fused={timeit.Timer(\"model_fused()\", globals=globals()).timeit(number=n_repeats)}')\r\nprint(f'model_unfused_test={timeit.Timer(\"model_unfused_test()\", globals=globals()).timeit(number=n_repeats)}')\r\n```\r\nAnd I got: \r\n```\r\nmodel_fused=321.8526800679999\r\nmodel_unfused_test=322.2609531270002\r\n```\r\nI guess the difference would be much higher if we increase the sequence length/ batch size. Should we keep the fused operation ? @stas00 \r\n\r\nPS: I emulated the test on google colab with transformers built on a separate branch: `pip install git+https://github.com/younesbelkada/transformers.git@501ceed44d26988a17cd884b73eeb573f1f2bea8`",
"I will put both since the fused operation is slower on the CPU!",
"On A100 I get:\r\n\r\n```\r\nmodel_fused=35.0372433779994\r\nmodel_unfused_test=35.158229669999855\r\n```\r\n\r\nI suppose choose the most sensible default and allow the user to override it.\r\n\r\nIf I increase the batch size to 128, the fused version reports to be slower 5%. Give it a try.\r\n\r\nWe are heading towards using automatic optimizers down the road (torchdynamo/aotautograd/nvfuser) where most of these things will be fused/optimized on the fly, so I think all the custom fusions will become redundant - we should start seeing these tools becoming more common towards the 2nd half of this year.",
"Great thanks for the information!\nIn that case let me put the unfused op by default ",
"Hi all! We are working with the smaller BLOOM models in the Bigscience Multilingual Modeling WG and would like to use a `BloomForSequenceClassification` and `BloomForTokenClassification` class in our experiments. \r\n\r\nI'm happy to contribute the code for these, but would it be preferred to let this PR be merged first, or for me to open a PR after this is merged?",
"Hi!\nVery happy to hear that you want to contribute on that! \nOn my side I would say that it is preferable to wait until the PR gets merged. Hopefully very soon ;) ",
"Can we merge? I think we have pretty much everything now (Accelerate+DeepSpeed compatibility, etc.) ? \r\nAs soon as we merge I need to make the small models public for the slow tests to pass",
"@sgugger I can confirm the slow tests + non slow tests pass on a A100 node in Jean Zay. I have emulated the tests by running `export RUN_SLOW=1` before running the testing script ",
"Good to merge once all tests are green (test hub failure can be ignored)!",
"@younesbelkada I think you can press the merge button now if you want ;-)",
"We need to send @younesbelkada a plaque with engraving:\r\n\r\n`I authored a PR with 120 conversations, 195 commits and 9 reviewers and lived to tell the story!`\r\n\r\nAmazing! \r\n\r\nI'd have given up long time ago.",
"Congratz!",
"The lights are green ! \r\nThank you all!!\r\n@stas00 if you count also the other PR there are 149 commits + 240 conversations ;)",
"Yes, indeed! Good call, @younesbelkada!"
] | 1,653
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding the BLOOM models to the transformers library. This recreates the PR, as the old PR has a broken git commit history.
Original PR: #17202
- [x] add a generation test with a small model pushed on the hub
- [x] slow tests needs to be modified accordingly
- [x] add final credits to all reviewers in a final commit
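The fused-vs-unfused comparison discussed in the review thread follows a simple `timeit` pattern. Below is a framework-free sketch of that harness (an assumed structure, not the exact scripts quoted above); as noted in the thread, GPU benchmarks would additionally need realistic input shapes and a `torch.cuda.synchronize()` inside the loop so that asynchronous kernel launches are actually timed.

```python
import timeit

def benchmark(fn, n_cycles=100, n_repeats=10):
    """Total seconds for n_repeats timings of n_cycles calls to fn."""
    def loop():
        for _ in range(n_cycles):
            fn()
    return timeit.Timer(loop).timeit(number=n_repeats)

# Sanity check on CPU-bound toy workloads of different sizes.
fast = benchmark(lambda: sum(range(10)))
slow = benchmark(lambda: sum(range(10_000)))
print(fast, slow)
```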
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17474/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17474/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17474",
"html_url": "https://github.com/huggingface/transformers/pull/17474",
"diff_url": "https://github.com/huggingface/transformers/pull/17474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17474.patch",
"merged_at": 1654768840000
}
|