url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
list | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/21387
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21387/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21387/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21387/events
|
https://github.com/huggingface/transformers/issues/21387
| 1,564,352,886
|
I_kwDOCUB6oc5dPiF2
| 21,387
|
OOM when running causal language modelling sample
|
{
"login": "rmc135",
"id": 45911282,
"node_id": "MDQ6VXNlcjQ1OTExMjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/45911282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rmc135",
"html_url": "https://github.com/rmc135",
"followers_url": "https://api.github.com/users/rmc135/followers",
"following_url": "https://api.github.com/users/rmc135/following{/other_user}",
"gists_url": "https://api.github.com/users/rmc135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rmc135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rmc135/subscriptions",
"organizations_url": "https://api.github.com/users/rmc135/orgs",
"repos_url": "https://api.github.com/users/rmc135/repos",
"events_url": "https://api.github.com/users/rmc135/events{/privacy}",
"received_events_url": "https://api.github.com/users/rmc135/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
The example at:
https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling
...mentions taking half an hour on a K80. I'm using an M40 24GB, which I would reasonably believe has sufficient VRAM, but it dies with OOM at startup. I used the exact commandline shown on the page.
Dropping -per_device_train_batch_size and --per_device_eval_batch_size from 8 to 4 succeeds, but it's still using ~15GB of VRAM
Unsure whether the WikiText-2 dataset has changed, or transformers has changed, or the model has changed, or there's something different about VRAM usage (eg default data type size) between the K80 and M40 24GB.
I'm just beginning with transformers fine-tuning which is why I was running the example commandline. Apologies if I'm missing something obvious.
Ubuntu 22.04 LTS
Python 3.10.6 running under venv
transformers 4.27.0.dev0-py3.10.egg
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21387/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21386
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21386/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21386/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21386/events
|
https://github.com/huggingface/transformers/issues/21386
| 1,564,316,476
|
I_kwDOCUB6oc5dPZM8
| 21,386
|
getting hidden_states in a causal menner
|
{
"login": "amit-sofer",
"id": 77002276,
"node_id": "MDQ6VXNlcjc3MDAyMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/77002276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amit-sofer",
"html_url": "https://github.com/amit-sofer",
"followers_url": "https://api.github.com/users/amit-sofer/followers",
"following_url": "https://api.github.com/users/amit-sofer/following{/other_user}",
"gists_url": "https://api.github.com/users/amit-sofer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amit-sofer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amit-sofer/subscriptions",
"organizations_url": "https://api.github.com/users/amit-sofer/orgs",
"repos_url": "https://api.github.com/users/amit-sofer/repos",
"events_url": "https://api.github.com/users/amit-sofer/events{/privacy}",
"received_events_url": "https://api.github.com/users/amit-sofer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should use the [forums](https://discuss.huggingface.co/) for a question like this, as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### Feature request
i want to use the roberta model in the following way:
given a list of N tokens, i want the model to compute a hidden_state for each of the N tokens in a causal way, meaning the first token hidden_state is computed based only on the first token, the second hidden_state is computed based on the first two tokens, the third hidden_state is computed based on the first three tokens and so on.
additionally, i want a CLS token that his hidden_state will be computed based on all the input tokens.
there seems like there is no flag or input which will make this. or is there?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21386/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21385
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21385/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21385/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21385/events
|
https://github.com/huggingface/transformers/pull/21385
| 1,564,137,774
|
PR_kwDOCUB6oc5I55os
| 21,385
|
Do not log the generation config for each prediction step in TrainerSeq2Seq
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
`generation_config` is currently initialized every time `generate` is called with `generation_config=None` (see [here](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183)). This will also log the generation config (see [here](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/configuration_utils.py#L557)). Therefore, it will be logged for each iteration of the evaluation loop of `TrainerSeq2Seq`.
To avoid this behavior, this PR introduces a hack that sets `self.model.generation_config._from_model_config` to `False` after the first call to `generate` in `TrainerSeq2Seq` to ensure that 1) the right generation config has been initialized, 2) it will not be initialized in the following interations.
Internal discussion [here](https://huggingface.slack.com/archives/C01N44FJDHT/p1675159917133549).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21385/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21385/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21385",
"html_url": "https://github.com/huggingface/transformers/pull/21385",
"diff_url": "https://github.com/huggingface/transformers/pull/21385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21385.patch",
"merged_at": 1675173923000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21384
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21384/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21384/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21384/events
|
https://github.com/huggingface/transformers/pull/21384
| 1,564,079,275
|
PR_kwDOCUB6oc5I5tJw
| 21,384
|
[torch] remove deprecated uint8 in favor of bool
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Waiting for the CI to work again. ",
"Ok now the real clean up is starting, have to identify all attention masks, vs the causal masks. I think it is a good thing, will help understanding \r\n",
"The way we handle attention mask is not normalised throughout the library. \r\nDiving deeper might not be the best idea as there can be some backward incompatibilities: \r\n- if we modify all the methods that handle attention masks that are define at the beginning of most of the modeling files, it is gonna break things for potential users. \r\n- if we modify the output of the tokenizer, it is the same.\r\n\r\nIn conclusion the simplest fix is to dig where `uint8` are used, otherwise ignore. Whenever the uint8 are converted to `torch.bool` the rest of the code that depends on it should also be updated.",
"Last check, I need to make sure these new mask don't go through a `1.0 - mask` afterwards and will be good to go.\r\nEDIT: looks good, everything goes to a torch.where",
"Test failing is unrelated, merging "
] | 1,675
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
This should fix #21013.
Have not ran the tests yet, so leaving as draft
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21384/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21384",
"html_url": "https://github.com/huggingface/transformers/pull/21384",
"diff_url": "https://github.com/huggingface/transformers/pull/21384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21384.patch",
"merged_at": 1677494762000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21383
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21383/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21383/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21383/events
|
https://github.com/huggingface/transformers/pull/21383
| 1,564,054,970
|
PR_kwDOCUB6oc5I5oFe
| 21,383
|
[Docs] Minor fixes
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR:
- moves TAPAS and LayoutLM to the "multimodal" section rather than "text", as these models leverage more modalities than just text (TAPAS leverages row and column information, LayoutLM leverages 2D coordinates).
- adds a figure for DETA and fixes the one of UPerNet
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21383/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21383",
"html_url": "https://github.com/huggingface/transformers/pull/21383",
"diff_url": "https://github.com/huggingface/transformers/pull/21383.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21383.patch",
"merged_at": 1675174393000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21382
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21382/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21382/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21382/events
|
https://github.com/huggingface/transformers/pull/21382
| 1,564,053,772
|
PR_kwDOCUB6oc5I5n1j
| 21,382
|
Simplify column_names in run_clm/mlm
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is much better - thank you, @lhoestq!"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
Following https://github.com/huggingface/transformers/pull/21343
Just a minor change to simplify the code, and fix a small bug (`column_names` needs to be a list to be able to call `column_names[0]`)
cc @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21382/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21382",
"html_url": "https://github.com/huggingface/transformers/pull/21382",
"diff_url": "https://github.com/huggingface/transformers/pull/21382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21382.patch",
"merged_at": 1675175027000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21381
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21381/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21381/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21381/events
|
https://github.com/huggingface/transformers/issues/21381
| 1,563,990,295
|
I_kwDOCUB6oc5dOJkX
| 21,381
|
gradient checkpointing disables requires_grad when freezing part of models (fix with use_reentrant=False)
|
{
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for flagging the issue. We don't offer APIs to freeze the model in the Trainer (as it has never shown better results for fine-tuning, quite the opposite), so we can leave this issue to show how to solve the problem, but we won't really incorporate it in Transformers.",
"I think a lot of people use `transformers` models without using the Trainer (so with their own training script or, e.g. pytorch lightning that *does* provide an API to freeze models), but it’s your call :)",
"in case someone needs quick fix for their custom training pipelines\r\nFor inspiration:\r\n```python\r\n if gradient_checkpointing_enabled:\r\n from functools import partial\r\n notfailing_checkpoint = partial(torch.utils.checkpoint.checkpoint, use_reentrant=False)\r\n torch.utils.checkpoint.checkpoint = notfailing_checkpoint\r\n model.gradient_checkpointing_enable()\r\n```",
"qq: this mitigation is tested on only 1 GPU or multiple GPUs? On multiple GPUs, I saw this error: \r\nAssertionError: Expects storage to be allocated",
"I only tested with 1 GPU",
"> I only tested with 1 GPU\r\n\r\nas I expected. Let's wait for some updates from the pytorch side.",
"Just hit this today... a simple and efficient way to fine tune LLMs is to just train some layers, and of course checkpointing would help a lot here, so IMO would be great to be able to specify `use_reentrant` in Transformers!",
"Hi @harpone \r\nWith https://github.com/huggingface/transformers/pull/27020 being merged, you can do\r\n```python\r\nmodel.enable_gradient_checkpointing(gradient_checkpointing_kwargs={\"use_reentrant\": False})\r\n```\r\nTo enable `use_reentrant=False`."
] | 1,675
| 1,698
| 1,675
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-4.18.0-372.36.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes but not relevant
- Using distributed or parallel set-up in script?: no but not relevant
### Who can help?
trainer/PyTorch: @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When freezing the first layers of a model (e.g. including embeddings) *and* using gradient-checkpointing, **all** gradient calculation will be disabled, i.e. the output will have `requires_grad==False`. See https://discuss.pytorch.org/t/checkpoint-with-no-grad-requiring-inputs-problem/19117/7 for explanations. I believe this is the issue encountered by @entslscheia in https://github.com/huggingface/transformers/issues/16276 (apparently unsolved).
Here is a sample code to reproduce with `BertModel`, but I think that gradient-checkpointing is implemented like this everywhere in the library:
```py
In [1]: from transformers import BertTokenizer, BertModel
In [2]: model = BertModel.from_pretrained('../models/bert-base-uncased/')
In [3]: tokenizer = BertTokenizer.from_pretrained('../models/bert-base-uncased/')
In [4]: inputs = tokenizer(['foo', 'bar'], return_tensors='pt')
# enable gradient checkpointing
In [5]: model.encoder.gradient_checkpointing = True
# freezing the first input layers (here the embeddings)
In [8]: for p in model.embeddings.parameters():
...: p.requires_grad=False
In [9]: output = model(**inputs)
# expected True
In [12]: output.last_hidden_state.requires_grad
Out[12]: False
# note that all weights of the model have requires_grad==True except for the embeddings
In [15]: model.encoder.layer[0].output.dense.weight.requires_grad
Out[15]: True
```
As myself, you might find out about this while training a model, because obviously if you pass `None` gradients to an optimizer, it will not be happy. You might encounter the following warning: `/gpfswork/rech/fih/usl47jg/miniconda3/envs/datasets/lib/python3.10/site-packages/torch/utils/checkpoint.py:25: UserWarning: None of the inputs have requires_grad=True. Gradients will be None` and then the following error: `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`.
### Expected behavior
A quick fix for me was to set `use_reentrant=False` when calling `torch.utils.checkpoint.checkpoint` (e.g. in https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L600). Note that this will be the future default in "future versions of PyTorch" ([according to the doc](https://pytorch.org/docs/stable/checkpoint.html) without further precision).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21381/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21380
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21380/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21380/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21380/events
|
https://github.com/huggingface/transformers/pull/21380
| 1,563,990,281
|
PR_kwDOCUB6oc5I5aQP
| 21,380
|
Update `Graphormer` and fix its `torchscript` test failures
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Update `Graphormer` and fix its `torchscript` test failures
cc @clefourrier for reference
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21380/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21380",
"html_url": "https://github.com/huggingface/transformers/pull/21380",
"diff_url": "https://github.com/huggingface/transformers/pull/21380.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21380.patch",
"merged_at": 1675182746000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21379
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21379/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21379/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21379/events
|
https://github.com/huggingface/transformers/issues/21379
| 1,563,845,759
|
I_kwDOCUB6oc5dNmR_
| 21,379
|
Add support to MPNetForCausalLM
|
{
"login": "jwengr",
"id": 58577380,
"node_id": "MDQ6VXNlcjU4NTc3Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/58577380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwengr",
"html_url": "https://github.com/jwengr",
"followers_url": "https://api.github.com/users/jwengr/followers",
"following_url": "https://api.github.com/users/jwengr/following{/other_user}",
"gists_url": "https://api.github.com/users/jwengr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwengr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwengr/subscriptions",
"organizations_url": "https://api.github.com/users/jwengr/orgs",
"repos_url": "https://api.github.com/users/jwengr/repos",
"events_url": "https://api.github.com/users/jwengr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwengr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### Feature request
Add MPNetForCausalLM for various usage.
### Motivation
The pre-trained mpnet model provides the best performance in [sentence embedding](https://www.sbert.net/docs/pretrained_models.html)
And it can aim for additional performance improvements through using [TSDAE (Tranformer-based Denoising AutoEncoder)](https://www.sbert.net/examples/unsupervised_learning/README.html#tsdae).
For using TSDAE, model need decoder part implementation(MPNetForCausalLM). but current MPNet model does not support MPNetForCausalLM
With support to MPNetForCausalLM, all the decoding functions will be possible for mpnet models, e.g. the TSDAE learning, seq2seq tasks, etc.
simillar issue : [Add support to DistilBertLMHeadModel](https://github.com/huggingface/transformers/issues/14737)
### Your contribution
I have been working on the details by forking the recent master branch , and if there is no problem, I would like to proceed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21379/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21378
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21378/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21378/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21378/events
|
https://github.com/huggingface/transformers/issues/21378
| 1,563,436,537
|
I_kwDOCUB6oc5dMCX5
| 21,378
|
UL2 Training with HF Trainer + DeepSpeed Zero3 Results in CUDA Illegal Memory Exception
|
{
"login": "michaelroyzen",
"id": 45830328,
"node_id": "MDQ6VXNlcjQ1ODMwMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelroyzen",
"html_url": "https://github.com/michaelroyzen",
"followers_url": "https://api.github.com/users/michaelroyzen/followers",
"following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions",
"organizations_url": "https://api.github.com/users/michaelroyzen/orgs",
"repos_url": "https://api.github.com/users/michaelroyzen/repos",
"events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelroyzen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have never tried running UL2 - please help me to reproduce it\r\n\r\nand of course for the future do follow the instructions from the error message to re-running with ` =1` (except this feature is broken in recent NCCL (pt-1.13) and it'll hang https://github.com/NVIDIA/nccl/issues/750). The async nature often makes it impossible to get a real traceback and `CUDA_LAUNCH_BLOCKING=1` turns async mode off and gives you a normal traceback.",
"Thank you, @stas00. This is the error with `CUDA_LAUNCH_BLOCKING=1`:\r\n\r\n```\r\n[2023-01-31 01:03:02,046] [INFO] [utils.py:827:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]\r\n[2023-01-31 01:03:02,047] [INFO] [utils.py:832:see_memory_usage] MA 4.56 GB Max_MA 4.56 GB CA 5.48 GB Max_CA 5 GB \r\n[2023-01-31 01:03:02,048] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 30.74 GB, percent = 2.3%\r\nParameter Offload: Total persistent parameters: 664576 in 164 params\r\n[2023-01-31 01:03:02,287] [INFO] [utils.py:827:see_memory_usage] DeepSpeedZeRoOffload initialize [end]\r\n[2023-01-31 01:03:02,289] [INFO] [utils.py:832:see_memory_usage] MA 4.56 GB Max_MA 4.56 GB CA 5.48 GB Max_CA 5 GB \r\n[2023-01-31 01:03:02,289] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 30.59 GB, percent = 2.3%\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): NCCL Error 1: unhandled cuda error\r\n[2023-01-31 01:03:08,861] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26370\r\n[2023-01-31 01:03:08,879] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26371\r\n[2023-01-31 01:03:08,879] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26372\r\n[2023-01-31 01:03:08,894] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26373\r\n[2023-01-31 01:03:08,908] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26374\r\n[2023-01-31 01:03:09,454] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26375\r\n[2023-01-31 01:03:09,471] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26376\r\n[2023-01-31 01:03:09,485] [INFO] [launch.py:286:sigkill_handler] Killing subprocess 26377\r\n```",
"Hmm, I have no idea based on the log. Thank you for sharing it, Michael.\r\n\r\nHow do I reproduce the problem?\r\n\r\nIs it possible that you're running out of cpu memory? sometimes you get cpu-oom event and the program gets culled in the middle of the run, but usually the OS should log this event in the console or syslog.",
"You can reproduce the problem by trying to fine-tune UL2 in BF16 using DeepSpeed Zero2/Zero3 and the HF Trainer. Dataset doesn't seem to matter, I think any Seq2Seq fine-tuning script should reproduce it.\r\n\r\nI doubt it's a resource issue. It's GCP's a2-ultragpu instance with 1.3TB of CPU mem. GPU memory also seems to be fine. I remember training a UL2 model back in September with DeepSpeed successfully, but now I can't seem to.\r\n\r\nDo you have access to an A100 node to try this out? ",
"Sounds good.\r\n\r\nBut why is it so difficult to copy-n-paste the commands and configs that fail for you and not have me figure everything out from scratch? Please meet me half way.",
"Okay, my bad. It's just all custom, but here goes.\r\n\r\nTrain:\r\n\r\n```\r\nimport functools\r\nimport json\r\nimport argparse\r\nfrom datetime import datetime\r\nimport os\r\n\r\nfrom utils.dataset_formats import Seq2SeqDataset\r\n\r\nimport numpy as np\r\nimport nltk\r\nimport wandb\r\nimport torch\r\n\r\nfrom datasets import load_metric\r\nfrom transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer, AutoTokenizer, AutoModelForSeq2SeqLM, AddedToken\r\n\r\nclass Trainer:\r\n def __init__(self, args) -> None:\r\n self.train_dataset = None\r\n self.val_dataset = None\r\n self.args = args\r\n self.metric = load_metric(\"rouge\")\r\n self.trainer = None\r\n\r\n # Get a Seq2SeqDataset from a json file\r\n def prepare_datsets_for_training(self) -> None:\r\n train_data_json = json.load(open(self.args.train))\r\n val_data_json = json.load(open(self.args.val))\r\n\r\n self.train_dataset = Seq2SeqDataset(train_data_json)\r\n self.val_dataset = Seq2SeqDataset(val_data_json)\r\n\r\n self.tokenizer = None\r\n\r\n # Train and save a Seq2Seq model\r\n def train_model(self) -> AutoModelForSeq2SeqLM:\r\n training_args = Seq2SeqTrainingArguments(output_dir=self.args.save_dir, num_train_epochs=self.args.num_epochs, logging_steps=1, save_steps=self.args.save_steps or self.args.eval_steps,\r\n per_device_train_batch_size=self.args.per_device_train_batch_size, per_device_eval_batch_size=self.args.per_device_eval_batch_size,\r\n logging_dir=args.save_dir, bf16=self.args.bf16, bf16_full_eval=self.args.bf16, fp16=False, gradient_accumulation_steps=self.args.gradient_accumulation_steps, \r\n overwrite_output_dir=True, evaluation_strategy=\"steps\", eval_steps=self.args.eval_steps,\r\n predict_with_generate=True, report_to=\"wandb\", learning_rate=args.learning_rate, lr_scheduler_type=\"cosine\", gradient_checkpointing=self.args.gradient_checkpointing, deepspeed=args.deepspeed, log_level=\"error\", log_level_replica=\"error\")\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(self.args.model)\r\n added_tokens = [AddedToken(\"<\"), AddedToken(\"<SOURCE>\"), AddedToken(\"{\"), AddedToken(\"}\"), AddedToken(\"\\n\"), AddedToken(\"\\t\"), AddedToken(\" \"), AddedToken(\" \"), AddedToken(\" \"), AddedToken(\"`\")]\r\n tokenizer.add_special_tokens({\"additional_special_tokens\": added_tokens})\r\n tokenizer.save_pretrained(self.args.save_dir + \"/tokenizer\")\r\n self.tokenizer = tokenizer\r\n\r\n model = AutoModelForSeq2SeqLM.from_pretrained(self.args.model)\r\n \r\n os.environ[\"WANDB_PROJECT\"] = self.args.name\r\n\r\n if torch.distributed.get_rank() == 0:\r\n run_name = datetime.now().strftime('%b-%d-%I%M%p-%G')\r\n wandb.tensorboard.patch(root_logdir=self.args.save_dir)\r\n wandb.init(name=run_name, entity=\"hellocognition\")\r\n\r\n nltk.download('punkt')\r\n\r\n # Barrier for distributed training\r\n print(\"Rank {} reached barrier 1\".format(torch.distributed.get_rank()))\r\n torch.distributed.barrier()\r\n\r\n model_collate_fn = functools.partial(\r\n self.make_batch, tokenizer=tokenizer, max_input_len=self.args.max_input_len, max_target_len=self.args.max_target_len,\r\n )\r\n\r\n assert self.train_dataset and self.val_dataset\r\n\r\n self.trainer = Seq2SeqTrainer(model=model, args=training_args, train_dataset=self.train_dataset,\r\n eval_dataset=self.val_dataset, data_collator=model_collate_fn, compute_metrics=self.compute_metrics)\r\n\r\n # Barrier for distributed training\r\n print(\"Rank {} reached barrier 2\".format(torch.distributed.get_rank()))\r\n torch.distributed.barrier()\r\n \r\n self.trainer.train()\r\n\r\n if torch.distributed.get_rank() == 0:\r\n trainer.save(self.args.save_dir + '/final_model')\r\n\r\n return model\r\n\r\n # Truncate examples to max input lengths and make a torch.Tensor input/output batch\r\n def make_batch(self, example_list: list, tokenizer: AutoTokenizer, max_input_len: int, max_target_len: int):\r\n model_input_list = [model_input for model_input, _ in example_list]\r\n gold_answer_list = [gold_answer for _, gold_answer in example_list]\r\n model_input_tokens = tokenizer.batch_encode_plus(model_input_list, max_length=max_input_len, padding=True, truncation=True)\r\n model_input_ids, model_input_mask = (\r\n torch.tensor(model_input_tokens[\"input_ids\"]),\r\n torch.tensor(model_input_tokens[\"attention_mask\"])\r\n )\r\n gold_answer_tokens = tokenizer.batch_encode_plus(gold_answer_list, max_length=max_target_len, padding=True, truncation=True)\r\n gold_answer_ids, gold_answer_mask = (\r\n torch.tensor(gold_answer_tokens[\"input_ids\"]),\r\n torch.tensor(gold_answer_tokens[\"attention_mask\"])\r\n )\r\n\r\n lm_labels = gold_answer_ids[:, :].contiguous().clone()\r\n # Set pad tokens to -100 to be ignored by cross entropy loss\r\n lm_labels[gold_answer_mask[:, :].contiguous() == 0] = -100\r\n model_inputs = {\r\n \"input_ids\": model_input_ids,\r\n \"attention_mask\": model_input_mask,\r\n \"labels\": lm_labels,\r\n }\r\n return model_inputs\r\n\r\n # Compute ROUGE metrics\r\n def compute_metrics(self, eval_pred: list):\r\n predictions, labels = eval_pred\r\n decoded_preds = self.tokenizer.batch_decode(predictions, skip_special_tokens=False)\r\n # Replace -100 in the labels as we can't decode them.\r\n labels = np.where(labels != -100, labels, self.tokenizer.pad_token_id)\r\n decoded_labels = self.tokenizer.batch_decode(labels, skip_special_tokens=False)\r\n \r\n # Rouge expects a newline after each sentence\r\n decoded_preds = [\"\\n\".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]\r\n decoded_labels = [\"\\n\".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]\r\n \r\n result = self.metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n # Extract a few results\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n \r\n # Add mean generated length\r\n prediction_lens = [np.count_nonzero(pred != self.tokenizer.pad_token_id) for pred in predictions]\r\n result[\"gen_len\"] = np.mean(prediction_lens)\r\n \r\n return {k: round(v, 4) for k, v in result.items()}\r\n\r\nif __name__ == \"__main__\":\r\n # parse args\r\n parser = argparse.ArgumentParser(description='Train Argument Parser')\r\n parser.add_argument('--name', help='name of the model to be trained using the modeltype-datasetname convention, e.g. flan-t5-3B-gpt3', required=True)\r\n parser.add_argument('--model', help='name or path of the model to train, e.g. google/flan-t5-xl', required=True)\r\n parser.add_argument('--train', help='path to the json train dataset', required=True)\r\n parser.add_argument('--val', help='path to the json val dataset', required=True)\r\n parser.add_argument('--max_input_len', type=int, help='maximum number of tokens allowed in training input', required=True)\r\n parser.add_argument('--max_target_len', type=int, help='maximum number of tokens allowed in training target output', required=True)\r\n parser.add_argument('--save_dir', help='save directory after training', required=True)\r\n parser.add_argument('--num_epochs', type=int, help='number of epochs to train', required=True)\r\n parser.add_argument('--learning_rate', type=float, help='learning rate', required=True)\r\n parser.add_argument('--eval_steps', type=int, help='how many steps to eval after', required=True)\r\n parser.add_argument('--save_steps', type=int, help='how many steps to save after', required=False)\r\n parser.add_argument('--gradient_accumulation_steps', type=int, help='how many steps to accumulate gradient for (increases effective batch size)', required=True)\r\n parser.add_argument('--per_device_train_batch_size', type=int, help='train batch size', required=True)\r\n parser.add_argument('--per_device_eval_batch_size', type=int, help='eval batch size', required=True)\r\n parser.add_argument('--bf16', help='enable bfloat16 training and eval', default=False, action=\"store_true\")\r\n parser.add_argument('--gradient_checkpointing', help='allow larger sequence lengths to fit in memory', default=False, action=\"store_true\")\r\n parser.add_argument('--deepspeed', help='path of the deepspeed config', required=True)\r\n parser.add_argument('--local_rank')\r\n args = parser.parse_args()\r\n\r\n # log into wandb\r\n os.environ['WANDB_API_KEY'] = \"WANDB-KEY\"\r\n\r\n # make trainer\r\n trainer = Trainer(args)\r\n\r\n # prepare dataset\r\n trainer.prepare_datsets_for_training()\r\n\r\n # perform training\r\n trained_model = trainer.train_model()\r\n```\r\n\r\nSeq2Seq Dataset:\r\n```\r\nfrom torch.utils.data import Dataset\r\n\r\nclass Seq2SeqDataset(Dataset):\r\n def __init__(self, examples):\r\n self.examples = examples\r\n\r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def make_example(self, i):\r\n prompt = self.examples[i][\"prompt\"]\r\n example_input = self.examples[i][\"example_input\"]\r\n\r\n gold_answer = self.examples[i][\"gold_answer\"]\r\n model_input = \"{}\\n{}\".format(prompt, example_input)\r\n \r\n return (model_input, gold_answer)\r\n\r\n def __getitem__(self, i):\r\n return self.make_example(i)\r\n```\r\n\r\nds_config:\r\n```\r\n{\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e12,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 2e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"gather_16bit_weights_on_model_save\": true\r\n },\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\"\r\n }\r\n }\r\n }\r\n```\r\n\r\nThis works great with flan-t5, but fails on UL2. Here is the detailed error that I get without `CUDA_LAUNCH_BLOCKING=1`:\r\n\r\n```\r\n[W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) \r\nterminate called after throwing an instance of 'c10::Error'\r\n what(): CUDA error: an illegal memory access was encountered\r\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. \r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nException raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:31 (most recent call first): \r\nframe #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f49bd1a6457 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) \r\nframe #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f49bd1703ec in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)\r\nframe #2: c10::cuda::c10_cuda_check_implementation(std::string const&, std::string const&, int, bool) + 0xb4 (0x7f49bd246c64 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)\r\nframe #3: <unknown function> + 0x1e0dc (0x7f49bd21e0dc in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) \r\nframe #4: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x244 (0x7f49bd221054 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) \r\nframe #5: <unknown function> + 0x4f6823 (0x7f49aa4ab823 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) \r\nframe #6: c10::TensorImpl::~TensorImpl() + 0x1a0 (0x7f49bd1869e0 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) \r\nframe #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f49bd186af9 in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) \r\nframe #8: std::vector<at::Tensor, std::allocator<at::Tensor> >::~vector() + 0x8b (0x7f49aa4add1b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\r\nframe #9: c10d::ProcessGroupNCCL::WorkNCCL::~WorkNCCL() + 0x8c (0x7f491bf3ae8c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so) \r\nframe #10: c10d::ProcessGroupNCCL::WorkNCCL::~WorkNCCL() + 0x9 (0x7f491bf3b349 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cpp.so) \r\nframe #11: <unknown function> + 0xbe302c (0x7f49aab9802c in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) \r\nframe #12: <unknown function> + 0x3e4272 (0x7f49aa399272 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) \r\nframe #13: <unknown function> + 0x3e51af (0x7f49aa39a1af in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) \r\nframe #14: <unknown function> + 0xe5698 (0x56347c18d698 in /opt/conda/bin/python3.7) \r\nframe #15: <unknown function> + 0x1f7b89 (0x56347c29fb89 in /opt/conda/bin/python3.7) \r\nframe #16: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #17: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) \r\nframe #18: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) \r\nframe #19: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #20: _PyEval_EvalFrameDefault + 0xb2a (0x56347c26010a in /opt/conda/bin/python3.7) \r\nframe #21: <unknown function> + 0x1f7b66 (0x56347c29fb66 in /opt/conda/bin/python3.7) \r\nframe #22: _PyFunction_FastCallDict + 0xaef (0x56347c1a78cf in /opt/conda/bin/python3.7) \r\nframe #23: _PyEval_EvalFrameDefault + 0x1f86 (0x56347c261566 in /opt/conda/bin/python3.7) \r\nframe #24: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) \r\nframe #25: _PyFunction_FastCallKeywords + 0x320 (0x56347c1d7570 in /opt/conda/bin/python3.7) \r\nframe #26: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #27: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) \r\nframe #28: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) \r\nframe #29: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #30: _PyEval_EvalFrameDefault + 0xb2a (0x56347c26010a in /opt/conda/bin/python3.7) \r\nframe #31: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) \r\nframe #32: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) \r\nframe #33: <unknown function> + 0x185ea4 (0x56347c22dea4 in /opt/conda/bin/python3.7) \r\nframe #34: _PyObject_FastCallKeywords + 0x18c (0x56347c238b8c in /opt/conda/bin/python3.7) \r\nframe #35: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) \r\nframe #36: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) \r\nframe #37: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) \r\nframe #38: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #39: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) \r\nframe #40: _PyFunction_FastCallKeywords + 0x184 (0x56347c1d73d4 in /opt/conda/bin/python3.7) \r\nframe #41: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #42: _PyEval_EvalFrameDefault + 0x4c8a (0x56347c26426a in /opt/conda/bin/python3.7) \r\nframe #43: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) \r\nframe #44: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) \r\nframe #45: <unknown function> + 0x185ea4 (0x56347c22dea4 in /opt/conda/bin/python3.7) \r\nframe #46: _PyObject_FastCallKeywords + 0x18c (0x56347c238b8c in /opt/conda/bin/python3.7) \r\nframe #47: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) \r\nframe #48: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) \r\nframe #49: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) \r\nframe #50: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) \r\nframe #51: _PyEval_EvalFrameDefault + 0x1f86 (0x56347c261566 in /opt/conda/bin/python3.7) \r\nframe #52: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) \r\nframe #53: _PyFunction_FastCallKeywords + 0x320 (0x56347c1d7570 in /opt/conda/bin/python3.7) \r\nframe #54: <unknown function> + 0x191de8 (0x56347c239de8 in /opt/conda/bin/python3.7) \r\nframe #55: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) \r\nframe #56: _PyEval_EvalCodeWithName + 0x33d (0x56347c1a5ccd in /opt/conda/bin/python3.7) \r\nframe #57: _PyFunction_FastCallDict + 0x6a0 (0x56347c1a7480 in /opt/conda/bin/python3.7) \r\nframe #58: <unknown function> + 0x185a63 (0x56347c22da63 in /opt/conda/bin/python3.7) \r\nframe #59: PyObject_Call + 0x6c (0x56347c1b09dc in /opt/conda/bin/python3.7)\r\nframe #60: <unknown function> + 0x21d3e7 (0x56347c2c53e7 in /opt/conda/bin/python3.7) \r\nframe #61: _PyObject_FastCallKeywords + 0x3cb (0x56347c238dcb in /opt/conda/bin/python3.7) \r\nframe #62: <unknown function> + 0x191f79 (0x56347c239f79 in /opt/conda/bin/python3.7) \r\nframe #63: _PyEval_EvalFrameDefault + 0x16bb (0x56347c260c9b in /opt/conda/bin/python3.7) \r\n```",
"and cmd line?",
"```\r\ndeepspeed train.py \\\r\n --name ul2-test \\\r\n --model google/ul2 \\\r\n --train <your train file>.json \\\r\n --val <your val file>.json \\\r\n --max_input_len 128 \\\r\n --max_target_len 512 \\\r\n --save_dir <your save directory path> \\\r\n --num_epochs 3 \\\r\n --learning_rate 2e-4 \\\r\n --eval_steps 3000 \\\r\n --gradient_accumulation_steps 8 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --deepspeed utils/ds_config_zero3.json \\\r\n --bf16\r\n```\r\n\r\nI can't share the train files, unfortunately, but as per the Seq2SeqDataset schema, the train/val files are a list of\r\n```\r\n{\r\n \"prompt\": \"prompt\",\r\n \"example_input\": \"input\",\r\n \"gold_answer\": \"gold_answer\"\r\n}\r\n``` \r\nobjects dumped to a JSON file.",
"ok, then I can't support you, Michael.\r\n\r\nOnce you provide a way for me to reproduce the problem I'd be happy to try to understand and come up with a solution.\r\n\r\n",
"Okay, my apologies again. Here are some dummy files that can be used to reproduce the issue.\r\n\r\nhttps://phind-demo.s3.amazonaws.com/demo_train.json\r\nhttps://phind-demo.s3.amazonaws.com/demo_val.json\r\n\r\nSo the train script would be\r\n\r\n```\r\ndeepspeed train.py \\\r\n --name ul2-test \\\r\n --model google/ul2 \\\r\n --train demo_train.json \\\r\n --val demo_val.json \\\r\n --max_input_len 128 \\\r\n --max_target_len 512 \\\r\n --save_dir <your save directory path> \\\r\n --num_epochs 3 \\\r\n --learning_rate 2e-4 \\\r\n --eval_steps 3000 \\\r\n --gradient_accumulation_steps 8 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --deepspeed ds_config.json \\\r\n --bf16\r\n```\r\n",
"Now please test that the code you shared works. As it fails here:\r\n\r\n```\r\n File \"train.py\", line 183, in <module>\r\n trained_model = trainer.train_model()\r\n File \"train.py\", line 96, in train_model\r\n self.trainer.train()\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 1557, in train\r\n return inner_training_loop(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 1569, in _inner_training_loop\r\n train_dataloader = self.get_train_dataloader()\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 835, in get_train_dataloader\r\n train_dataset = self._remove_unused_columns(train_dataset, description=\"training\")\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 711, in _remove_unused_columns\r\n ignored_columns = list(set(dataset.column_names) - set(signature_columns))\r\n File \"/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1673, in column_names\r\n return self._data.column_names\r\nAttributeError: 'Seq2SeqDataset' object has no attribute '_data'\r\n```\r\n\r\nI dumped your `Seq2SeqDataset` into your main script (`trainer.py`) so the line numbers won't match with your original main script.\r\n\r\nAlso does the problem still occur if you use a much smaller ul2 model? e.g. I'm trying with `yhavinga/ul2-small-dutch-english` - at this point we don't care for outcome, just to reproduce your problem.\r\n\r\nI'm trying on 1 gpu first:\r\n\r\n```\r\nrm -rf save_dir; CUDA_VISIBLE_DEVICES=0 deepspeed train.py \\\r\n --name ul2-test \\\r\n --model yhavinga/ul2-small-dutch-english \\\r\n --train demo_train.json \\\r\n --val demo_val.json \\\r\n --max_input_len 128 \\\r\n --max_target_len 512 \\\r\n --save_dir save_dir \\\r\n --num_epochs 3 \\\r\n --learning_rate 2e-4 \\\r\n --eval_steps 3000 \\\r\n --gradient_accumulation_steps 8 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --deepspeed ds_config.json \\\r\n --bf16\r\n```\r\n\r\nLet's try to come up with the smallest possible set up that reproduces the issue, then it'll be easy to debug.",
"I've just tested the scripts with flan-t5-small (ul2-small-dutch-english had an odd CUDA error, a different one than the one described above). Additionally, ul2-small-dutch-english is not a representative example as it uses a different activation function from google's UL2 (gated gelu vs gated silu).\r\n\r\nPlease refer to my S3 bucket for my scripts that I've confirmed run on my machine and their corresponding directory structure. \r\n\r\n- train.py (https://phind-demo.s3.amazonaws.com/train.py)\r\n- ds_config.json (https://phind-demo.s3.amazonaws.com/ds_config.json)\r\n- utils folder containing dataset_formats.py, which has the Seq2SeqDataset class (https://phind-demo.s3.amazonaws.com/utils/dataset_formats.py)\r\n\r\nWith these exact files and the latest version of transformers/datasets, I've just been able to run:\r\n\r\n```\r\ndeepspeed train.py \\\r\n --name ul2-test \\\r\n --model google/flan-t5-small \\\r\n --train demo_train.json \\\r\n --val demo_val.json \\\r\n --max_input_len 128 \\\r\n --max_target_len 512 \\\r\n --save_dir save_dir \\\r\n --num_epochs 3 \\\r\n --learning_rate 2e-4 \\\r\n --eval_steps 3000 \\\r\n --gradient_accumulation_steps 8 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --deepspeed ds_config.json \\\r\n --bf16\r\n```\r\n\r\nBut any of the UL2 models get CUDA errors. Would appreciate your help.\r\n\r\nThanks!\r\n",
"Thank you, Michael. With this last version of your code I can run the example you shared.\r\n\r\nOK, so what is the smallest UL2 model do you still see the problem with? https://huggingface.co/models?sort=downloads&search=ul2\r\n\r\nI run the above code on ` --model Finnish-NLP/ul2-small-nl24-finnish` on 1 and 2 gpus and had no problem.\r\n\r\nAdditionally, once you try a smaller ul2 model, do you get the same problem with a. 1 gpu b. 2 gpus?",
"Running with `--model Finnish-NLP/ul2-small-nl24-finnish` works for me as well with any number of gpus (from 1 to 8).\r\n\r\nBut I don't think it's representative because it uses a different activation function than google/ul2. Unfortunately there are no \"real\" smaller UL2 models, unlike the flan-t5 series where everything is the same except for scale.\r\n\r\nUPDATE: I take that back. yhavinga/ul2-base-en-nl also uses gated-silu. Running that experiment now.",
"Running \r\n\r\n```\r\ndeepspeed train.py \\\r\n --name ul2-test \\\r\n --model yhavinga/ul2-base-en-nl \\\r\n --train demo_train.json \\\r\n --val demo_val.json \\\r\n --max_input_len 128 \\\r\n --max_target_len 512 \\\r\n --save_dir save_dir \\\r\n --num_epochs 3 \\\r\n --learning_rate 2e-4 \\\r\n --eval_steps 3000 \\\r\n --gradient_accumulation_steps 8 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --deepspeed ds_config.json \\\r\n --bf16\r\n```\r\n\r\non 8 gpus, I got\r\n\r\n```\r\nRuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ /home/michael/train.py:164 in <module> │\r\n│ │\r\n│ 161 │ trainer.prepare_datsets_for_training() │\r\n│ 162 │ │\r\n│ 163 │ # perform training │\r\n│ ❱ 164 │ trained_model = trainer.train_model() │\r\n│ 165 │\r\n│ │\r\n│ /home/michael/train.py:77 in train_model │\r\n│ │\r\n│ 74 │ │ print(\"Rank {} reached barrier 2\".format(torch.distributed.get_rank())) │\r\n│ 75 │ │ torch.distributed.barrier() │\r\n│ 76 │ │ │\r\n│ ❱ 77 │ │ self.trainer.train() │\r\n│ 78 │ │ │\r\n│ 79 │ │ if torch.distributed.get_rank() == 0: │\r\n│ 80 │ │ │ trainer.save(self.args.save_dir + '/final_model') │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1531 in train │\r\n│ │\r\n│ 1528 │ │ │ args=args, │\r\n│ 1529 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │\r\n│ 1530 │ │ │ trial=trial, │\r\n│ ❱ 1531 │ │ │ ignore_keys_for_eval=ignore_keys_for_eval, │\r\n│ 1532 │ │ ) │\r\n│ 1533 │ │\r\n│ 1534 │ def _inner_training_loop( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1775 in _inner_training_loop │\r\n│ │\r\n│ 1772 │ │ │ │ │ with model.no_sync(): │\r\n│ 1773 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │\r\n│ 1774 │ │ │ │ else: │\r\n│ ❱ 1775 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │\r\n│ 1776 │ │ │ │ │\r\n│ 1777 │ │ │ │ if ( │\r\n│ 1778 │ │ │ │ │ args.logging_nan_inf_filter │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2523 in training_step │\r\n│ │\r\n│ 2520 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │\r\n│ 2521 │ │ │\r\n│ 2522 │ │ with self.compute_loss_context_manager(): │\r\n│ ❱ 2523 │ │ │ loss = self.compute_loss(model, inputs) │\r\n│ 2524 │ │ │\r\n│ 2525 │ │ if self.args.n_gpu > 1: │\r\n│ 2526 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2555 in compute_loss │\r\n│ │\r\n│ 2552 │ │ │ labels = inputs.pop(\"labels\") │\r\n│ 2553 │ │ else: │\r\n│ 2554 │ │ │ labels = None │\r\n│ ❱ 2555 │ │ outputs = model(**inputs) │\r\n│ 2556 │ │ # Save past state if it exists │\r\n│ 2557 │ │ # TODO: this needs to be fixed and made cleaner later. │\r\n│ 2558 │ │ if self.args.past_index >= 0: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl │\r\n│ │\r\n│ 1191 │ │ # this function, and just call forward. │\r\n│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │\r\n│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │\r\n│ 1195 │ │ # Do not call functions when jit is used │\r\n│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/utils/nvtx.py:11 in wrapped_fn │\r\n│ │\r\n│ 8 │ │ │\r\n│ 9 │ │ def wrapped_fn(*args, **kwargs): │\r\n│ 10 │ │ │ with torch.cuda.nvtx.range(func.__qualname__): │\r\n│ ❱ 11 │ │ │ │ return func(*args, **kwargs) │\r\n│ 12 │ │ │\r\n│ 13 │ │ return wrapped_fn │\r\n│ 14 │ else: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/engine.py:1727 in forward │\r\n│ │\r\n│ 1724 │ │ if self.fp16_auto_cast(): │\r\n│ 1725 │ │ │ inputs = self._cast_inputs_half(inputs) │\r\n│ 1726 │ │ │\r\n│ ❱ 1727 │ │ loss = self.module(*inputs, **kwargs) │\r\n│ 1728 │ │ │\r\n│ 1729 │ │ if self.zero_optimization_partition_weights(): │\r\n│ 1730 │ │ │ # Disable automated discovery of external parameters │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1618 in forward │\r\n│ │\r\n│ 1615 │ │ │ │ head_mask=head_mask, │\r\n│ 1616 │ │ │ │ output_attentions=output_attentions, │\r\n│ 1617 │ │ │ │ output_hidden_states=output_hidden_states, │\r\n│ ❱ 1618 │ │ │ │ return_dict=return_dict, │\r\n│ 1619 │ │ │ ) │\r\n│ 1620 │ │ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): │\r\n│ 1621 │ │ │ encoder_outputs = BaseModelOutput( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1051 in forward │\r\n│ │\r\n│ 1048 │ │ │ │ │ cross_attn_layer_head_mask=cross_attn_layer_head_mask, │\r\n│ 1049 │ │ │ │ │ past_key_value=past_key_value, │\r\n│ 1050 │ │ │ │ │ use_cache=use_cache, │\r\n│ ❱ 1051 │ │ │ │ │ output_attentions=output_attentions, │\r\n│ 1052 │ │ │ │ ) │\r\n│ 1053 │ │ │ │\r\n│ 1054 │ │ │ # layer_outputs is a tuple with: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:680 in forward │\r\n│ │\r\n│ 677 │ │ │ layer_head_mask=layer_head_mask, │\r\n│ 678 │ │ │ past_key_value=self_attn_past_key_value, │\r\n│ 679 │ │ │ use_cache=use_cache, │\r\n│ ❱ 680 │ │ │ output_attentions=output_attentions, │\r\n│ 681 │ │ ) │\r\n│ 682 │ │ hidden_states, present_key_value_state = self_attention_outputs[:2] │\r\n│ 683 │ │ attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs an │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:586 in forward │\r\n│ │\r\n│ 583 │ │ │ layer_head_mask=layer_head_mask, │\r\n│ 584 │ │ │ past_key_value=past_key_value, │\r\n│ 585 │ │ │ use_cache=use_cache, │\r\n│ ❱ 586 │ │ │ output_attentions=output_attentions, │\r\n│ 587 │ │ ) │\r\n│ 588 │ │ hidden_states = hidden_states + self.dropout(attention_output[0]) │\r\n│ 589 │ │ outputs = (hidden_states,) + attention_output[1:] # add attentions if we output │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:498 in forward │\r\n│ │\r\n│ 495 │ │ │ return hidden_states │\r\n│ 496 │ │ │\r\n│ 497 │ │ # get query states │\r\n│ ❱ 498 │ │ query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, │\r\n│ 499 │ │ │\r\n│ 500 │ │ # get key/value states │\r\n│ 501 │ │ key_states = project( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py:114 in forward │\r\n│ │\r\n│ 111 │ │ │ init.uniform_(self.bias, -bound, bound) │\r\n│ 112 │ │\r\n│ 113 │ def forward(self, input: Tensor) -> Tensor: │\r\n│ ❱ 114 │ │ return F.linear(input, self.weight, self.bias) │\r\n│ 115 │ │\r\n│ 116 │ def extra_repr(self) -> str: │\r\n│ 117 │ │ return 'in_features={}, out_features={}, bias={}'.format( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:116 in zero3_linear_wrap │\r\n│ │\r\n│ 113 │\r\n│ 114 def zero3_linear_wrap(input, weight, bias=None): │\r\n│ 115 │ if bias is None: │\r\n│ ❱ 116 │ │ return LinearFunctionForZeroStage3.apply(input, weight) │\r\n│ 117 │ else: │\r\n│ 118 │ │ return LinearFunctionForZeroStage3.apply(input, weight, bias) │\r\n│ 119 │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py:97 in decorate_fwd │\r\n│ │\r\n│ 94 │ def decorate_fwd(*args, **kwargs): │\r\n│ 95 │ │ if cast_inputs is None: │\r\n│ 96 │ │ │ args[0]._fwd_used_autocast = torch.is_autocast_enabled() │\r\n│ ❱ 97 │ │ │ return fwd(*args, **kwargs) │\r\n│ 98 │ │ else: │\r\n│ 99 │ │ │ autocast_context = torch.is_autocast_enabled() │\r\n│ 100 │ │ │ args[0]._fwd_used_autocast = False │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:61 in forward │\r\n│ │\r\n│ 58 │ │ │ # fused op is marginally faster │\r\n│ 59 │ │ │ ret = torch.addmm(bias, input, weight.t()) │\r\n│ 60 │ │ else: │\r\n│ ❱ 61 │ │ │ output = input.matmul(weight.t()) │\r\n│ 62 │ │ │ if bias is not None: │\r\n│ 63 │ │ │ │ output += bias │\r\n│ 64 │ │ │ ret = output │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nRuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`\r\n```\r\n\r\nRunning with CUDA_VISIBLE_DEVICES=0, I get a slightly different error:\r\n\r\n```\r\n─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ /home/michael/train.py:164 in <module> │\r\n│ │\r\n│ 161 │ trainer.prepare_datsets_for_training() │\r\n│ 162 │ │\r\n│ 163 │ # perform training │\r\n│ ❱ 164 │ trained_model = trainer.train_model() │\r\n│ 165 │\r\n│ │\r\n│ /home/michael/train.py:77 in train_model │\r\n│ │\r\n│ 74 │ │ print(\"Rank {} reached barrier 2\".format(torch.distributed.get_rank())) │\r\n│ 75 │ │ torch.distributed.barrier() │\r\n│ 76 │ │ │\r\n│ ❱ 77 │ │ self.trainer.train() │\r\n│ 78 │ │ │\r\n│ 79 │ │ if torch.distributed.get_rank() == 0: │\r\n│ 80 │ │ │ trainer.save(self.args.save_dir + '/final_model') │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1531 in train │\r\n│ │\r\n│ 1528 │ │ │ args=args, │\r\n│ 1529 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │\r\n│ 1530 │ │ │ trial=trial, │\r\n│ ❱ 1531 │ │ │ ignore_keys_for_eval=ignore_keys_for_eval, │\r\n│ 1532 │ │ ) │\r\n│ 1533 │ │\r\n│ 1534 │ def _inner_training_loop( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:1775 in _inner_training_loop │\r\n│ │\r\n│ 1772 │ │ │ │ │ with model.no_sync(): │\r\n│ 1773 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │\r\n│ 1774 │ │ │ │ else: │\r\n│ ❱ 1775 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │\r\n│ 1776 │ │ │ │ │\r\n│ 1777 │ │ │ │ if ( │\r\n│ 1778 │ │ │ │ │ args.logging_nan_inf_filter │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2523 in training_step │\r\n│ │\r\n│ 2520 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │\r\n│ 2521 │ │ │\r\n│ 2522 │ │ with self.compute_loss_context_manager(): │\r\n│ ❱ 2523 │ │ │ loss = self.compute_loss(model, inputs) │\r\n│ 2524 │ │ │\r\n│ 2525 │ │ if self.args.n_gpu > 1: │\r\n│ 2526 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:2555 in compute_loss │\r\n│ │\r\n│ 2552 │ │ │ labels = inputs.pop(\"labels\") │\r\n│ 2553 │ │ else: │\r\n│ 2554 │ │ │ labels = None │\r\n│ ❱ 2555 │ │ outputs = model(**inputs) │\r\n│ 2556 │ │ # Save past state if it exists │\r\n│ 2557 │ │ # TODO: this needs to be fixed and made cleaner later. │\r\n│ 2558 │ │ if self.args.past_index >= 0: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl │\r\n│ │\r\n│ 1191 │ │ # this function, and just call forward. │\r\n│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │\r\n│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │\r\n│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │\r\n│ 1195 │ │ # Do not call functions when jit is used │\r\n│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │\r\n│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/utils/nvtx.py:11 in wrapped_fn │\r\n│ │\r\n│ 8 │ │ │\r\n│ 9 │ │ def wrapped_fn(*args, **kwargs): │\r\n│ 10 │ │ │ with torch.cuda.nvtx.range(func.__qualname__): │\r\n│ ❱ 11 │ │ │ │ return func(*args, **kwargs) │\r\n│ 12 │ │ │\r\n│ 13 │ │ return wrapped_fn │\r\n│ 14 │ else: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/engine.py:1727 in forward │\r\n│ │\r\n│ 1724 │ │ if self.fp16_auto_cast(): │\r\n│ 1725 │ │ │ inputs = self._cast_inputs_half(inputs) │\r\n│ 1726 │ │ │\r\n│ ❱ 1727 │ │ loss = self.module(*inputs, **kwargs) │\r\n│ 1728 │ │ │\r\n│ 1729 │ │ if self.zero_optimization_partition_weights(): │\r\n│ 1730 │ │ │ # Disable automated discovery of external parameters │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1618 in forward │\r\n│ │\r\n│ 1615 │ │ │ │ head_mask=head_mask, │\r\n│ 1616 │ │ │ │ output_attentions=output_attentions, │\r\n│ 1617 │ │ │ │ output_hidden_states=output_hidden_states, │\r\n│ ❱ 1618 │ │ │ │ return_dict=return_dict, │\r\n│ 1619 │ │ │ ) │\r\n│ 1620 │ │ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): │\r\n│ 1621 │ │ │ encoder_outputs = BaseModelOutput( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:1051 in forward │\r\n│ │\r\n│ 1048 │ │ │ │ │ cross_attn_layer_head_mask=cross_attn_layer_head_mask, │\r\n│ 1049 │ │ │ │ │ past_key_value=past_key_value, │\r\n│ 1050 │ │ │ │ │ use_cache=use_cache, │\r\n│ ❱ 1051 │ │ │ │ │ output_attentions=output_attentions, │\r\n│ 1052 │ │ │ │ ) │\r\n│ 1053 │ │ │ │\r\n│ 1054 │ │ │ # layer_outputs is a tuple with: │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:680 in forward │\r\n│ │\r\n│ 677 │ │ │ layer_head_mask=layer_head_mask, │\r\n│ 678 │ │ │ past_key_value=self_attn_past_key_value, │\r\n│ 679 │ │ │ use_cache=use_cache, │\r\n│ ❱ 680 │ │ │ output_attentions=output_attentions, │\r\n│ 681 │ │ ) │\r\n│ 682 │ │ hidden_states, present_key_value_state = self_attention_outputs[:2] │\r\n│ 683 │ │ attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs an │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:586 in forward │\r\n│ │\r\n│ 583 │ │ │ layer_head_mask=layer_head_mask, │\r\n│ 584 │ │ │ past_key_value=past_key_value, │\r\n│ 585 │ │ │ use_cache=use_cache, │\r\n│ ❱ 586 │ │ │ output_attentions=output_attentions, │\r\n│ 587 │ │ ) │\r\n│ 588 │ │ hidden_states = hidden_states + self.dropout(attention_output[0]) │\r\n│ 589 │ │ outputs = (hidden_states,) + attention_output[1:] # add attentions if we output │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py:498 in forward │\r\n│ │\r\n│ 495 │ │ │ return hidden_states │\r\n│ 496 │ │ │\r\n│ 497 │ │ # get query states │\r\n│ ❱ 498 │ │ query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, │\r\n│ 499 │ │ │\r\n│ 500 │ │ # get key/value states │\r\n│ 501 │ │ key_states = project( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1212 in _call_impl │\r\n│ │\r\n│ 1209 │ │ │ bw_hook = hooks.BackwardHook(self, full_backward_hooks) │\r\n│ 1210 │ │ │ input = bw_hook.setup_input_hook(input) │\r\n│ 1211 │ │ │\r\n│ ❱ 1212 │ │ result = forward_call(*input, **kwargs) │\r\n│ 1213 │ │ if _global_forward_hooks or self._forward_hooks: │\r\n│ 1214 │ │ │ for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()) │\r\n│ 1215 │ │ │ │ hook_result = hook(self, input, result) │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py:114 in forward │\r\n│ │\r\n│ 111 │ │ │ init.uniform_(self.bias, -bound, bound) │\r\n│ 112 │ │\r\n│ 113 │ def forward(self, input: Tensor) -> Tensor: │\r\n│ ❱ 114 │ │ return F.linear(input, self.weight, self.bias) │\r\n│ 115 │ │\r\n│ 116 │ def extra_repr(self) -> str: │\r\n│ 117 │ │ return 'in_features={}, out_features={}, bias={}'.format( │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:116 in zero3_linear_wrap │\r\n│ │\r\n│ 113 │\r\n│ 114 def zero3_linear_wrap(input, weight, bias=None): │\r\n│ 115 │ if bias is None: │\r\n│ ❱ 116 │ │ return LinearFunctionForZeroStage3.apply(input, weight) │\r\n│ 117 │ else: │\r\n│ 118 │ │ return LinearFunctionForZeroStage3.apply(input, weight, bias) │\r\n│ 119 │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py:97 in decorate_fwd │\r\n│ │\r\n│ 94 │ def decorate_fwd(*args, **kwargs): │\r\n│ 95 │ │ if cast_inputs is None: │\r\n│ 96 │ │ │ args[0]._fwd_used_autocast = torch.is_autocast_enabled() │\r\n│ ❱ 97 │ │ │ return fwd(*args, **kwargs) │\r\n│ 98 │ │ else: │\r\n│ 99 │ │ │ autocast_context = torch.is_autocast_enabled() │\r\n│ 100 │ │ │ args[0]._fwd_used_autocast = False │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/deepspeed/runtime/zero/linear.py:61 in forward │\r\n│ │\r\n│ 58 │ │ │ # fused op is marginally faster │\r\n│ 59 │ │ │ ret = torch.addmm(bias, input, weight.t()) │\r\n│ 60 │ │ else: │\r\n│ ❱ 61 │ │ │ output = input.matmul(weight.t()) │\r\n│ 62 │ │ │ if bias is not None: │\r\n│ 63 │ │ │ │ output += bias │\r\n│ 64 │ │ │ ret = output │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nRuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`\r\n```",
"super! I'm able to reproduce this on a single gpu and without deepspeed, so deepspeed is not at fault here.\r\n\r\nSo drop deepspeed, switch to a single gpu and step through with debugger through the first training step.\r\n\r\nnow using a single gpu and removing deepspeed completely and you will get the same problem.\r\n\r\nThe problem is indicated by multiple lines of:\r\n\r\n```\r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n```\r\n\r\nand that usually indicates a bug in the code wrt to tensor indices. Either in your custom code or the trainer.\r\n\r\nSo after removing deepspeed congif, run otherwise the same cmd line (you can continue using the `deepspeed` launcher - it has nothing to do with the deepspeed integration)\r\n\r\n```\r\nrm -rf save_dir; CUDA_LAUNCH_BLOCKING=1 CUDA_VISIBLE_DEVICES=0 deepspeed train.py --name ul2-test --model yhavinga/ul2-base-en-nl --train demo_train.json --val demo_val.json --max_input_len 128 --max_target_len 512 --save_dir save_dir --num_epochs 3 --learning_rate 2e-4 --eval_steps 3000 --gradient_accumulation_steps 8 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --bf16\r\n```\r\n\r\nand you start getting a usable traceback:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"train.py\", line 167, in <module>\r\n trained_model = trainer.train_model()\r\n File \"train.py\", line 80, in train_model\r\n self.trainer.train()\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 1557, in train\r\n return inner_training_loop(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 1808, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 2561, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 2593, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/parallel/distributed.py\", line 1040, in forward\r\n output = self._run_ddp_forward(*inputs, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/parallel/distributed.py\", line 1000, in _run_ddp_forward\r\n return module_to_run(*inputs[0], **kwargs[0])\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/models/t5/modeling_t5.py\", line 1623, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/models/t5/modeling_t5.py\", line 1000, in forward\r\n hidden_states = self.dropout(inputs_embeds)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/dropout.py\", line 59, in forward\r\n return F.dropout(input, self.p, self.training, self.inplace)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/functional.py\", line 1252, in dropout\r\n return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)\r\nRuntimeError: philox_cuda_state for an unexpected CUDA generator used during capture. In regions captured by CUDA graphs, you may only use the default CUDA RNG generator on the device that's current when capture begins. If you need a non-default (user-supplied) generator, or a generator on another device, please file an issue.\r\n```\r\n\r\nSo the failure appears to be inside `dropout` - Unless you'd like to spend some time with debugger and get to the root of it, it's probably the best to close this issue and start a new one now devoid of deepspeed, and providing all the repro details in the OP and ask the t5 maintainers to figure it out. Most likely it has something to do with the shapes of the tensors or shape manipulation - it's hard to tell w/o a closer look.\r\n\r\nI'm currently working on another project, so always happy to jump in on a deepspeed issue which are very rare, but won't have time at the moment to work on other issues.",
"I found one report with the same error, but I'm not sure if it's related: https://github.com/pytorch/pytorch/issues/91950\r\n\r\nI also was able to reproduce this issue with pt-1.10 and 1.11 - so it's unlikely to be a recent pytorch issue. almost certain something is off in the code.\r\n",
"Thank you @stas00 ",
"I'm a sucker for a difficult problem, here you go, I stepped with debugger. Have a look at the snapshot - your input_ids are a way too too big:\r\n\r\n",
"Thank you. I see -- how is that possible? Do you think it's a bf16 issue?\r\n\r\nUpdate: the inputs seem to be fine on my end:\r\n\r\n```\r\n{'input_ids': tensor([[ 1150, 268, 2522, 267, 1231, 3634, 263, 32132, 3634, 334, \r\n 3113, 264, 314, 279, 321, 316, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[430\r\n6, 264, 314, 279, 321, 316, 1]])} \r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. \r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. \r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. \r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. \r\n../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [43,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. \r\n../aten/src/ATen/native/cuda/Indexing\r\n```",
"@younesbelkada Could you take a look please? UL2 is broken by a script that works for flan-t5 and other seq2seq models.",
"yes, they are ok at the `outputs = model(**inputs)` frame and then are borked at the point of dropout, but this happens much sooner,. I will have a look.\r\n\r\nIt breaks somewhere inside `T5Stack.forward`",
"ok, it has to do with the size of the embedding matrix. In this case it's `32128x768`\r\n\r\nbut your `input_ids` contain higher numbers than `32128-1`:\r\n\r\n```\r\nprint(max(input_ids.flatten()))\r\n```\r\n\r\ngives `32132`\r\n\r\nif I hack your code to do:\r\n\r\n```\r\n input_ids = input_ids % 32127\r\n```\r\n\r\nthen everything works. \r\n\r\nNow that you understand what the problem is I trust you can unravel the rest?\r\n\r\nMost likely your tokenizer vocab isn't matching the vocab dimension of the embedding matrix.\r\n\r\nIt's sad that pytorch doesn't give a user friendly error. edit: actually it does on cpu, but not on cuda.\r\n\r\np.s. and the corrupt huge `input_ids` happened because pytorch blew its head off, but due to the default async nature the body was still thinking it owned a head. That `indexSelectLargeIndex` cuda error is where things broke first and not where the traceback was showing.\r\n\r\nThe blowup happened here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/bc44e947f371924db854a460484ec46c95e50a35/src/transformers/models/t5/modeling_t5.py#L954-L956",
"The other debug technique is to make gpus disappear and run on cpu, using `CUDA_VISIBLE_DEVICES=\"\"` env var setting. Usually then you get much better errors.\r\n\r\nBut not all programs will transparently be able to handle this transition. in the case of your program it doesn't work due to hardcoded gpu code. and some custom gpu kernels will of course not run on cpu.\r\n",
"Thank you so much, Stas!",
"Funny enough, there still is an issue with google/ul2 (the 20B param model) even though the smaller one runs fine now.\r\n\r\n```\r\n[W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent) \r\nterminate called after throwing an instance of 'c10::Error' \r\n what(): CUDA error: an illegal memory access was encountered \r\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. \r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1. \r\n```\r\n\r\nCould you please take another look?",
"but what did you change to fix the smaller one? I hope you didn't use my `%` hack - it was just to show you what the problem was - it of course wasn't meant to be a solution - apologies if it wasn't obvious.\r\n\r\nthe larger model is most likely has a different vocab size, so you really need to figure out your setup to read the config correctly and get the tokenizer set up right - usually this is mostly done for you, but this is where you'd check since you wrote your custom code.\r\n\r\nFirst make this small model work correctly w/o hardcoding any numbers - then move onto the large one and most likely it'll just work.",
"I'm requesting to make this recurring experience of embedding lookup explosion on cuda to be less painful for the users here: https://github.com/pytorch/pytorch/issues/93880\r\n",
"I called `model.resize_token_embeddings(len(tokenizer))` (which I think is a more general solution than the % hack) and it worked on the smaller model. It doesn't work on the larger model, which has the same vocabulary size of 32128. The `CUDA error: an illegal memory access was encountered` on the larger model was always different than the one seen on the smaller model. I think something else is going on here.",
"It's very possible that you have a multitude of errors. Please ensure that you use the fixed version that you validated working with the smaller model.\r\n\r\nI think I have already asked you to show me the full traceback with `CUDA_LAUNCH_BLOCKING=1` and it wasn't telling anything useful. this feature is also broken in the recent NCCL versions.\r\n\r\ncan you share the fixed code?\r\n"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
transformers version==4.26.0
torch==1.13.1
deepspeed==0.8
hardware: 8x A100-80GB
Fine-tuning UL2 with the Huggingface Trainer and DeepSpeed Zero2 or Zero3 results in a CUDA Illegal Memory Exception. This is true with any Huggingface Trainer script, PyTorch version (1.12 and 1.113), DeepSpeed version (0.6.7, 0.7.7, 0.8), and CUDA version (11.3 and 11.8) that I've tried. The same scripts work just fine with flan-t5-xxl.
```
[W CUDAGuardImpl.h:124] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
Any thoughts @stas00? Your help would be appreciated.
### Who can help?
@stas00
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try fine-tuning UL2 on any task/dataset using DeepSpeed Zero2/Zero3. You should encounter the error.
### Expected behavior
Training proceeds normally.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21378/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21377
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21377/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21377/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21377/events
|
https://github.com/huggingface/transformers/pull/21377
| 1,563,282,563
|
PR_kwDOCUB6oc5I3Bpl
| 21,377
|
Added model resources for LayoutLM Issue#19848
|
{
"login": "avisinghal6",
"id": 97785770,
"node_id": "U_kgDOBdQXqg",
"avatar_url": "https://avatars.githubusercontent.com/u/97785770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avisinghal6",
"html_url": "https://github.com/avisinghal6",
"followers_url": "https://api.github.com/users/avisinghal6/followers",
"following_url": "https://api.github.com/users/avisinghal6/following{/other_user}",
"gists_url": "https://api.github.com/users/avisinghal6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avisinghal6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avisinghal6/subscriptions",
"organizations_url": "https://api.github.com/users/avisinghal6/orgs",
"repos_url": "https://api.github.com/users/avisinghal6/repos",
"events_url": "https://api.github.com/users/avisinghal6/events{/privacy}",
"received_events_url": "https://api.github.com/users/avisinghal6/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It looks like you might've only updated your branch with the latest changes on the `main` Hugging Face repo, I don't see any of the changes! 🙈 ",
"Hi @stevhliu, you were right, i only updated my branch (sorry my bad ), i think i have committed the changes now. Sorry for this.",
"Awesome, thank you so much for your contribution! 🤗 Everything looks good to me, pinging @sgugger for a final look!"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
Added resources to documentation of LayoutLM model as per Issue#19848.
@stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21377/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21377",
"html_url": "https://github.com/huggingface/transformers/pull/21377",
"diff_url": "https://github.com/huggingface/transformers/pull/21377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21377.patch",
"merged_at": 1675432396000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21376
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21376/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21376/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21376/events
|
https://github.com/huggingface/transformers/issues/21376
| 1,563,001,236
|
I_kwDOCUB6oc5dKYGU
| 21,376
|
Lazy loading models on systems with more VRAM than RAM
|
{
"login": "oobabooga",
"id": 112222186,
"node_id": "U_kgDOBrBf6g",
"avatar_url": "https://avatars.githubusercontent.com/u/112222186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oobabooga",
"html_url": "https://github.com/oobabooga",
"followers_url": "https://api.github.com/users/oobabooga/followers",
"following_url": "https://api.github.com/users/oobabooga/following{/other_user}",
"gists_url": "https://api.github.com/users/oobabooga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oobabooga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oobabooga/subscriptions",
"organizations_url": "https://api.github.com/users/oobabooga/orgs",
"repos_url": "https://api.github.com/users/oobabooga/repos",
"events_url": "https://api.github.com/users/oobabooga/events{/privacy}",
"received_events_url": "https://api.github.com/users/oobabooga/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 5151155822,
"node_id": "LA_kwDOCUB6oc8AAAABMwhmbg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Big%20Model%20Inference",
"name": "Big Model Inference",
"color": "006b75",
"default": false,
"description": "Problems related to the Big Model Inference capabilities provided by Accelerate"
}
] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Could you please share a snippet of code that fails on such an env with `device_map=\"auto\"` sent to `from_pretrained`? This loads the model directly on the GPU (as long as there is enough space) so this should work for your use case.",
"Surely, here is a snippet that causes an out of memory error on Google Colab (the free instance with 12.7GB RAM and 15GB VRAM):\r\n\r\n```\r\n!pip install -U accelerate transformers\r\n\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", device_map='auto')\r\n\r\n```\r\n\r\nI have tried every possible combination of `.cuda()` and `low_cpu_mem_usage=True`:\r\n\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", device_map='auto')\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", device_map='auto').cuda()\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", low_cpu_mem_usage=True, device_map='auto')\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", low_cpu_mem_usage=True, device_map='auto').cuda()\r\n\r\n```\r\n\r\nIn all cases, the RAM usage steadily increases until it passes the 12GB mark and the Colab session crashes. On my machine, this model uses 11653.7 GiB VRAM and 2605.79 GiB RAM once fully loaded to the GPU, so in principle it should be possible to load it on Colab.",
"I think you are missing a `torch_dtype=torch.float16` or `torch_dtype=torch.bfloat16` to get to 12GB of use. Otherwise the model will need 24GB of memory if it has 6b parameters (the default torch dtype in PyTorch being float32).",
"You are correct, both of these allow me to load the model successfully:\r\n\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", low_cpu_mem_usage=True, device_map='auto', torch_dtype=torch.float16)\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", device_map='auto', torch_dtype=torch.float16)\r\n```\r\n\r\nBut with these, the RAM usage *after the model is loaded* is very high: 12.2GB out of a total of 12.7GB. This makes the session very unstable and prone to crashing if other libraries are imported.\r\n\r\nIs this high RAM usage normal? Can it be avoided?",
"Can you try to see if adding a layer of garbage collector helps?\r\n```py\r\nimport gc\r\n\r\ngc.collect()\r\n```\r\nThere is no reason for the CPU RAM to be used once the model is fully loaded on the GPU.",
"I did try `gc.collect()` earlier today and that didn't release the CPU RAM memory. Now I tried to repeat the experiment just to make sure, and I couldn't even load the model because the \r\n\r\n`model = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", low_cpu_mem_usage=True, device_map='auto', torch_dtype=torch.float16)`\r\n\r\ncall made the Colab session crash after running out of RAM.",
"After loading the model with the command above, doing this releases the VRAM but not the RAM:\r\n\r\n```\r\nimport gc\r\n\r\nmodel = None\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n```\r\n\r\nThis looks exactly like https://github.com/huggingface/transformers/issues/21094. Are these two bugs related?",
"I've recreated it, report as follows:\r\n\r\n(`available_memory` returns the `%` of memory available)\r\n\r\nWorking as expected (w/o big model inference, hooks, etc)\r\n```python\r\n>>> import psutil, torch\r\n>>> from transformers import AutoModelForCausalLM\r\n>>> available_memory = lambda: psutil.virtual_memory().available * 100 / psutil.virtual_memory().total\r\n>>> available_memory()\r\n97.8753999829287\r\n>>> model = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", low_cpu_mem_usage=True)\r\n>>> available_memory()\r\n69.87882027448968\r\n>>> model = None()\r\n>>> import gc\r\n>>> gc.collect()\r\n>>> available_memory()\r\n97.28031713868933\r\n```\r\nIssue:\r\n\r\n```python\r\n>>> available_memory()\r\n97.28031713868933\r\n>>> model = AutoModelForCausalLM.from_pretrained(\r\n... \"PygmalionAI/pygmalion-6b\", \r\n... low_cpu_mem_usage=True, \r\n... device_map='auto', \r\n... torch_dtype=torch.float16\r\n... )\r\n>>> available_memory()\r\n95.77584944795181\r\n>>> model = None\r\n>>> gc.collect()\r\n>>> torch.cuda.empty_cache()\r\n>>> available_memory()\r\n95.73520915357973\r\n```\r\nNote the fact that basically no memory was released here (on multiple repeated checks the RAM hit 95.77%)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I think that lazy loading models would be an important addition to `transformers` in the context of loading models to Google Colab, but I am not sure how doable it is.\r\n\r\nA workaround for now is to reshard the models.",
"Mmm, diving into the reproducer @muellerzr, it looks like memory is not released by PyTorch when moving the model to a device:\r\n```\r\nimport psutil, torch\r\nfrom transformers import AutoModelForCausalLM\r\navailable_memory = lambda: psutil.virtual_memory().available * 100 / psutil.virtual_memory().total\r\nprint(available_memory())\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"PygmalionAI/pygmalion-6b\", low_cpu_mem_usage=True)\r\nmodel = model.to(0)\r\nprint(available_memory())\r\n\r\ndel model\r\nimport gc\r\ngc.collect()\r\nprint(available_memory())\r\n```\r\nshows no memory is released.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"From the discussion, it seems to me that lazy loading is not the only issue. One also wants to garbage collect parts of the state dict that are no longer in use. \r\n\r\nFor the use-case of apply model deltas, this requires streaming out the updated model weights rather than waiting for all the deltas to be applied.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,684
| 1,684
|
CONTRIBUTOR
| null |
### Feature request
I would like the ability to lazy load models to the GPU using `AutoModelForCausalLM.from_pretrained`.
At the moment, it is possible to reduce the RAM usage using the `low_cpu_mem_usage=True` option, but on systems with more VRAM than RAM (like Google Colab with 12GB RAM and 16GB VRAM), it is not possible to load certain models due to a RAM bottleneck.
### Motivation
See above
### Your contribution
--
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21376/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21376/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21375
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21375/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21375/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21375/events
|
https://github.com/huggingface/transformers/issues/21375
| 1,562,915,540
|
I_kwDOCUB6oc5dKDLU
| 21,375
|
Mismatch of tensor shapes in CrossEntropyLoss for custom head layer in BART
|
{
"login": "CodingSaturn",
"id": 51314378,
"node_id": "MDQ6VXNlcjUxMzE0Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/51314378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CodingSaturn",
"html_url": "https://github.com/CodingSaturn",
"followers_url": "https://api.github.com/users/CodingSaturn/followers",
"following_url": "https://api.github.com/users/CodingSaturn/following{/other_user}",
"gists_url": "https://api.github.com/users/CodingSaturn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CodingSaturn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CodingSaturn/subscriptions",
"organizations_url": "https://api.github.com/users/CodingSaturn/orgs",
"repos_url": "https://api.github.com/users/CodingSaturn/repos",
"events_url": "https://api.github.com/users/CodingSaturn/events{/privacy}",
"received_events_url": "https://api.github.com/users/CodingSaturn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should try the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.",
"Oh I am very sorry, I moved the question to the forum. Thank your for the hint!"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
Hi,
so far I've been working with the BartForConditionalGeneration. Now I want to use a custom head layer instead.
In a linear layer after the base models decoder, I want to input the output of the base bart model and additional some numerical data similar to the code [here](https://colab.research.google.com/drive/1ZLfcB16Et9U2V-udrw8zwrfChFCIhomz?usp=sharing#scrollTo=m-TTyOMJOGBD). Following this I came up with the following forward function:
```
def forward(self, input_ids, tokens, **kwargs):
labels = kwargs.get('labels')
attn_mask = kwargs.get('attention_mask')
out = self.model_base(input_ids, attention_mask=attn_mask)
token_features = tokens.unsqueeze(1)
concat= torch.concat((out[0][:, 0, :], token_features), dim=-1)
out_lin = self.custom_layer(concat)
loss_fct = torch.nn.CrossEntropyLoss()
masked_lm_loss = loss_fct(out_lin.view(-1, self.model.config.vocab_size), labels.view(-1))
```
where out_lin is the following linear layer:
```
self.custom_layer = torch.nn.Linear(in_features = self.hidden_dim + self.token_dim, out_features = self.model.config.vocab_size)
```
For the loss function I took orientation from the original code for the [BartForConditionalGeneration](https://huggingface.co/transformers/v2.11.0/_modules/transformers/modeling_bart.html#BartForConditionalGeneration):
```
outputs = self.model(
input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
encoder_outputs=encoder_outputs,
decoder_attention_mask=decoder_attention_mask,
decoder_cached_states=decoder_cached_states,
use_cache=use_cache,
)
lm_logits = F.linear(outputs[0], self.model.shared.weight, bias=self.final_logits_bias)
outputs = (lm_logits,) + outputs[1:] # Add cache, hidden states and attention if they are here
if lm_labels is not None:
loss_fct = nn.CrossEntropyLoss()
# TODO(SS): do we need to ignore pad tokens in lm_labels?
masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), lm_labels.view(-1))
outputs = (masked_lm_loss,) + outputs
return outputs
```
The error I obtain is
```
---------------------------------------------------------------------------
File c:\Users\M\Anaconda\envs\simp_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:579, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
577 raise TypeError(f"`Trainer.fit()` requires a `LightningModule`, got: {model.__class__.__qualname__}")
578 self.strategy._lightning_module = model
--> 579 call._call_and_handle_interrupt(
580 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
581 )
File c:\Users\M\Anaconda\envs\simp_env\lib\site-packages\pytorch_lightning\trainer\call.py:38, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
36 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
37 else:
---> 38 return trainer_fn(*args, **kwargs)
40 except _TunerExitException:
...
3024 if size_average is not None or reduce is not None:
3025 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3026 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (6) to match target batch_size (1536).
```
So far I understand that 6 is the batch size of my data output from the custom linear layer (which is torch.Size([6, 50267]) where 50267 is the self.final_logits_bias/vocab_size). My labels have the shape torch.Size([6, 256]) which when flattened leads to the 1536.
As my labels have the same shape as before and my layer seems to me the same as the one from the ConditionalGenerationModel which I used before, I am unsure why I suddenly receive this size incompatibility issue, when I did not before.
Furthermore, I am unsure why the first code referenced here uses only hidden_states.last_hidden_state[:, 0, :] so only batch_size and hidden_size but not the sequence length. Without it my data has the shape torch.Size([6, 256, 768]).
I would be thankful for any guidance on how to make the tensors compatible and make the custom layer work.
Did I misunderstand the examples mentioned above?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21375/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21374
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21374/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21374/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21374/events
|
https://github.com/huggingface/transformers/issues/21374
| 1,562,814,922
|
I_kwDOCUB6oc5dJqnK
| 21,374
|
decoder_hidden_states output inconsistent when generating with SpeechEncoderDecoder models
|
{
"login": "valentinp72",
"id": 2760679,
"node_id": "MDQ6VXNlcjI3NjA2Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2760679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valentinp72",
"html_url": "https://github.com/valentinp72",
"followers_url": "https://api.github.com/users/valentinp72/followers",
"following_url": "https://api.github.com/users/valentinp72/following{/other_user}",
"gists_url": "https://api.github.com/users/valentinp72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valentinp72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valentinp72/subscriptions",
"organizations_url": "https://api.github.com/users/valentinp72/orgs",
"repos_url": "https://api.github.com/users/valentinp72/repos",
"events_url": "https://api.github.com/users/valentinp72/events{/privacy}",
"received_events_url": "https://api.github.com/users/valentinp72/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @valentinp72 👋 \r\n\r\nI don't see a cause for a bug, but I am aware that our docstrings are in need of improvements :) \r\n\r\nAllow me to elaborate:\r\n1. Greedy Search: your output (`sequences`) will be `[batch_size, generated_tokens + 1]`, and the `decoder_hidden_states` will have length `generated_tokens`. `sequences` has the `+1` because the output sequence contains the BOS token, which is set before the first forward pass of the model, so there are no `decoder_hidden_states` for that token.\r\n2. Beam Search: Here it's trickier. In essence, beam search looks for candidate outputs until it hits a stopping condition. The candidate outputs can have fewer tokens than the total number of generation steps -- for instance, in an encoder-decoder text model, if your input is `How much is 2 + 2?` and the model generates as candidates `<BOS>4<EOS>` (3 tokens) and `<BOS>The answer is potato<EOS>` (for argument's sake, 6 tokens) before deciding to stop, you should see `sequences` with shape `[1, 3]` and `decoder_hidden_states` with length `5`, because `5` tokens were generated internally before settling on the 1st candidate.\r\n\r\nDoes it make more sense now? 🤗 ",
"Oh I see!\r\nIndeed, the docs could be improved :)\r\n\r\nSo, if I'd want to have the *real* hidden states associated with the first candidate (in beam search), eg. `<BOS>4<EOS>`, I would need to extract only the first 3 hidden states from `decoder_hidden_states`?\r\n\r\nFor example, that should be correct, assuming the `decoder_hidden_states` are sorted the same way the candidates are ordered? (ie. `[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]` for a batch size of `10` and a beam width of `2`.)\r\n```python\r\nbsz = generated_beam_search['sequences'].shape[0]\r\nseqlen = generated_beam_search['sequences'].shape[1]\r\nfeatdim = 1024\r\ndecoder_hidden_states = torch.zeros((bsz, seqlen, featdim)).to(device)\r\n\r\nfor i in range(seqlen):\r\n all_beams = generated_beam_search['decoder_hidden_states'][i][-1][:,0,:]\r\n # filtering the hidden states to only the first candidate for each sample\r\n only_first = all_beams.index_select(\r\n dim=0, index=torch.tensor([x for x in range(0, bsz * beam_width, beam_width)]).to(device)\r\n )\r\n decoder_hidden_states[:,i] = only_first\r\n```\r\n\r\nIf I'm correct, I think an option to return the hidden states in a tensor format (instead of tuples of tensors) according the output candidates could be nice, for both greedy and beam search decoding. ",
"@valentinp72 \r\n\r\n> So, if I'd want to have the real hidden states associated with the first candidate (in beam search), eg. `<BOS>4<EOS>`, I would need to extract only the first 3 hidden states from `decoder_hidden_states`?\r\n\r\nAlmost correct! The first 2 hidden states (one for `4`, another for `<EOS>`. `<BOS>` has no corresponding hidden states)\r\n\r\nAs for the exact methodology to extract the right values from `decoder_hidden_states` with beam search, the plot thickens 😅 It is the same problem as extracting the token-level scores from the `scores` output in beam search -- see [this function](https://github.com/huggingface/transformers/blob/42b60f8b02941b0c40c42e150a101eb372c3856e/src/transformers/generation/utils.py#L927) and its examples. If you replace `scores` by `decoder_hidden_states`, it should be very close to what you want.\r\n\r\nIn a nutshell, the index of the n-th output sequence in beam search changes over the course of its execution. The output sequence with index 0 may correspond to the sequence with index 1 at the 1st generation step, the sequence with index 5 at the 2nd generation step, and so on. `beam_indices` contains the index of each output sequence at each beam search step, from which you can de-scramble `decoder_hidden_states`.",
"Thank you.\r\nI've adapted the `compute_transition_scores` to what I want, but I still have one doubt about the contents of `beam_indices`.\r\n\r\nHere is an example generated by the beam search:\r\n```\r\ntensor([[ 0, 0, 0, 3, 3, 1, 0, 2, 1, 0, 3, 0, 0, -1],\r\n [ 5, 5, 5, 8, 6, 5, 5, 7, 6, 6, 6, 5, 5, -1],\r\n [10, 10, 10, 13, 12, 10, 10, 10, 10, 10, 12, 10, 10, -1],\r\n [15, 15, 15, 18, 17, 15, 15, 16, 16, 15, 17, 15, 15, -1],\r\n [20, 20, 20, 23, 21, 20, 22, 21, 21, 20, 20, 20, 0, -1],\r\n [25, 25, 25, 28, 27, 26, 26, 27, 27, 26, 29, 26, 25, -1],\r\n [30, 30, 30, 33, 32, 31, 30, 32, 32, 32, 32, 30, 30, -1],\r\n [35, 35, 35, 38, 38, 36, 35, 37, 37, 36, 38, 35, 35, -1],\r\n [40, 40, 40, 43, 42, 40, 40, 40, 40, 40, 42, 40, 40, -1],\r\n [45, 45, 45, 48, 49, 46, 46, 47, 46, 45, 47, 45, 45, -1]],\r\n device='cuda:0')\r\n```\r\n\r\nIf we take the first row (first sequence), does it means that the correct hidden states for that sequence can be found at indexes `0, 0, 0, 3, 3, 1, 0, 2, 1, 0, 3, 0, 0`? If so, why, for the 5th sequence, we have the index `0` for the last hidden states? Shouldn't each beam be independent?",
"@valentinp72 \r\n\r\n> If so, why, for the 5th sequence, we have the index 0 for the last hidden states? Shouldn't each beam be independent?\r\n\r\nThey should 👀 can you share the snippet that leads to those `beam_indices`? That may be a bug. ",
"Now I'm no longer able to reproduce this error.\r\nI think it was due to a bug on my own, while implementing my function I might have executed it twice, leading to (some?) -1 being replaced by 0.\r\n\r\nI'm closing this issue as it seems my function that extracts the hidden representations works. I'm sharing it below if others needs it:\r\n\r\n```python\r\ndef extract_decoder_hidden_states(\r\n generate_output_dict,\r\n hidden_layer_idx=-1,\r\n):\r\n \"\"\"\r\n Extracts the decoder hidden states representation from\r\n GreedySearchEncoderDecoderOutput and BeamSearchEncoderDecoderOutput,\r\n associated with the `sequences` output.\r\n - generate_output_dict: output dict from the model.generate() method\r\n you should add the following arguments to generate:\r\n - output_hidden_states=True\r\n - output_scores=True\r\n - return_dict_in_generate=True\r\n - hidden_layer_idx: index of the layer to extract the representation from (-1 == last one)\r\n \"\"\"\r\n greedy = isinstance(generate_output_dict, GreedySearchEncoderDecoderOutput)\r\n beamy = isinstance(generate_output_dict, BeamSearchEncoderDecoderOutput)\r\n\r\n if greedy:\r\n # in greedy decoding, the beam_indices is not present, so we create one\r\n # where the first beam is always selected\r\n scores = generate_output_dict['scores']\r\n device = generate_output_dict['sequences'].device\r\n beam_indices = torch.arange(scores[0].shape[0]).view(-1, 1)\r\n beam_indices = beam_indices.expand(-1, len(scores)).to(device)\r\n elif beamy:\r\n if 'beam_indices' not in generate_output_dict:\r\n raise RuntimeError(\r\n \"You should export the scores with output_scores=True when \" \\\r\n \"calling extract_decoder_hidden_states with \" \\\r\n \"BeamSearchEncoderDecoderOutput\"\r\n )\r\n beam_indices = generate_output_dict['beam_indices'].clone()\r\n else:\r\n raise NotImplementedError(\r\n \"extract_decoder_hidden_states only works with \" \\\r\n \"GreedySearchEncoderDecoderOutput and BeamSearchEncoderDecoderOutput \" \\\r\n \"output types.\"\r\n )\r\n\r\n # handling of the target length and preparing the masking for tokens\r\n # outside of that length\r\n beam_indices_mask = beam_indices < 0\r\n max_beam_length = (1 - beam_indices_mask.long()).sum(-1).max()\r\n beam_indices = beam_indices[:, :max_beam_length]\r\n beam_indices_mask = beam_indices_mask[:, :max_beam_length]\r\n beam_indices[beam_indices_mask] = 0\r\n\r\n seqlen = generate_output_dict['sequences'].shape[1] - 1\r\n\r\n # creating the output hidden_states representation in format:\r\n # [bsz * beam_width ; seqlen ; featdim]\r\n decoder_hidden_states = torch.stack([\r\n generate_output_dict['decoder_hidden_states'][i][hidden_layer_idx][:,0,:].index_select(\r\n dim=0,\r\n index=beam_indices[:,i] # reordering using the beam_indices\r\n )\r\n for i in range(seqlen)\r\n ]).transpose(0, 1)\r\n\r\n # setting to 0 the hidden_states were it doesn't make sense to have an output\r\n decoder_hidden_states[beam_indices_mask] = 0\r\n\r\n return decoder_hidden_states\r\n```"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-4.18.0-372.36.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.12
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (a100)
- Using distributed or parallel set-up in script?: yes, but script below produces the same thing with one GPU
### Who can help?
@sanchit-gandhi @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When decoding a `SpeechEncoderDecoderModel`, we can output the decoder hidden states with the args `output_hidden_states` and `return_dict_in_generate` to `True`.
However, the output lengths are not consistent between the beam search decoding and greedy decoding, and between the output sequences and decoder hidden states.
The following example is using `wav2vec2-xls-r-300m-en-to-15`, and shows this inconsistency:
```python
from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
import numpy as np
folder = "" # custom folder where the model are stored since we don't have internet on the compute nodes
device = 'cuda:0'
model = SpeechEncoderDecoderModel.from_pretrained(folder + "facebook/wav2vec2-xls-r-300m-en-to-15").to(device)
processor = Wav2Vec2Processor.from_pretrained(folder + "facebook/wav2vec2-xls-r-300m-en-to-15")
# generating dummy data with variable lengths
audios = [
np.random.random((17_000 + i,))
for i in range(10) # bsz of 10
]
input_values = processor(
audios, return_tensors="pt", padding=True, sampling_rate=16_000,
).input_values.to(device)
# common parameters to both greedy and beam search decoding
common_params = {
'max_new_tokens': 200,
'output_hidden_states': True,
'output_scores': True,
'return_dict_in_generate': True
}
####
print("Greedy decoding:")
generated_greedy = model.generate(
input_values,
num_beams=1,
**common_params
)
print(" sequences shape: ", generated_greedy['sequences'].shape)
print(" decoder_hidden_states len: ", len(generated_greedy['decoder_hidden_states']))
####
print("Beam search decoding:")
generated_beam_search = model.generate(
input_values,
num_beams=2,
**common_params
)
print(" sequences shape: ", generated_beam_search['sequences'].shape)
print(" decoder_hidden_states len: ", len(generated_beam_search['decoder_hidden_states']))
```
The output of that script is:
```
Greedy decoding:
sequences shape: torch.Size([10, 3])
decoder_hidden_states len: 2
Beam search decoding:
sequences shape: torch.Size([10, 3])
decoder_hidden_states len: 39
```
Following the documentation for [GreedySearchEncoderDecoderOutput](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.generation.GreedySearchEncoderDecoderOutput.decoder_hidden_states) and [BeamSearchEncoderDecoderOutput](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.generation.BeamSearchEncoderDecoderOutput.decoder_hidden_states):
- **greedy**: decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
- **beam search**: decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
I think the length of the tuple (one element for each generated token) should be the same as the `sequence_length`.
*PS: it seems that when the audio is long enough so that the output is capped at `max_new_tokens`, the sequences length is `max_new_tokens + 1`, and the hidden states `max_new_tokens`.*
### Expected behavior
The sequences output should be the same length as the decoder_hidden_states length, like:
```
Greedy decoding:
sequences shape: torch.Size([10, 3])
decoder_hidden_states len: 3
Beam search decoding:
sequences shape: torch.Size([10, 3])
decoder_hidden_states len: 3
```
or
```
Greedy decoding:
sequences shape: torch.Size([10, 2])
decoder_hidden_states len: 2
Beam search decoding:
sequences shape: torch.Size([10, 39])
decoder_hidden_states len: 39
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21374/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21373
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21373/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21373/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21373/events
|
https://github.com/huggingface/transformers/issues/21373
| 1,562,801,101
|
I_kwDOCUB6oc5dJnPN
| 21,373
|
Error Multi-Node Training with Deepspeed
|
{
"login": "ShivamSharma2705",
"id": 94197666,
"node_id": "U_kgDOBZ1Xog",
"avatar_url": "https://avatars.githubusercontent.com/u/94197666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivamSharma2705",
"html_url": "https://github.com/ShivamSharma2705",
"followers_url": "https://api.github.com/users/ShivamSharma2705/followers",
"following_url": "https://api.github.com/users/ShivamSharma2705/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivamSharma2705/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivamSharma2705/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivamSharma2705/subscriptions",
"organizations_url": "https://api.github.com/users/ShivamSharma2705/orgs",
"repos_url": "https://api.github.com/users/ShivamSharma2705/repos",
"events_url": "https://api.github.com/users/ShivamSharma2705/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivamSharma2705/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"30min is the timeout for any NCCL operations.\r\n\r\nI assume it never started training? Was it doing anything during the 30min, like pre-processing a dataset?\r\n\r\nplease run again and do\r\n```\r\npip install py-spy\r\npy-spy dump --pid PID \r\n```\r\nPID of the process that is stuck.\r\n\r\nAnd of course your Issue doesn't tell us anything about how to reproduce it or even which program you had a problem with.",
"The code was stuck at this\r\n\r\nnode1: [2023-01-30 08:34:30,454] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in D$\r\nepSpeed with backend nccl\r\n\r\nThe output for the py-spy command for node 1 is \r\nThread 39975 (idle): \"MainThread\"\r\n _try_wait (subprocess.py:1764)\r\n _wait (subprocess.py:1806)\r\n wait (subprocess.py:1083)\r\n main (deepspeed/launcher/runner.py:522)\r\n <module> (deepspeed:6)\r\n\r\nTo reproduce just run [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) with deepspeed and in the hostfile add two nodes as follows\r\n\r\nnode1 slots=1\r\nnode2 slots=2",
"thank you, yes, so you're not launching deepspeed properly. \r\n\r\nwhat is `node1`? it has to be an actual hostname, what is `slots=1`\r\n\r\nwhat is the actual setup - you have 2 nodes with 2 gpus each? then it'd be something like:\r\n\r\n```\r\nhostname1 slots=2\r\nhostname2 slots=2\r\n```\r\n\r\n",
"Yeah, two nodes with 1 GPU each\r\n\r\nhostname1 slots=1\r\nhostname2 slots=1\r\n",
"most likely you then have an ssh issue where it gets stuck in trying to connect to those nodes.\r\n\r\nAs we have cleared out that this is not an integration issue - please reopen this question at https://github.com/microsoft/DeepSpeed/issues \r\n\r\nand when you report it probably start with a simple test script so it's easier to reproduce / isolate to the deepspeed launcher. e.g. you can use this script https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-gpu-test.py but you will need to adapt your launching to the way you're trying to do (the instructions inside the script don't use `deepspeed` as the launcher)."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.8.2+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This error occurred when I was running run_clm.py script with deepspeed.
1. Configure the hostfile provided to deepspeed as follows
node1 slots=1
node2 slots=2
### Expected behavior
The run gets stuck at the following for around 30mins for the node I am running the scripts from
node1: [2023-01-30 08:34:30,454] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in D$
epSpeed with backend nccl
After 30 mins or so, I get the following error
node2: Traceback (most recent call last):
node2: File "run_clm.py", line 659, in <module>
node2: main()
node2: File "run_clm.py", line 244, in main
node2: model_args, data_args, training_args = parser.parse_args_into_dataclasses()
node2: File "/home/transformers/src/transformers/hf_argparser.py", lin$
226, in parse_args_into_dataclasses
node2: obj = dtype(**inputs)
node2: File "<string>", line 104, in __init__
node2: File "/home/transformers/src/transformers/training_args.py", li$
e 1118, in __post_init__
node2: and (self.device.type != "cuda")
node2: File "/home/transformers/src/transformers/utils/import_utils.py$
, line 1000, in wrapper
node2: return func(*args, **kwargs)
node2: File "/home/transformers/src/transformers/training_args.py", li$
e 1478, in device
node2: return self._setup_devices
node2: File "/home/transformers/src/transformers/utils/generic.py", li$
e 57, in __get__
node2: cached = self.fget(obj)
node2: File "/home/transformers/src/transformers/utils/import_utils.py$
, line 1000, in wrapper
node2: return func(*args, **kwargs)
node2: File "/home/transformers/src/transformers/training_args.py", li$
e 1413, in _setup_devices
node2: deepspeed.init_distributed()
node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/deepspeed/comm/$
omm.py", line 637, in init_distributed
node2: cdb = TorchBackend(dist_backend, timeout, init_method)
node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/deepspeed/comm/$
orch.py", line 30, in __init__
node2: self.init_process_group(backend, timeout, init_method)
node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/deepspeed/comm/$
orch.py", line 34, in init_process_group
node2: torch.distributed.init_process_group(backend,
node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/torch/distribut$
d/distributed_c10d.py", line 500, in init_process_group
node2: store, rank, world_size = next(rendezvous_iterator)
node2: File "/home/anaconda3/envs/deepspeed_hf/lib/python3.8/site-packages/torch/distribut$
d/rendezvous.py", line 190, in _env_rendezvous_handler
node2: store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
node2: RuntimeError: connect() timed out.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21373/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21372
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21372/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21372/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21372/events
|
https://github.com/huggingface/transformers/pull/21372
| 1,562,688,425
|
PR_kwDOCUB6oc5I1ASj
| 21,372
|
Add cPython files in build
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
As reported by @clefourrier the Graphormer model in the last release is not usable as is, as the cpython file containing the code for the collation of samples is not included in the built package. This PR fixes that by including the extensions (similar to what we did for custom CUDA kernels).
This will be included in the next patch release.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21372/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21372/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21372",
"html_url": "https://github.com/huggingface/transformers/pull/21372",
"diff_url": "https://github.com/huggingface/transformers/pull/21372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21372.patch",
"merged_at": 1675095570000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21371
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21371/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21371/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21371/events
|
https://github.com/huggingface/transformers/issues/21371
| 1,562,609,751
|
I_kwDOCUB6oc5dI4hX
| 21,371
|
Error while loading a model on 8bit
|
{
"login": "toma-x",
"id": 97228779,
"node_id": "U_kgDOBcuX6w",
"avatar_url": "https://avatars.githubusercontent.com/u/97228779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/toma-x",
"html_url": "https://github.com/toma-x",
"followers_url": "https://api.github.com/users/toma-x/followers",
"following_url": "https://api.github.com/users/toma-x/following{/other_user}",
"gists_url": "https://api.github.com/users/toma-x/gists{/gist_id}",
"starred_url": "https://api.github.com/users/toma-x/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/toma-x/subscriptions",
"organizations_url": "https://api.github.com/users/toma-x/orgs",
"repos_url": "https://api.github.com/users/toma-x/repos",
"events_url": "https://api.github.com/users/toma-x/events{/privacy}",
"received_events_url": "https://api.github.com/users/toma-x/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @toma-x \r\nThanks for the issue, \r\nWhat you are currently trying to do (mixing cpu + int8) is not supported yet\r\nI think that this feature should be addressed in `QuantizationConfig` in the next weeks, I will keep you updated in this issue\r\n",
"Glad to know this, looking forward to hear you soon about this @younesbelkada ",
"Hi @toma-x \r\nThis is now supported on the `main` branch of `transformers`, can you check this section of the docs? 🙏 \r\nhttps://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu ",
"Hi @younesbelkada thank you for letting me updated, I will sure take a look this is very interesting, have a great day 😁",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,681
| 1,681
|
NONE
| null |
I'm trying to run inference on a model which doesn't fit on my GPU using this code:
```from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device_map = {'transformer.wte': 0,
'transformer.drop': 0,
'transformer.h.0': 0,
'transformer.h.1': 0,
'transformer.h.2': 0,
'transformer.h.3': 0,
'transformer.h.4': 0,
'transformer.h.5': 0,
'transformer.h.6': 0,
'transformer.h.7': 0,
'transformer.h.8': 0,
'transformer.h.9': 0,
'transformer.h.10': 0,
'transformer.h.11': 0,
'transformer.h.12': 0,
'transformer.h.13': 0,
'transformer.h.14': 0,
'transformer.h.15': 0,
'transformer.h.16': 0,
'transformer.h.17': 0,
'transformer.h.18': 0,
'transformer.h.19': 0,
'transformer.h.20': 0,
'transformer.h.21': 0,
'transformer.h.22': 0,
'transformer.h.23': 'cpu',
'transformer.h.24': 'cpu',
'transformer.h.25': 'cpu',
'transformer.h.26': 'cpu',
'transformer.h.27': 'cpu',
'transformer.ln_f': 'cpu',
'lm_head': 'cpu'}
tokenizer = AutoTokenizer.from_pretrained("tomaxe/fr-boris-sharded")
model = AutoModelForCausalLM.from_pretrained("tomaxe/fr-boris-sharded", load_in_8bit = True, load_in_8bit_skip_modules = ['lm_head',
'transformer.ln_f',
'transformer.h.27',
'transformer.h.26',
'transformer.h.25',
'transformer.h.24',
'transformer.h.23'], device_map = device_map)
input_text = "salut"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length = 20)
print(tokenizer.decode(outputs[0]))
```
And I'm running into this error :
@younesbelkada Do you know what I could do ? Thanks
```===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: CUDA runtime path found: /home/thomas/anaconda3/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Loading checkpoint shards: 0%| | 0/30 [00:00<?, ?it/s]
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
A: torch.Size([2, 4096]), B: torch.Size([4096, 4096]), C: (2, 4096); (lda, ldb, ldc): (c_int(64), c_int(131072), c_int(64)); (m, n, k): (c_int(2), c_int(4096), c_int(4096))
Traceback (most recent call last):
File "/home/thomas/anaconda3/lib/python3.9/site-packages/spyder_kernels/py3compat.py", line 356, in compat_exec
exec(code, globals, locals)
File "/home/thomas/Downloads/infersharded.py", line 46, in <module>
outputs = model.generate(input_ids, max_length = 20)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/generation/utils.py", line 1391, in generate
return self.greedy_search(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/generation/utils.py", line 2179, in greedy_search
outputs = self(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 813, in forward
transformer_outputs = self.transformer(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 668, in forward
outputs = block(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 302, in forward
attn_outputs = self.attn(
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 203, in forward
query = self.q_proj(hidden_states)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/accelerate/hooks.py", line 156, in new_forward
output = old_forward(*args, **kwargs)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 254, in forward
out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 405, in matmul
return MatMul8bitLt.apply(A, B, out, bias, state)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 311, in forward
out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
File "/home/thomas/anaconda3/lib/python3.9/site-packages/bitsandbytes/functional.py", line 1410, in igemmlt
raise Exception('cublasLt ran into an error!')
Exception: cublasLt ran into an error!
cuBLAS API failed with status 15
error detected```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21371/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21370
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21370/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21370/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21370/events
|
https://github.com/huggingface/transformers/pull/21370
| 1,562,530,942
|
PR_kwDOCUB6oc5I0eOR
| 21,370
|
[CLAP] Add CLAP to the library
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Now we should create a new task : `zero-shot audio classification`! \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok! Looking very good, last thing is the `configuration` docstring. @younesbelkada will finish this 😉 ",
"Models are here (with nice model cards hehe):\r\n- https://huggingface.co/ybelkada/clap-htsat-fused\r\n- https://huggingface.co/ybelkada/clap-htsat-unfused\r\nWill transfer them on `laion-ai` once we got an approval :)",
"1. Some renaming has to happen to normalise the parameters around mel extraction. Will do this and everything should look good 😉 \r\n2. Add the dependency on torchvision as the np implementation received a big NO 😢 ",
"Found a few discrepancies with the various variables in the config that are not used. Will finish asap should makes things clearer. ",
"Ok I broke the history \r\n"
] | 1,675
| 1,676
| 1,676
|
COLLABORATOR
| null |
# What does this PR do?
Adds CLAP to the HF library cc @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21370/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21370/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21370",
"html_url": "https://github.com/huggingface/transformers/pull/21370",
"diff_url": "https://github.com/huggingface/transformers/pull/21370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21370.patch",
"merged_at": 1676577568000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21369
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21369/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21369/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21369/events
|
https://github.com/huggingface/transformers/issues/21369
| 1,562,508,607
|
I_kwDOCUB6oc5dIf0_
| 21,369
|
"Both `max_new_tokens` and `max_length` have been set but they serve the same purpose" when only setting max_new_tokens.
|
{
"login": "Gvanderl",
"id": 27513709,
"node_id": "MDQ6VXNlcjI3NTEzNzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27513709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gvanderl",
"html_url": "https://github.com/Gvanderl",
"followers_url": "https://api.github.com/users/Gvanderl/followers",
"following_url": "https://api.github.com/users/Gvanderl/following{/other_user}",
"gists_url": "https://api.github.com/users/Gvanderl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gvanderl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gvanderl/subscriptions",
"organizations_url": "https://api.github.com/users/Gvanderl/orgs",
"repos_url": "https://api.github.com/users/Gvanderl/repos",
"events_url": "https://api.github.com/users/Gvanderl/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gvanderl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @Gvanderl 👋 \r\nWe are aware of this issue (some downstream uses of `.generate()`, like the `pipeline`, fail when `max_new_tokens` is set). #21347 will fix it 🎉 \r\n\r\nAfter it gets merged, install from main and it should work! (i.e. `pip install --upgrade git+https://github.com/huggingface/transformers.git`)",
"Amazing, thanks for being so responsive.\r\nI'm closing the issue. ",
"Hello,\r\n\r\nTrying the new main, I encounter a new issue. It seems that `max_new_tokens` behaves like `max_length`.\r\nHowever, according to the documentation their behavior should be the following:\r\n\r\n> max_length (int, optional, defaults to 20) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. In general, prefer the use of max_new_tokens, which ignores the number of tokens in the prompt.\r\nmax_new_tokens (int, optional) — The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.\r\n\r\nUsing the code from above, I get the following stack trace:\r\n\r\n```\r\n--- Logging error ---\r\nTraceback (most recent call last):\r\n File \"AppData\\Local\\Programs\\Python\\Python310\\lib\\logging\\__init__.py\", line 1100, in emit\r\n msg = self.format(record)\r\n File \"AppData\\Local\\Programs\\Python\\Python310\\lib\\logging\\__init__.py\", line 943, in format\r\n return fmt.format(record)\r\n File \"AppData\\Local\\Programs\\Python\\Python310\\lib\\logging\\__init__.py\", line 678, in format\r\n record.message = record.getMessage()\r\n File \"AppData\\Local\\Programs\\Python\\Python310\\lib\\logging\\__init__.py\", line 368, in getMessage\r\n msg = msg % self.args\r\nTypeError: not all arguments converted during string formatting\r\nCall stack:\r\n File \"AppData\\Roaming\\JetBrains\\PyCharm2022.3\\scratches\\scratch.py\", line 6, in <module>\r\n summary = summarizer(prompt)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\text2text_generation.py\", line 265, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\text2text_generation.py\", line 165, in __call__\r\n result = super().__call__(*args, **kwargs)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 1089, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 1096, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 995, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\text2text_generation.py\", line 187, in _forward\r\n output_ids = self.model.generate(**model_inputs, **generate_kwargs)\r\n File \"venv\\lib\\site-packages\\torch\\autograd\\grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"venv\\lib\\site-packages\\transformers\\generation\\utils.py\", line 1285, in generate\r\n logger.warn(\r\nMessage: 'Both `max_new_tokens` (=50) and `max_length`(=51) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)'\r\nArguments: (<class 'UserWarning'>,)\r\nTraceback (most recent call last):\r\n File \"AppData\\Roaming\\JetBrains\\PyCharm2022.3\\scratches\\scratch.py\", line 6, in <module>\r\n summary = summarizer(prompt)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\text2text_generation.py\", line 265, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\text2text_generation.py\", line 165, in __call__\r\n result = super().__call__(*args, **kwargs)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 1089, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 1096, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 995, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"venv\\lib\\site-packages\\transformers\\pipelines\\text2text_generation.py\", line 187, in _forward\r\n output_ids = self.model.generate(**model_inputs, **generate_kwargs)\r\n File \"venv\\lib\\site-packages\\torch\\autograd\\grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"venv\\lib\\site-packages\\transformers\\generation\\utils.py\", line 1294, in generate\r\n raise ValueError(\r\nValueError: Unfeasible length constraints: the minimum length (56) is larger than the maximum length (51)\r\n\r\nProcess finished with exit code 1\r\n```\r\n\r\n\r\n",
"Hey @Gvanderl -- That's because `min_length` is set for that pipeline (in that case, `min_length=56`), and `min_length` has to be smaller than the maximum length you define :) \r\n\r\nYou can either decrease `min_length`, by setting it, or increase `max_new_tokens`",
"Oh I see.\r\nThis fixed the issue, though now every time it runs it gives me the following warning: \r\n```\r\nMessage: 'Both `max_new_tokens` (=50) and `max_length`(=51) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)'\r\n```\r\n\r\nAny way to get rid of it ? Setting `max_length=None` resulted in a crash. ",
"You can raise the level of the logger to ignore warnings :) I will be working on the pipelines in the coming week or two, that warning should disappear by then.",
"Can anyone please tell me how to pass the max_token_size. I am passing max_length in the generate method, still it is taking the default value\r\n inputs = processor(batch[\"audio\"][\"array\"], return_tensors=\"pt\",sampling_rate=16_000)\r\n generated_ids = model.generate(inputs=inputs.input_features,max_length=1000)\r\n print(len(generated_ids))\r\n batch[\"pred_str\"] = processor.batch_decode(generated_ids,skip_special_tokens=True,group_tokens=True,max_length=500)\r\n The length is not getting set , I am getting output till length 448. Please help me on this.",
"@enankobh that happens because, [if I'm seeing correctly](https://huggingface.co/openai/whisper-large-v2/blob/main/config.json#L44), Whisper's maximum output length is 448 tokens. @ArthurZucker can you confirm?"
] | 1,675
| 1,676
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.4
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante I believe.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install transformers
2. Run the following script
```
from transformers import pipeline
summarizer = pipeline("summarization", max_new_tokens=50)
prompt = "text to summarize"
summary = summarizer(prompt)
```
3. It crashes with the following error:
> ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
### Expected behavior
Expected behavior is that the script should run. I only set `max_new_tokens` and not `max_length`.
This might seem like a duplicate of https://github.com/huggingface/transformers/issues/20894 but the issue persists even after installing the latest version of transformers with the fix for that issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21369/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21368
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21368/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21368/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21368/events
|
https://github.com/huggingface/transformers/pull/21368
| 1,562,494,144
|
PR_kwDOCUB6oc5I0WOz
| 21,368
|
🚨🚨 Generate: standardize beam search behavior across frameworks
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> # What does this PR do?\r\n> Applies the discussion of #20901 into code. In a nutshell, standardizes beam search behavior across all three frameworks through `early_stopping`, keeping PT's behavior untouched for the previously accepted values of `early_stopping`.\r\n> \r\n> Changes:\r\n> \r\n> 1. `early_stopping` was changed from a binary variable (`True` or `False`, defaulting to `False`) to a ternary variable (`True`, `False`, or `\"never\"`, defaulting to `False`).\r\n> \r\n> * `early_stopping=True` means that beam search will stop whenever `num_beam` complete candidates are obtained, ignoring all room for improvement. No changes across all frameworks;\r\n> * `early_stopping=False` means that beam search will use a heuristic to stop. It effectively blocks minor \"tail\" improvements when `length_penalty` is positive (the default), while saving many beam search iterations. This was already PT's behavior for `early_stopping=False`, and is the new default for TF/FLAX;\r\n> * `early_stopping=\"never\"` means that beam search will only stop when it is mathematically impossible to improve. This was TF/FLAX's behavior for `early_stopping=False` (and is the canonical beam search implementation).\r\n> 2. As a consequence of 1.: PT users can now run the canonical beam search with `early_stopping=\"never\"`.\r\n> 3. As a consequence of 1.: TF users will notice a significant speedup if they keep the default generation parameters, while increasing `max_new_tokens`/`max_length`. This is the default case for the Marian models, and what triggered all these changes to begin with (thanks @ydshieh [Fix TF generation (especially for `TFMarian`) #20853](https://github.com/huggingface/transformers/pull/20853) ).\r\n> 4. As a consequence of 1.: Flax users will get the same benefits as TF users.\r\n> \r\n> Points 3. and 4. imply that there may be some minor differences in the output of `.generate()` with beam search on TF and FLAX. That difference should be very small (it has been PT's behavior all along, which is also our reference implementation) and will come with significant speedups. Still, being a numerically breaking change, it deserves a visible warning in the title (🚨).\r\n> \r\n> Fixes #18149\r\n> \r\n> Slow tests were ran across all 3 frameworks for:\r\n> \r\n> * [x] BART\r\n> * [x] GPT2\r\n> * [x] T5\r\n> * [x] Marian\r\n> \r\n> Speed test script\r\n> \r\n> ```python\r\n> from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel\r\n> import time\r\n> \r\n> model_name = \"Helsinki-NLP/opus-mt-en-ROMANCE\"\r\n> tokenizer = MarianTokenizer.from_pretrained(model_name)\r\n> text_in = ['>>fr<< hello']\r\n> \r\n> model = MarianMTModel.from_pretrained(model_name)\r\n> batch = tokenizer(text_in, return_tensors='pt', padding=True)\r\n> start = time.time()\r\n> translated = model.generate(**batch)\r\n> end = time.time()\r\n> output = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n> print(output)\r\n> print(end - start)\r\n> \r\n> model = TFMarianMTModel.from_pretrained(model_name)\r\n> batch = tokenizer(text_in, return_tensors='tf', padding=True)\r\n> start = time.time()\r\n> translated = model.generate(**batch)\r\n> end = time.time()\r\n> output = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n> print(output)\r\n> print(end - start)\r\n> ```\r\n> \r\n> Before the PR: same `output`, PT time = `0.084`, TF time = `50.85` (time in seconds, based on my local machine) After the PR: same `output`, PT time = `0.084`, TF time = `0.701` (time in seconds, based on my local machine) 👉 ~70x faster, same generated text\r\n\r\nHi @gante, thanks for solving the issue. It is still a little bit weird though, why does TF generate the same output much slower than PT (as stated: 0.084 vs 0.701)?",
"Hey @jamie0725 👋 TF eager mode is super slow 🙃 \r\n\r\nIf you compile the TF model (i.e. do `xla_generate = tf.function(model.generate, jit_compile=True)` then call `generate_output = xla_generate(input_ids, ...)`), you'll see that TF is probably faster than PT, depending on the hardware."
] | 1,675
| 1,684
| 1,675
|
MEMBER
| null |
# What does this PR do?
Applies the discussion of #20901 into code. In a nutshell, standardizes beam search behavior across all three frameworks through `early_stopping`, keeping PT's behavior untouched for the previously accepted values of `early_stopping`.
Changes:
1. `early_stopping` was changed from a binary variable (`True` or `False`, defaulting to `False`) to a ternary variable (`True`, `False`, or `"never"`, defaulting to `False`).
- `early_stopping=True` means that beam search will stop whenever `num_beam` complete candidates are obtained, ignoring all room for improvement. No changes across all frameworks;
- `early_stopping=False` means that beam search will use a heuristic to stop. It effectively blocks minor "tail" improvements when `length_penalty` is positive (the default), while saving many beam search iterations. This was already PT's behavior for `early_stopping=False`, and is the new default for TF/FLAX;
- `early_stopping="never"` means that beam search will only stop when it is mathematically impossible to improve. This was TF/FLAX's behavior for `early_stopping=False` (and is the canonical beam search implementation).
2. As a consequence of 1.: PT users can now run the canonical beam search with `early_stopping="never"`.
3. As a consequence of 1.: TF users will notice a significant speedup if they keep the default generation parameters, while increasing `max_new_tokens`/`max_length`. This is the default case for the Marian models, and what triggered all these changes to begin with (thanks @ydshieh #20853 ).
4. As a consequence of 1.: Flax users will get the same benefits as TF users.
Points 3. and 4. imply that there may be some minor differences in the output of `.generate()` with beam search on TF and FLAX. That difference should be very small (it has been PT's behavior all along, which is also our reference implementation) and will come with significant speedups. Still, being a numerically breaking change, it deserves a visible warning in the title (🚨).
Fixes https://github.com/huggingface/transformers/issues/18149
_______________________________________________________
Slow tests were ran across all 3 frameworks for:
- [x] BART
- [x] GPT2
- [x] T5
- [x] Marian
_______________________________________________________
Speed test script
```py
from transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel
import time
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
text_in = ['>>fr<< hello']
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(text_in, return_tensors='pt', padding=True)
start = time.time()
translated = model.generate(**batch)
end = time.time()
output = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(output)
print(end - start)
model = TFMarianMTModel.from_pretrained(model_name)
batch = tokenizer(text_in, return_tensors='tf', padding=True)
start = time.time()
translated = model.generate(**batch)
end = time.time()
output = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(output)
print(end - start)
```
Before the PR: same `output`, PT time = `0.084`, TF time = `50.85` (time in seconds, based on my local machine)
After the PR: same `output`, PT time = `0.084`, TF time = `0.701` (time in seconds, based on my local machine)
👉 ~70x faster, same generated text
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21368/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21368",
"html_url": "https://github.com/huggingface/transformers/pull/21368",
"diff_url": "https://github.com/huggingface/transformers/pull/21368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21368.patch",
"merged_at": 1675419842000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21367
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21367/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21367/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21367/events
|
https://github.com/huggingface/transformers/pull/21367
| 1,562,390,231
|
PR_kwDOCUB6oc5Iz_q2
| 21,367
|
Fixes path for Graphormer checkpoint
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@clefourrier We now get some other issues, see the [failed job run](https://github.com/huggingface/transformers/actions/runs/4060331156/jobs/6989383705)\r\n\r\nCould you take a look 🙏? Don't hesitate if you need some help. \r\n\r\nError message provided here\r\n```bash\r\nFAILED tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelTest::test_model_from_pretrained - RuntimeError: Error(s) in loading state_dict for GraphormerForGraphClassification:\r\n\tsize mismatch for classifier.classifier.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).\r\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.\r\nFAILED tests/models/graphormer/test_modeling_graphormer.py::GraphormerModelIntegrationTest::test_inference_graph_classification - RuntimeError: Error(s) in loading state_dict for GraphormerForGraphClassification:\r\n\tsize mismatch for classifier.classifier.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).\r\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.\r\n```\r\n\r\n",
"@ydshieh Opened PR #21419 to fix this!"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
@ydshieh - should fix the graphormer checkpoint path problem.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21367/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21367",
"html_url": "https://github.com/huggingface/transformers/pull/21367",
"diff_url": "https://github.com/huggingface/transformers/pull/21367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21367.patch",
"merged_at": 1675111685000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21366
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21366/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21366/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21366/events
|
https://github.com/huggingface/transformers/issues/21366
| 1,562,275,791
|
I_kwDOCUB6oc5dHm_P
| 21,366
|
ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
|
{
"login": "alhuri",
"id": 46427957,
"node_id": "MDQ6VXNlcjQ2NDI3OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/46427957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alhuri",
"html_url": "https://github.com/alhuri",
"followers_url": "https://api.github.com/users/alhuri/followers",
"following_url": "https://api.github.com/users/alhuri/following{/other_user}",
"gists_url": "https://api.github.com/users/alhuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alhuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alhuri/subscriptions",
"organizations_url": "https://api.github.com/users/alhuri/orgs",
"repos_url": "https://api.github.com/users/alhuri/repos",
"events_url": "https://api.github.com/users/alhuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/alhuri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Looks like an issue with the sentence-transformers library, not Transformers. Cc-ing @ArthurZucker who may other ideas.",
"I don't really but gonna try to have a look through the notebook. ",
"@alhuri could you provide a functioning notebook with the reproduction script? This one does not work for me (missing packages etc) with the config you are using? Thanks",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
I am trying to run the evaluation of both MCLIP on zero-shot learning task found on this notebook [colab](https://colab.research.google.com/drive/1zfWeVWY79XXH63Ci-pk8xxx3Vu_RRgW-?usp=sharing).
the model is loaded using the below code
```
if MODEL_TYPE == 'mClip':
from sentence_transformers import SentenceTransformer
# Here we load the multilingual CLIP model. Note, this model can only encode text.
# If you need embeddings for images, you must load the 'clip-ViT-B-32' model
se_language_model = SentenceTransformer('clip-ViT-B-32-multilingual-v1')
se_image_model = SentenceTransformer('clip-ViT-B-32')
language_model = lambda queries: se_language_model.encode(queries, convert_to_tensor=True, show_progress_bar=False).cpu().detach().numpy()
image_model = lambda images: se_image_model.encode(images, batch_size=1024, convert_to_tensor=True, show_progress_bar=False).cpu().detach().numpy()
```
when running the below prediction cell
```
top_ns = [1, 5, 10, 100]
acc_counters = [0. for _ in top_ns]
n = 0.
for i, (images, target) in enumerate(tqdm(loader)):
images = images
target = target.numpy()
# predict
image_features = image_model(images)
image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
logits = 100. * image_features @ zeroshot_weights
# measure accuracy
accs = accuracy(logits, target, topk=top_ns)
for j in range(len(top_ns)):
acc_counters[j] += accs[j]
n += images.shape[0]
tops = {f'top{top_ns[i]}': acc_counters[i] / n * 100 for i in range(len(top_ns))}
print(tops)
```
I get the below error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-41-3500c9b4df73> in <module>
11 target = target.numpy()
12 # predict
---> 13 image_features = image_model(images)
14 image_features = image_features / np.linalg.norm(image_features, axis=-1, keepdims=True)
15 logits = 100. * image_features @ zeroshot_weights
6 frames
<ipython-input-39-f2cc72683291> in <lambda>(images)
6 se_image_model = SentenceTransformer('clip-ViT-B-32')
7 language_model = lambda queries: se_language_model.encode(queries, convert_to_tensor=True, show_progress_bar=False).cpu().detach().numpy()
----> 8 image_model = lambda images: se_image_model.encode(images, batch_size=64, convert_to_tensor=False, show_progress_bar=False).cpu().detach().numpy()
9 elif MODEL_TYPE == 'bothclip':
10 import jax
/usr/local/lib/python3.8/dist-packages/sentence_transformers/SentenceTransformer.py in encode(self, sentences, batch_size, show_progress_bar, output_value, convert_to_numpy, convert_to_tensor, device, normalize_embeddings)
159 for start_index in trange(0, len(sentences), batch_size, desc="Batches", disable=not show_progress_bar):
160 sentences_batch = sentences_sorted[start_index:start_index+batch_size]
--> 161 print("sentences_batch")
162 print(sentences_batch)
163 features = self.tokenize(sentences_batch)
/usr/local/lib/python3.8/dist-packages/sentence_transformers/SentenceTransformer.py in tokenize(self, texts)
317 def tokenize(self, texts: Union[List[str], List[Dict], List[Tuple[str, str]]]):
318 """
--> 319 Tokenizes the texts
320 """
321 return self._first_module().tokenize(texts)
/usr/local/lib/python3.8/dist-packages/sentence_transformers/models/CLIPModel.py in tokenize(self, texts)
69 images = None
70
---> 71 inputs = self.processor(text=texts_values, images=images, return_tensors="pt", padding=True)
72 inputs['image_text_info'] = image_text_info
73 return inputs
/usr/local/lib/python3.8/dist-packages/transformers/models/clip/processing_clip.py in __call__(self, text, images, return_tensors, **kwargs)
97
98 if text is not None:
---> 99 encoding = self.tokenizer(text, return_tensors=return_tensors, **kwargs)
100
101 if images is not None:
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2525 if not self._in_target_context_manager:
2526 self._switch_to_input_mode()
-> 2527 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
2528 if text_target is not None:
2529 self._switch_to_target_mode()
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py in _call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2583
2584 if not _is_valid_text_input(text):
-> 2585 raise ValueError(
2586 "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
2587 "or `List[List[str]]` (batch of pretokenized examples)."
ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
How can this be fixed?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21366/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21365
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21365/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21365/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21365/events
|
https://github.com/huggingface/transformers/pull/21365
| 1,562,255,505
|
PR_kwDOCUB6oc5IziOI
| 21,365
|
Fix DETR tests after #21144
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Bug introduced with #21144 when checking whether annotations were batched. In part, because of a mismatch between type annotations and input types. Updated logic and annotations.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21365/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21365",
"html_url": "https://github.com/huggingface/transformers/pull/21365",
"diff_url": "https://github.com/huggingface/transformers/pull/21365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21365.patch",
"merged_at": 1675094100000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21364
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21364/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21364/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21364/events
|
https://github.com/huggingface/transformers/issues/21364
| 1,562,243,098
|
I_kwDOCUB6oc5dHfAa
| 21,364
|
Why the 'tokenizer' will return more than one token for single word in GPT2?
|
{
"login": "Zeqing-Wang",
"id": 58877116,
"node_id": "MDQ6VXNlcjU4ODc3MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/58877116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zeqing-Wang",
"html_url": "https://github.com/Zeqing-Wang",
"followers_url": "https://api.github.com/users/Zeqing-Wang/followers",
"following_url": "https://api.github.com/users/Zeqing-Wang/following{/other_user}",
"gists_url": "https://api.github.com/users/Zeqing-Wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zeqing-Wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zeqing-Wang/subscriptions",
"organizations_url": "https://api.github.com/users/Zeqing-Wang/orgs",
"repos_url": "https://api.github.com/users/Zeqing-Wang/repos",
"events_url": "https://api.github.com/users/Zeqing-Wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zeqing-Wang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask questions like this as we keep issues for bugs and feature requests only.\r\n\r\nMost Transformer models use subword tokenizers, which mean that one word can be split into several tokens. Here the GPT2 tokenizer splits `\"irving\"` into `[\"ir\", \"ving\"]`.",
"> Please use the [forums](https://discuss.huggingface.co/) to ask questions like this as we keep issues for bugs and feature requests only.\r\n> \r\n> Most Transformer models use subword tokenizers, which mean that one word can be split into several tokens. Here the GPT2 tokenizer splits `\"irving\"` into `[\"ir\", \"ving\"]`.\r\n\r\nI am sry for putting the question in the wrong place....\r\n\r\nand thanks for your help, I've got it!"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
I'm trying to use the GPT2 to get the feature of a single word. But i find not every single word will get a single token from the 'tokenizer'. I don't know why it's like that.
Here is my code:
```
def test_gpt2():
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model.to(torch.device("cuda"))
# model.feature_extractor._freeze_parameters()
text = 'irving'#"irving"
encoded_input = tokenizer(text, return_tensors='pt')
print(encoded_input)
encoded_input = encoded_input.to(torch.device("cuda"))
output = model(**encoded_input)
print(encoded_input)
# print(output.last_hidden_state)
print(output.last_hidden_state.shape)
```
and the output is:
{'input_ids': tensor([[ 81, 1075]]), 'attention_mask': tensor([[1, 1]])}
{'input_ids': tensor([[ 81, 1075]], device='cuda:0'), 'attention_mask': tensor([[1, 1]], device='cuda:0')}
torch.Size([1, 2, 768]
It seems like the word 'irving' is mapped to '81' and '1075'
It also happens on some other words.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21364/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21363
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21363/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21363/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21363/events
|
https://github.com/huggingface/transformers/issues/21363
| 1,562,163,644
|
I_kwDOCUB6oc5dHLm8
| 21,363
|
TPU out of memory (OOM) with flax train a language model GPT2
|
{
"login": "sarataylor2000",
"id": 121573171,
"node_id": "U_kgDOBz8PMw",
"avatar_url": "https://avatars.githubusercontent.com/u/121573171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarataylor2000",
"html_url": "https://github.com/sarataylor2000",
"followers_url": "https://api.github.com/users/sarataylor2000/followers",
"following_url": "https://api.github.com/users/sarataylor2000/following{/other_user}",
"gists_url": "https://api.github.com/users/sarataylor2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarataylor2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarataylor2000/subscriptions",
"organizations_url": "https://api.github.com/users/sarataylor2000/orgs",
"repos_url": "https://api.github.com/users/sarataylor2000/repos",
"events_url": "https://api.github.com/users/sarataylor2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarataylor2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Or you just need to use a smaller batch size to avoid the OOM error. cc @sanchit-gandhi who may have other ideas.",
"\r\n\r\n\r\n> Or you just need to use a smaller batch size to avoid the OOM error. cc @sanchit-gandhi who may have other ideas.\r\n\r\nthanks for the reply i really appreciate it\r\n smaller batch size has some disadvantages \r\n1- it takes huge amount of time for training\r\n2-it is impossible to train large models like gpt-neo 1.3B and 2.7B parameters if i set the batch size to 1 still i get the OOM error so training large models would be to tally impossible, \r\nany way do you think is there any reliable solution like transfer model on cpu then TPU or change the TPU memory limitation directly?\r\n",
"Hey @sarataylor2000!\r\n\r\n> the orginal repository use 64 batch size\r\n\r\nI believe this was the _effective_ batch size, which is computed as: `effective_batch_size = per_device_batch_size * num_devices`\r\n\r\nIn your script, you're setting `per_device_batch_size=256`. Supposing you're using a TPU v3-8, you have 8 TPU cores, which means you have 8 devices. This means your effective batch size is: 256 * 8 = 2048. This is a factor of 2048/64 = 32 times larger than the original repo!\r\n\r\nIf you need to use an effective batch size of 64, you can work out your per device batch size as: `per_device_batch_size = effective_batch_size / num_devices = 64 / 8 = 8`\r\n\r\nSo we only need a `per_device_batch_size=8` here!\r\n\r\n> it propose to implement the model on cup then transfer it to TPU but it did not explain where to transfer to cpu and when back to TPU\r\n\r\nThe OOM memory we're getting with your example is happening on the `pmap` step, i.e. when we're performing training. The model is loaded up perfectly fine, so no need to change the model loading logic (transferring from CPU -> TPU)\r\n\r\nYou can try setting `dtype=\"bfloat16\"` to load the params in bfloat16 precision and save memory on the model weights + optimiser states: https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/examples/flax/language-modeling/run_mlm_flax.py#L168",
"> Hey @sarataylor2000!\r\n> \r\n> > the orginal repository use 64 batch size\r\n> \r\n> I believe this was the _effective_ batch size, which is computed as: `effective_batch_size = per_device_batch_size * num_devices`\r\n> \r\n> In your script, you're setting `per_device_batch_size=256`. Supposing you're using a TPU v3-8, you have 8 TPU cores, which means you have 8 devices. This means your effective batch size is: 256 * 8 = 2048. This is a factor of 2048/64 = 32 times larger than the original repo!\r\n> \r\n> If you need to use an effective batch size of 64, you can work out your per device batch size as: `per_device_batch_size = effective_batch_size / num_devices = 64 / 8 = 8`\r\n> \r\n> So we only need a `per_device_batch_size=8` here!\r\n> \r\n> > it propose to implement the model on cup then transfer it to TPU but it did not explain where to transfer to cpu and when back to TPU\r\n> \r\n> The OOM memory we're getting with your example is happening on the `pmap` step, i.e. when we're performing training. The model is loaded up perfectly fine, so no need to change the model loading logic (transferring from CPU -> TPU)\r\n> \r\n> You can try setting `dtype=\"bfloat16\"` to load the params in bfloat16 precision and save memory on the model weights + optimiser states:\r\n> \r\n> https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/examples/flax/language-modeling/run_mlm_flax.py#L168\r\n\r\nfirst of thanks so much for reply i really appreciate it, \r\n\r\n1_the total batch is 256 and per devise batch is 32\r\n2_for changing the dtype it works for models up to T5_large but doesn't work with models like T5 x-large or T5 xx-large\r\nor gpt-nep 1.3 & 2.3 b parameter and gpt-J 8bit this big models if i set per device batch=1 NOT WORK AT ALL!\r\n**so is there any way to prevent OOM and fine-tune this big models?**\r\n \r\n\r\n",
"ok seems no one is capable to solve this TPU OOM problem i suspend here there would be some genuine guys but all SUCKS! No one know the solution",
"@sarataylor2000 We do not tolerate this kind of language in this repository. You can learn more by reading our code of conduct [here](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md). As a result, I have blocked you for seven days.\r\nThis is an opensource repository, you get all the code for free. You are not entitled to an answer or a free debugging session.",
"Hey @sarataylor2000,\r\n\r\nWhat TPU device are you using? A v3-8? If so, it's going to be difficult running super large models like T5 XXL using `pmap`, as each TPU core only has 16GB of memory.\r\n\r\nOne thing we should definitely try is enabling gradient checkpointing:\r\nhttps://github.com/huggingface/transformers/blob/21a2d900eceeded7be9edc445b56877b95eda4ca/examples/flax/language-modeling/run_mlm_flax.py#L110\r\n\r\nIf that doesn't work, we'll have to resort to some heavy engineering to make this work. This is very advanced and thus outside the scope of the `transformers` library, so I've left some pointers here:\r\n1. Add [`scan_with_axes`](https://github.com/google/flax/blob/1f6b0949d964fbc99f8f8b9541caff54226d0a78/flax/linen/partitioning.py#L378). See [seq2seq-speech/modeling_flax_bart.py](https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/models/modeling_flax_bart.py) for an example of the code changes you need to make here\r\n2. Use `pmap` to shard the model and activations across devices, see [JAX pmap](https://jax.readthedocs.io/en/latest/_autosummary/jax.pmap.html) and [bloom-jax-inference](https://github.com/huggingface/bloom-jax-inference/blob/main/bloom_inference/modeling_bloom/modeling_bloom.py)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
I'm training the GPT2 on the Flax and TPU i get TPU out of memory error while i have enough memory and without filling the memory it says out of memory like this:
** jax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: RESO**URCE_EXHAUSTED: Ran out of memory in memory space hbm. Used 17.91G of 15.48G hbm. Exceeded hbm capacity by 2.43G.**
Im training the below code:
the orginal repository use 64 batch size it works but when i use 256 batch size i get OOM TPU error while i have much more memory
Github page of below code:
**https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling**
**python run_clm_flax.py \
--output_dir="./norwegian-gpt2" \
--model_type="gpt2" \
--config_name="./norwegian-gpt2" \
--tokenizer_name="./norwegian-gpt2" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_no" \
--do_train --do_eval \
--block_size="512" \
--per_device_train_batch_size="256" \
--per_device_eval_batch_size="256" \
--learning_rate="5e-3" --warmup_steps="1000" \
--adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
--overwrite_output_dir \
--num_train_epochs="20" \
--logging_steps="500" \
--save_steps="2500" \
--eval_steps="2500" \
--push_to_hub**
in the github i found a similar erro:
**link: https://github.com/google/flax/discussions/1690**
it propose to implement the model on cup then transfer it to TPU but it did not explain where to transfer to cpu and when back to TPU:
here is the explanation to transfer on cup first:
**This is quite odd for sure. Fragmentation and being close to the limit in terms of memory could off course result in errors that appear almost randomly. One thing you could try is to initialize the model on CPU jax.jit(model.init, backend="cpu") The params are moved to TPU automatically during training or during replication of the state (eg jax_utils.replicate)**
Here is the FULL error i get while training on TPU:
Traceback (most recent call last):
File "/kaggle/input/yyyyyyyyyyyyy/casual_model_unsupervised_train.py.txt", line 845, in <module>
main()
File "/kaggle/input/yyyyyyyyyyyyy/casual_model_unsupervised_train.py.txt", line 752, in main
state, train_metric = p_train_step(state, batch)
File "/usr/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/_src/api.py", line 2026, in cache_miss
out_tree, out_flat = f_pmapped_(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/_src/api.py", line 1902, in pmap_f
out = pxla.xla_pmap(
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 1859, in bind
return map_bind(self, fun, *args, **params)
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 1891, in map_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 1862, in process
return trace.process_map(self, fun, tracers, params)
File "/usr/local/lib/python3.8/site-packages/jax/core.py", line 680, in process_call
return primitive.impl(f, *tracers, **params)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 792, in xla_pmap_impl
compiled_fun, fingerprint = parallel_callable(
File "/usr/local/lib/python3.8/site-packages/jax/linear_util.py", line 285, in memoized_fun
ans = call(fun, *args)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 823, in parallel_callable
pmap_executable = pmap_computation.compile()
File "/usr/local/lib/python3.8/site-packages/jax/_src/profiler.py", line 206, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1091, in compile
self._executable = PmapExecutable.from_hlo(self._hlo, **self.compile_args)
File "/usr/local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1214, in from_hlo
compiled = dispatch.compile_or_get_cached(
File "/usr/local/lib/python3.8/site-packages/jax/_src/dispatch.py", line 768, in compile_or_get_cached
return backend_compile(backend, computation, compile_options)
File "/usr/local/lib/python3.8/site-packages/jax/_src/profiler.py", line 206, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/jax/_src/dispatch.py", line 713, in backend_compile
return backend.compile(built_c, compile_options=options)
jax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Ran out of memory in memory space hbm. Used 17.91G of 15.48G hbm. Exceeded hbm capacity by 2.43G.
Total hbm usage >= 18.43G:
reserved 530.00M
program 17.91G
arguments 0B
Output size 0B; shares 0B with arguments.
Program hbm requirement 17.91G:
global 132.0K
HLO temp 17.91G (99.8% utilization: Unpadded (14.75G) Padded (14.78G), 17.5% fragmentation (3.13G))
Largest program allocations in hbm:
1. Size: 6.14G
Operator: op_name="pmap(train_step)/jit(main)/dot_general[dimension_numbers=(((2,), (0,)), ((), ())) precision=None preferred_element_type=None]" source_file="/usr/local/lib/python3.8/site-packages/flax/linen/linear.py" source_line=196
Shape: f32[64,511,50257]{1,2,0:T(8,128)}
Unpadded size: 6.12G
Extra memory due to padding: 13.14M (1.0x expansion)
XLA label: fusion.3719 = fusion(get-tuple-element.1407, bitcast.1602), kind=kOutput, calls=fused_computation.2774
Allocation type: HLO temp
==========================
2. Size: 3.07G
Operator: op_name="pmap(train_step)/jit(main)/jit(transpose(jvp(log_softmax)))/add_any" source_file="/usr/local/lib/python3.8/site-packages/optax/_src/loss.py" source_line=172
Shape: bf16[64,511,50257]{1,2,0:T(8,128)(2,1)}
Unpadded size: 3.06G
Extra memory due to padding: 6.57M (1.0x expansion)
XLA label: fusion.8 = fusion(get-tuple-element.1576, get-tuple-element.1575, slice.3481, divide.161), kind=kLoop, calls=fused_computation.8
Allocation type: HLO temp
==========================
**Does any smart guy know the solution to this problem?**
thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21363/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21362
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21362/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21362/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21362/events
|
https://github.com/huggingface/transformers/pull/21362
| 1,561,867,132
|
PR_kwDOCUB6oc5IyO4D
| 21,362
|
Fix `GitModelIntegrationTest.test_batched_generation` device issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
In PR #21282, `input_ids` is not on the target device and CI fails with
```bash
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21362/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21362",
"html_url": "https://github.com/huggingface/transformers/pull/21362",
"diff_url": "https://github.com/huggingface/transformers/pull/21362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21362.patch",
"merged_at": 1675071477000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21361
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21361/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21361/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21361/events
|
https://github.com/huggingface/transformers/pull/21361
| 1,561,835,759
|
PR_kwDOCUB6oc5IyIMR
| 21,361
|
Fix TextGeneration and Text2TextGeneration pipeline issue with return_dict_in_generate
|
{
"login": "tokestermw",
"id": 4722119,
"node_id": "MDQ6VXNlcjQ3MjIxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4722119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tokestermw",
"html_url": "https://github.com/tokestermw",
"followers_url": "https://api.github.com/users/tokestermw/followers",
"following_url": "https://api.github.com/users/tokestermw/following{/other_user}",
"gists_url": "https://api.github.com/users/tokestermw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tokestermw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tokestermw/subscriptions",
"organizations_url": "https://api.github.com/users/tokestermw/orgs",
"repos_url": "https://api.github.com/users/tokestermw/repos",
"events_url": "https://api.github.com/users/tokestermw/events{/privacy}",
"received_events_url": "https://api.github.com/users/tokestermw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21361). All of your documentation changes will be reflected on that endpoint.",
"> Hi thank you for this PR. Do you mind sharing in which context you need to use this return changing flag?\r\n\r\nCurrently I use the scores for a custom stopping criterion (e.g. stop when cumulative probs are > x).",
"@tokestermw interesting! `pipeline` only returns the output sequence, but to store the `scores` internally in `.generate()`, `return_dict_in_generate` is indeed needed. Is this what is happening in your use case?",
"@gante yes that's right! to access the `scores` here: https://github.com/huggingface/transformers/blob/42b60f8b02941b0c40c42e150a101eb372c3856e/src/transformers/generation/stopping_criteria.py#L37\r\n",
"> Currently I use the scores for a custom stopping criterion (e.g. stop when cumulative probs are > x).\r\n\r\nShouldn't that be done with a custom `StoppingCriteria` ? \r\n\r\n```python\r\nclass MyStoppingCriteria:\r\n def __init__():\r\n self.cumulative = 0.0\r\n\r\n def __call__():\r\n self.cumulative += scores\r\n if self.cumulative > \r\n return True\r\n else:\r\n return False\r\n```\r\n\r\nThis PR a small enough to be ok anyway, just wondering if there's not a \"cleaner\" way for you to solve your issue.",
"@Narsil we do use a custom `StoppingCriteria`, and use it inside pipelines.\r\n\r\nBut yeah arguably custom stuff should maybe be done outside of pipelines.\r\n\r\nSidenote is that we can't currently pass in tokenizer args like `truncation=True` in pipelines, so we've had to either write a custom pipeline, or rewrite to not use pipelines :)",
"> Sidenote is that we can't currently pass in tokenizer args like truncation=True in pipelines, so we've had to either write a custom pipeline, or rewrite to not use pipelines :)\r\n\r\nWe could definitely add `tokenizer_kwargs` to enable all the args you want on the tokenizer.\r\n\r\nAnother way of doing it would be by subclassing\r\n\r\nhttps://huggingface.co/docs/transformers/v4.26.0/en/add_new_pipeline#how-to-create-a-custom-pipeline\r\n\r\nAnd then using `pipeline(...., pipeline_class=MyPipelineClass)` to use it instead of the default one.\r\nThen you can add all the fancy logic you need.\r\n\r\nThe docs also refer on how to share it !\r\n",
"Thanks @Narsil ! i can take a look at `tokenizer_kwargs` in a separate PR, please let me know if anything else is needed in this PR",
"Seems good, just don't pass all `tokenizer_kwargs` as a generic `**kwargs`. \r\n\r\nLike let's capture `pipeline(...tokenizer_kwargs={\"truncation\": True})` so that it cannot clash with `generate_kwargs` (inconveninent but necessary, `max_length` is an argument for both)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @echarlaix, potentialy including `tokenizer_kwargs` is planned 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
When running `return_dict_in_generate=True` in pipelines, the `generate` method errors out, e.g.
```python
generate_kwargs["min_length"] = generate_kwargs.get("min_length", self.model.config.min_length)
generate_kwargs["max_length"] = generate_kwargs.get("max_length", self.model.config.max_length)
self.check_inputs(input_length, generate_kwargs["min_length"], generate_kwargs["max_length"])
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
E AttributeError: 'GreedySearchEncoderDecoderOutput' object has no attribute 'shape'
```
Reproducible code:
```python
from transformers import pipeline
generator = pipeline('text-generation', 'gpt2')
generator('hello', return_dict_in_generate=True)
```
Fix is to add a check if `return_dict_in_generate=True`, then let `generated_sequence = generated_sequence.sequences`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante, @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21361/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21361",
"html_url": "https://github.com/huggingface/transformers/pull/21361",
"diff_url": "https://github.com/huggingface/transformers/pull/21361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21361.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21360
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21360/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21360/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21360/events
|
https://github.com/huggingface/transformers/issues/21360
| 1,561,703,048
|
I_kwDOCUB6oc5dFbKI
| 21,360
|
Segment Fault when exporting GPT-2 as ONNX format
|
{
"login": "BorisPolonsky",
"id": 12964401,
"node_id": "MDQ6VXNlcjEyOTY0NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/12964401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BorisPolonsky",
"html_url": "https://github.com/BorisPolonsky",
"followers_url": "https://api.github.com/users/BorisPolonsky/followers",
"following_url": "https://api.github.com/users/BorisPolonsky/following{/other_user}",
"gists_url": "https://api.github.com/users/BorisPolonsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BorisPolonsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BorisPolonsky/subscriptions",
"organizations_url": "https://api.github.com/users/BorisPolonsky/orgs",
"repos_url": "https://api.github.com/users/BorisPolonsky/repos",
"events_url": "https://api.github.com/users/BorisPolonsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/BorisPolonsky/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Make sure you have the optimum library installed to have the latest version of our ONNX export, and if the bug still persists, please open an issue in that repo as they will be able to help you :-) ",
"> Make sure you have the optimum library installed to have the latest version of our ONNX export, and if the bug still persists, please open an issue in that repo as they will be able to help you :-)\r\n\r\nI've update the libraries via `pip3 install -U transformers optimum onnxruntime` and executed the script above and things ended up with `Segement Fault` despite having fewer warnings than before\r\n```\r\n['last_hidden_state']\r\nONNX OP Seet: 13\r\n/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.\r\n warnings.warn(\"'enable_onnx_checker' is deprecated and ignored. It will be removed in \"\r\n/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.\r\n warnings.warn(\"`use_external_data_format' is deprecated and ignored. Will be removed in next \"\r\n/opt/conda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:794: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if batch_size <= 0:\r\nSegmentation fault\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any response?",
"Hi @BorisPolonsky , the transformers ONNX export is not maintained anymore, and you should install `optimum` to get the latest updates related to the ONNX export, see https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model\r\n\r\n```\r\npip install -U optimum\r\noptimum-cli export onnx --model gpt2 gpt2_onnx/\r\n```\r\n\r\nIf you encounter an issue with this command, feel free to open an issue in https://github.com/huggingface/optimum/issues and I will have a second look.\r\n\r\nThank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Migrating to optimum."
] | 1,675
| 1,680
| 1,680
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. launch container with `pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime`
2. install `transformers` and `onnxruntime` with `pip`
3. Execute the following script
```
# %% [markdown]
# ## Reference
# - [ONNX Tutorial by HuggingFace](https://huggingface.co/docs/transformers/serialization)
# - [Custom ONNX Config for HuggingFace Transformers](https://huggingface.co/docs/transformers/serialization#implementing-a-custom-onnx-configuration)
#
# %%
import torch as t
from transformers import GPT2Tokenizer, GPT2Model
from transformers.models.gpt2 import GPT2Config, GPT2OnnxConfig
# %%
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
# %%
model.eval()
# %%
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
output
# %%
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel
onnx_path = Path("gpt2-.onnx")
onnx_config = GPT2OnnxConfig(model.config)
print(list(onnx_config.outputs.keys()))
print(f"ONNX OP Seet: {onnx_config.default_onnx_opset}")
onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
Log:
```
['last_hidden_state']
ONNX OP Seet: 13
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
/opt/conda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:796: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if batch_size <= 0:
Segmentation fault
(base) root@f2f3e984f0bd:/home/polonsky/Documents/model-infernce-demo# /opt/conda/bin/python /home/polonsky/Documents/model-infernce-demo/torchscript/gpt2_onnx.py
['last_hidden_state']
ONNX OP Seet: 13
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError.
warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in "
/opt/conda/lib/python3.7/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers.
warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next "
/opt/conda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:796: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if batch_size <= 0:
Segmentation fault
```
### Expected behavior
Yield model in onnx for production use
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21360/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21359
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21359/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21359/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21359/events
|
https://github.com/huggingface/transformers/issues/21359
| 1,561,473,666
|
I_kwDOCUB6oc5dEjKC
| 21,359
|
Difference between FlaxViTModel and FlaxCLIPVisionTransformer
|
{
"login": "ashors1",
"id": 71393111,
"node_id": "MDQ6VXNlcjcxMzkzMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/71393111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashors1",
"html_url": "https://github.com/ashors1",
"followers_url": "https://api.github.com/users/ashors1/followers",
"following_url": "https://api.github.com/users/ashors1/following{/other_user}",
"gists_url": "https://api.github.com/users/ashors1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashors1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashors1/subscriptions",
"organizations_url": "https://api.github.com/users/ashors1/orgs",
"repos_url": "https://api.github.com/users/ashors1/repos",
"events_url": "https://api.github.com/users/ashors1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashors1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.",
"I think they're equivalent (one would need to check they are both applying pre-norm or post-norm etc.), but the reason we have 2 different implementations is because Transformers has a [one model, one file philosophy](https://huggingface.co/blog/transformers-design-philosophy)."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
Hi, I noticed that HuggingFace has two different Flax-based implementations of the vision transformer, the [FlaxCLIPVisionTransformer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_flax_clip.py#L535) and the [FlaxViTModule](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_flax_vit.py#L507). Is there a reason there is a separate ViT implementation for CLIP? Are there any notable differences between the FlaxCLIPVisionTransformer and the FlaxViTModel? Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21359/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21358
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21358/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21358/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21358/events
|
https://github.com/huggingface/transformers/issues/21358
| 1,561,421,463
|
I_kwDOCUB6oc5dEWaX
| 21,358
|
`TFAutoModelForSequenceClassification` Onnx export not working
|
{
"login": "rohitgr7",
"id": 30778939,
"node_id": "MDQ6VXNlcjMwNzc4OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/30778939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohitgr7",
"html_url": "https://github.com/rohitgr7",
"followers_url": "https://api.github.com/users/rohitgr7/followers",
"following_url": "https://api.github.com/users/rohitgr7/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitgr7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohitgr7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitgr7/subscriptions",
"organizations_url": "https://api.github.com/users/rohitgr7/orgs",
"repos_url": "https://api.github.com/users/rohitgr7/repos",
"events_url": "https://api.github.com/users/rohitgr7/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohitgr7/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Uhmmm I am not knowledgeable about ONNX -- maybe @michaelbenayoun has some ideas? 🤔 ",
"It works on my side. In any case you can try exporting it with optimum:\r\n\r\n```python\r\noptimum-cli export onnx --model distilbert-base-uncased onnx/\r\n```",
"seems like the issue is with `numpy==1.24.1`. Works with `numpy==1.21.6`. Maybe tf2onnx needs to make some updates. Closing this. Thank you both 😃 "
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante and @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run:
```py
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
# Load tokenizer and TensorFlow weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# Save to disk
tokenizer.save_pretrained("local-tf-checkpoint")
tf_model.save_pretrained("local-tf-checkpoint")
```
Then:
```console
python -m transformers.onnx --model=local-tf-checkpoint onnx/
```
Taken directly from here: https://huggingface.co/docs/transformers/serialization
Looks like some package version issue since I installed it with
```console
pip install transformers[tf,onnx]
```
### Expected behavior
Error
```console
2023-01-30 00:32:22.939113: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Local TensorFlow model found.
Framework not requested. Using tf2onnx to export to ONNX.
2023-01-30 00:32:25.820716: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Some layers from the model checkpoint at local-tf-checkpoint were not used when initializing TFDistilBertModel: ['dropout_19', 'pre_classifier', 'classifier']
- This IS expected if you are initializing TFDistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFDistilBertModel were initialized from the model checkpoint at local-tf-checkpoint.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertModel for predictions without further training.
WARNING:tensorflow:From /Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
2023-01-30 00:32:29.729090: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2023-01-30 00:32:37.305540: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py:58: FutureWarning: In the future `np.str` will be defined as the corresponding NumPy scalar. (This may have returned Python scalars in past versions.
np_data = np_data.astype(np.str).astype(object)
Traceback (most recent call last):
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 58, in tf_to_onnx_tensor
np_data = np_data.astype(np.str).astype(object)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 240, in <module>
main()
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 232, in main
export_with_transformers(args)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 165, in export_with_transformers
onnx_inputs, onnx_outputs = export(
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/convert.py", line 355, in export
return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/transformers/onnx/convert.py", line 282, in export_tensorflow
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/convert.py", line 494, in from_keras
model_proto, external_tensor_storage = _convert_common(
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/convert.py", line 164, in _convert_common
g = process_tf_graph(tf_graph, const_node_values=const_node_values,
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph
main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tfonnx.py", line 474, in graphs_from_tf
ordered_func = resolve_functions(tf_graph)
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_loader.py", line 760, in resolve_functions
_, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 441, in tflist_to_onnx
onnx_tensor = tf_to_onnx_tensor(value, name=port_name(node.name))
File "/Users/goku/miniconda3/envs/tmp/lib/python3.9/site-packages/tf2onnx/tf_utils.py", line 63, in tf_to_onnx_tensor
raise RuntimeError("Not support type: {}".format(type(np_data.flat[0])))
RuntimeError: Not support type: <class 'bytes'>
```
Should export models without any error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21358/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21357
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21357/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21357/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21357/events
|
https://github.com/huggingface/transformers/issues/21357
| 1,561,338,865
|
I_kwDOCUB6oc5dECPx
| 21,357
|
'T5Config' object has no attribute '__deepcopy__'
|
{
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Reproduction
The following fails for me:
```python
model_name = "google/t5-v1_1-small"
config = AutoConfig.from_pretrained(model_name)
model_ref = T5ForConditionalGeneration._from_config(config)
'T5Config' object has no attribute '__deepcopy__'
File "/home/nouamane/projects/transformers/src/transformers/configuration_utils.py", line 260, in __getattribute__ (Current frame)
return super().__getattribute__(key)
File "/home/nouamane/miniconda/envs/py38/lib/python3.8/copy.py", line 151, in deepcopy
copier = getattr(x, "__deepcopy__", None)
File "/home/nouamane/projects/transformers/src/transformers/models/t5/modeling_t5.py", line 1498, in __init__
encoder_config = copy.deepcopy(config)
File "/home/nouamane/projects/transformers/src/transformers/modeling_utils.py", line 1077, in _from_config
model = cls(config, **kwargs)
File "/home/nouamane/projects/brrr/examples/t5/test_t5.py", line 78, in test_t5
model_ref = T5ForConditionalGeneration._from_config(config)
File "/home/nouamane/projects/brrr/examples/t5/test_t5.py", line 298, in <module>
test_t5()
AttributeError: 'T5Config' object has no attribute '__deepcopy__'
```
Could be a cache problem (because I just installed transformers from source today)
### Expected behavior
Loading config should work fine
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21357/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21356
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21356/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21356/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21356/events
|
https://github.com/huggingface/transformers/pull/21356
| 1,561,209,518
|
PR_kwDOCUB6oc5IwFCN
| 21,356
|
Patch rag tf generate
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Just need to rebase",
"Oups seems that this is to prehemptive! This will be for the tf timestamps PR #21334 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21356). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Patches the failing tests on `main`. After merging #21324, which added support for `logits_processor` in TF framework.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21356/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21356",
"html_url": "https://github.com/huggingface/transformers/pull/21356",
"diff_url": "https://github.com/huggingface/transformers/pull/21356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21356.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21355
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21355/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21355/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21355/events
|
https://github.com/huggingface/transformers/issues/21355
| 1,561,199,327
|
I_kwDOCUB6oc5dDgLf
| 21,355
|
Is a Transformer-based image caption model trained to predict the last token only in training phase?
|
{
"login": "adeljalalyousif",
"id": 97432157,
"node_id": "U_kgDOBc6yXQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97432157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeljalalyousif",
"html_url": "https://github.com/adeljalalyousif",
"followers_url": "https://api.github.com/users/adeljalalyousif/followers",
"following_url": "https://api.github.com/users/adeljalalyousif/following{/other_user}",
"gists_url": "https://api.github.com/users/adeljalalyousif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeljalalyousif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeljalalyousif/subscriptions",
"organizations_url": "https://api.github.com/users/adeljalalyousif/orgs",
"repos_url": "https://api.github.com/users/adeljalalyousif/repos",
"events_url": "https://api.github.com/users/adeljalalyousif/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeljalalyousif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should ask your question on the [forums](https://discuss.huggingface.co/) where the community can help you, as we keep issues for bugs and feature requests only.",
"Thanks for this advice"
] | 1,674
| 1,675
| 1,675
|
NONE
| null |
For the following code (which is snippet from https://keras.io/examples/vision/image_captioning/), I do not see the steps for entering the input sequence to model token by token in training phase. But instead of that, the input sequence is entered at once except the last toke by: batch_seq_inp = batch_seq[:, :-1] in the function def _compute_caption_loss_and_acc as shown below. Based on my knowladge, if we have an image that is captioned with a sentence like (image_1 : a man is running), the input output pair in training should be like:
image_1 SOS ==> a
image_1 SOS a ==> man
image_1 SOS a man ==> is
image_1 SOS a man is ==> running
image_1 SOS a man is running ==> END
So I am little confused.
class ImageCaptioningModel(keras.Model):
def __init__(
self, cnn_model, encoder, decoder, num_captions_per_image=5, image_aug=None,
):
super().__init__()
self.cnn_model = cnn_model
self.encoder = encoder
self.decoder = decoder
self.loss_tracker = keras.metrics.Mean(name="loss")
self.acc_tracker = keras.metrics.Mean(name="accuracy")
self.num_captions_per_image = num_captions_per_image
self.image_aug = image_aug
def calculate_loss(self, y_true, y_pred, mask):
loss = self.loss(y_true, y_pred)
mask = tf.cast(mask, dtype=loss.dtype)
loss *= mask
return tf.reduce_sum(loss) / tf.reduce_sum(mask)
def calculate_accuracy(self, y_true, y_pred, mask):
accuracy = tf.equal(y_true, tf.argmax(y_pred, axis=2))
accuracy = tf.math.logical_and(mask, accuracy)
accuracy = tf.cast(accuracy, dtype=tf.float32)
mask = tf.cast(mask, dtype=tf.float32)
return tf.reduce_sum(accuracy) / tf.reduce_sum(mask)
def _compute_caption_loss_and_acc(self, img_embed, batch_seq, training=True):
encoder_out = self.encoder(img_embed, training=training)
batch_seq_inp = batch_seq[:, :-1]
batch_seq_true = batch_seq[:, 1:]
mask = tf.math.not_equal(batch_seq_true, 0)
batch_seq_pred = self.decoder(
batch_seq_inp, encoder_out, training=training, mask=mask
)
loss = self.calculate_loss(batch_seq_true, batch_seq_pred, mask)
acc = self.calculate_accuracy(batch_seq_true, batch_seq_pred, mask)
return loss, acc
def train_step(self, batch_data):
batch_img, batch_seq = batch_data
batch_loss = 0
batch_acc = 0
if self.image_aug:
batch_img = self.image_aug(batch_img)
# 1. Get image embeddings
img_embed = self.cnn_model(batch_img)
# 2. Pass each of the five captions one by one to the decoder
# along with the encoder outputs and compute the loss as well as accuracy
# for each caption.
for i in range(self.num_captions_per_image):
with tf.GradientTape() as tape:
loss, acc = self._compute_caption_loss_and_acc(
img_embed, batch_seq[:, i, :], training=True
)
# 3. Update loss and accuracy
batch_loss += loss
batch_acc += acc
# 4. Get the list of all the trainable weights
train_vars = (
self.encoder.trainable_variables + self.decoder.trainable_variables
)
# 5. Get the gradients
grads = tape.gradient(loss, train_vars)
# 6. Update the trainable weights
self.optimizer.apply_gradients(zip(grads, train_vars))
# 7. Update the trackers
batch_acc /= float(self.num_captions_per_image)
self.loss_tracker.update_state(batch_loss)
self.acc_tracker.update_state(batch_acc)
# 8. Return the loss and accuracy values
return {"loss": self.loss_tracker.result(), "acc": self.acc_tracker.result()}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21355/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21354
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21354/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21354/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21354/events
|
https://github.com/huggingface/transformers/pull/21354
| 1,561,179,908
|
PR_kwDOCUB6oc5Iv--t
| 21,354
|
fix the issue that the output dict of jit model could not get [0]
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @yao-matrix please help to review",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
when the model is optimized by jit.trace. and then used in pipeline inference for token classification. there is error like KeyError: 0,
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
- pipelines: @Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21354/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21354",
"html_url": "https://github.com/huggingface/transformers/pull/21354",
"diff_url": "https://github.com/huggingface/transformers/pull/21354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21354.patch",
"merged_at": 1675088636000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21353
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21353/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21353/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21353/events
|
https://github.com/huggingface/transformers/issues/21353
| 1,561,171,545
|
I_kwDOCUB6oc5dDZZZ
| 21,353
|
Megatron-11B
|
{
"login": "KnutJaegersberg",
"id": 17965169,
"node_id": "MDQ6VXNlcjE3OTY1MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/17965169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KnutJaegersberg",
"html_url": "https://github.com/KnutJaegersberg",
"followers_url": "https://api.github.com/users/KnutJaegersberg/followers",
"following_url": "https://api.github.com/users/KnutJaegersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/KnutJaegersberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KnutJaegersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KnutJaegersberg/subscriptions",
"organizations_url": "https://api.github.com/users/KnutJaegersberg/orgs",
"repos_url": "https://api.github.com/users/KnutJaegersberg/repos",
"events_url": "https://api.github.com/users/KnutJaegersberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/KnutJaegersberg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"I'm still learning prompting, but after poking around a bit with this model using the pylib I shared above, I found the generated text was of quite low quality for the model size. I got way better results with GPT-J and even GPT-2. \r\nDespite its size, I doubt this model is useful enough. ",
"When I got coherent sentences out, it switched topics every second sentence. "
] | 1,674
| 1,675
| null |
NONE
| null |
### Model description
I discovered two implementations of this facebook model on the hub, which was trained on the same corpus as Roberta/Bert. I want to try out some prompting, but when I try to download it with
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("hyunwoongko/megatron-11B")
I get a
KeyError: 'megatron' exception.
This is a relatively sizable model which is as interesting as gpt-j to me, does it work with transformers?
I found one of the two model re-publishers below wrote their own library, but it depends on outdated versions of transformer. Can we use this model with transformers?
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The two megatron-to-pytorch models on the hub:
https://huggingface.co/models?search=megatron-11
Extra py lib which should make it work:
https://pypi.org/project/megatron-11b/
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21353/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21352
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21352/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21352/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21352/events
|
https://github.com/huggingface/transformers/pull/21352
| 1,561,136,657
|
PR_kwDOCUB6oc5Iv2Rn
| 21,352
|
Remove duplicate declarations in dummy inputs for TFLongformer
|
{
"login": "peakji",
"id": 219051,
"node_id": "MDQ6VXNlcjIxOTA1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/219051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peakji",
"html_url": "https://github.com/peakji",
"followers_url": "https://api.github.com/users/peakji/followers",
"following_url": "https://api.github.com/users/peakji/following{/other_user}",
"gists_url": "https://api.github.com/users/peakji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peakji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peakji/subscriptions",
"organizations_url": "https://api.github.com/users/peakji/orgs",
"repos_url": "https://api.github.com/users/peakji/repos",
"events_url": "https://api.github.com/users/peakji/events{/privacy}",
"received_events_url": "https://api.github.com/users/peakji/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Remove duplicated lines in [modeling_tf_longformer.py](https://github.com/huggingface/transformers/compare/main...peakji:transformers:patch-1#diff-782b222e9d393fe6750cf8e4cd870bcf3748a92ade5086e518b4d716a80080f8).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21352/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21352",
"html_url": "https://github.com/huggingface/transformers/pull/21352",
"diff_url": "https://github.com/huggingface/transformers/pull/21352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21352.patch",
"merged_at": 1675090999000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21351
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21351/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21351/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21351/events
|
https://github.com/huggingface/transformers/pull/21351
| 1,561,120,893
|
PR_kwDOCUB6oc5IvzNO
| 21,351
|
translate index to zh(#20095)
|
{
"login": "bfss",
"id": 31245245,
"node_id": "MDQ6VXNlcjMxMjQ1MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/31245245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bfss",
"html_url": "https://github.com/bfss",
"followers_url": "https://api.github.com/users/bfss/followers",
"following_url": "https://api.github.com/users/bfss/following{/other_user}",
"gists_url": "https://api.github.com/users/bfss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bfss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bfss/subscriptions",
"organizations_url": "https://api.github.com/users/bfss/orgs",
"repos_url": "https://api.github.com/users/bfss/repos",
"events_url": "https://api.github.com/users/bfss/events{/privacy}",
"received_events_url": "https://api.github.com/users/bfss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Could you have a look please?",
"@ydshieh I'm ok with the translation."
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Translate index doc to zh
#20095
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21351/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21351",
"html_url": "https://github.com/huggingface/transformers/pull/21351",
"diff_url": "https://github.com/huggingface/transformers/pull/21351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21351.patch",
"merged_at": 1675115458000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21350
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21350/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21350/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21350/events
|
https://github.com/huggingface/transformers/pull/21350
| 1,561,097,235
|
PR_kwDOCUB6oc5Ivuiz
| 21,350
|
Corrected
|
{
"login": "HsiangNianian",
"id": 44714368,
"node_id": "MDQ6VXNlcjQ0NzE0MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/44714368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HsiangNianian",
"html_url": "https://github.com/HsiangNianian",
"followers_url": "https://api.github.com/users/HsiangNianian/followers",
"following_url": "https://api.github.com/users/HsiangNianian/following{/other_user}",
"gists_url": "https://api.github.com/users/HsiangNianian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HsiangNianian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HsiangNianian/subscriptions",
"organizations_url": "https://api.github.com/users/HsiangNianian/orgs",
"repos_url": "https://api.github.com/users/HsiangNianian/repos",
"events_url": "https://api.github.com/users/HsiangNianian/events{/privacy}",
"received_events_url": "https://api.github.com/users/HsiangNianian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh could have a quick look?"
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR.
@sgugger , @stevhliu & @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21350/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21350",
"html_url": "https://github.com/huggingface/transformers/pull/21350",
"diff_url": "https://github.com/huggingface/transformers/pull/21350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21350.patch",
"merged_at": 1675089495000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21349
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21349/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21349/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21349/events
|
https://github.com/huggingface/transformers/pull/21349
| 1,560,966,222
|
PR_kwDOCUB6oc5IvVET
| 21,349
|
Add Ernie-M Model to huggingface
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Great work @susnato ! Looking forward to reviewing your PR :) \r\nLet us know when you think the PR is ready",
"Hi @younesbelkada the official paddlenlp implementation of ErnieM does not have any LM head class, since it was neither trained on Causal nor on Masked LM. It was pretrained on both Cross Attention Masked LM and Back Translation Masked LM (both implementations are missing in paddlenlp). Do I need to add MaskedLM in this huggingface implementation since it's a encoder based model or should I bypass it and don't include any LM head like the paddlenlp implementation did?",
"Hi @susnato !\r\nThanks for your your message\r\nI think this quite depends on the use case of your model. I'd expect most of the users will rely on `ErnieMModel` since it's the model that is present at `paddlepaddle`. If there is an interest to add these models in the future we can always open follow-up PRs\r\n",
"> Hi @susnato ! Thanks for your your message I think this quite depends on the use case of your model. I'd expect most of the users will rely on `ErnieMModel` since it's the model that is present at `paddlepaddle`. If there is an interest to add these models in the future we can always open follow-up PRs\r\n\r\n@younesbelkada Ok, then I will not add any LMhead for now, and also the rest of the model is ready(with all tests passed), I am currently looking why circleci tests are failing.",
"Thanks! \r\nCurrently some tests are not passing because you need to define a `ERNIE_M_PRETRAINED_CONFIG_ARCHIVE_MAP` inside`configuration_ernie_m.py`, check here how it is done for `bert`: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/configuration_bert.py",
"Hi @younesbelkada I added that and did bunch of others changes with make repo-consistency, make style, but when I run make fixup still it says this error\r\n`\r\npython utils/check_config_docstrings.py\r\nTraceback (most recent call last):\r\n File \"/home/susnato/temp_files/transformers/utils/check_config_docstrings.py\", line 89, in <module>\r\n check_config_docstrings_have_checkpoints()\r\n File \"/home/susnato/temp_files/transformers/utils/check_config_docstrings.py\", line 85, in check_config_docstrings_have_checkpoints\r\n raise ValueError(f\"The following configurations don't contain any valid checkpoint:\\n{message}\")\r\nValueError: The following configurations don't contain any valid checkpoint:\r\nErnieMConfig\r\n `\r\n\r\n\r\nThe values I set are - `ERNIE_M_PRETRAINED_CONFIG_ARCHIVE_MAP = {\r\n \"ernie-m-base_pytorch\": \"https://huggingface.co/susnato/ernie-m-base_pytorch/blob/main/config.json\",\r\n \"ernie-m-large_pytorch\": \"https://huggingface.co/susnato/ernie-m-large_pytorch/blob/main/config.json\",\r\n}`",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @younesbelkada all checks are successful! the PR is ready to review, please review it.",
"Hi, @younesbelkada I made all the changes that you requested,\r\nLet me know if any other changes are needed or not and if I have missed any!",
"Hi, @younesbelkada I made all the changes as you requested. The tests are now all successful! Please check it.",
"Hi @younesbelkada, I made all those changes that you requested.",
"Thanks a lot @susnato ! Again great work on the integration so far!\r\nPlease wait for the next reviewers to add their review and we should be good merging the PR ;) ",
"Hi @sgugger I made those changes as you requested and the tests are passed too, please review them.",
"Hi @sgugger please forgive me for the previous unaddressed changes, but now I have tried to make all changes that you addressed. Please check it and let me know if I need to make new changes or not.",
"Hi, @ArthurZucker I made the changes you said, added the modeling & config file to the `documentation_tests.txt` and also created a new file `tests/models/ernie_m/test_tokenization_ernie_m.py` for testing `ernie-m` tokenizer. \r\n\r\nAlso there is one thing I want to mention here regarding tokenizer is that, I saw some inconsistency in ErnieMTokenizer(paddle implementation) regarding how it treats white space, for example - \r\n\r\n`from paddlenlp.transformers import AutoTokenizer`\r\n`tokenizer = AutoTokenizer.from_pretrained(\"PaddlePaddle/ernie-m-base\", from_hf_hub=True)`\r\n`tokenizer.tokenize(\"The quick brown fox jumps over the lazy dog.\")`\r\n```\r\n['▁The', '▁quick', '▁brown', 'fox', '▁jump', 's', '▁over', '▁the', '▁la', 'zy', '▁dog', '.']\r\n```\r\n\r\nhere, despite \"brown fox\" being two different words, the tokenizer doesn't seperate them(by \"▁\") so when we decode them we get \r\n``\r\n[CLS] The quick brownfox jumps over the lazy dog.[SEP]\r\n``\r\n\r\nwhere there should be a space between brown and fox. This issue is not present in most of the words in vocab.\r\n\r\nI managed to solved this issue by inserting a \"▁\" (at line 215 of `src/transformers/models/ernie_m/tokenization_ernie_m.py`)when we see this condition, but this might lead to very slightly different word_embeddings at times(at most there will be this \"▁\" character in some sentences between words)! ",
"Hi @ArthurZucker there seems to be a problem with `tests_tf` of \r\n```\r\nFAILED tests/models/hubert/test_modeling_tf_hubert.py::TFHubertModelTest::test_dataset_conversion\r\n```\r\nwhich I think is unrelated to this PR, I rebased to `upstream/main` multiple times but still facing this same issue, could you please have a look at this issue? \r\n\r\nEDIT : this [PR](https://github.com/huggingface/transformers/pull/21606) states this very issue, I will rebase again after it is merged .",
"Hi @ArthurZucker since the `tests_tf` are failing as before, could you please review this code(the recent changes that I made)? It will be very helpful to me because, then I will be able to work on new suggestions/comments, otherwise I am stuck here until this check is fixed.\r\nI will definitely rebase and push again when the test is fixed but in meantime this PR will gain some progress. :)",
"Of course! I was waiting for you to ping me 😉 ",
"Hi @ArthurZucker I pushed the changes please check!",
"Okay! LGTM\r\n@sgugger feel free to merge if you think this is ok! 😉 "
] | 1,674
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Ports Ernie-M from paddle to huggingface(pytorch) and also Fixes #21123
I have uploaded the pytorch converted weights [here](https://huggingface.co/susnato/ernie-m-base_pytorch) and [here](https://huggingface.co/susnato/ernie-m-large_pytorch). The paddle2pytorch weights conversion script has been provided there too.
Work done till now -
1. ported the weights.
2. Added `configuration_ernie_m.py`
`from transformers import AutoConfig`
`config = AutoConfig.from_pretrained("susnato/ernie-m-base_pytorch")`)
3. Added `tokenization_ernie_m.py` (Only Slow Tokenizer implemented)
`from transformers import ErnieMTokenizer`
`tokenizer = ErnieMTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")`
4. ErnieMModel in now working.
`from transformers import AutoModel`
`model = AutoModel.from_pretrained("susnato/ernie-m-base_pytorch") # susnato/ernie-m-large_pytorch`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [link here](https://github.com/huggingface/transformers/issues/21123)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21349/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21349/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21349",
"html_url": "https://github.com/huggingface/transformers/pull/21349",
"diff_url": "https://github.com/huggingface/transformers/pull/21349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21349.patch",
"merged_at": 1676471097000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21348
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21348/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21348/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21348/events
|
https://github.com/huggingface/transformers/pull/21348
| 1,560,912,430
|
PR_kwDOCUB6oc5IvKnm
| 21,348
|
Template for framework-agnostic tests
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I might be missing something, but not clear to me how/where this is tested here.\r\n\r\n@amyeroberts I see, this example does not test numerical differences :) \r\n\r\nHowever, consider the tests [here](https://github.com/huggingface/transformers/blob/main/tests/generation/test_logits_process.py), which also exist for TF. They are primarily numerical tests, they test the resulting vector against a constant. It is not uncommon for a user to find an edge case on these processors, fix the PT side of it, and add a few more numerical checks. These new numerical checks would fail in TF, but I often can't convince the users to make the change on the TF side (and then fail to follow up myself after it is merged). Result: a bug that is trivial to fix in that moment gets lost 🙈 This unified framework would prevent it -- the fix would need to include the corresponding TF change to pass the tests (either fixed by the user or by one of us). PT == expected values == TF"
] | 1,674
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
There are a few cross-framework pain points I often encounter:
1. Ensuring the interface of `.generate()` or models stay consistent across frameworks;
2. Ensuring that TF doesn't get neglected and satisfies 1., when contributors add features/fixes on the PT side;
3. Blocking cases where the interface is the same, but there are numerical differences.
After a brief chat with @sgugger, I thought inheritable framework-agnostic tests could be nice to help with these problems. This week I'll have to write a few TF `.generate()` tests that already exist on the PT side, so this could be a great chance to kill 2 birds with one stone.
However, I'd like to get the pattern right, hence this small PR -- replaces a pair of framework-specific tests with a framework-agnostic test. `Flax` is intentionally left out, as it is missing many testable features and is not being maintained, but it can easily be added in the future.
Let me know what you think of it!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21348/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21348/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21348",
"html_url": "https://github.com/huggingface/transformers/pull/21348",
"diff_url": "https://github.com/huggingface/transformers/pull/21348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21348.patch",
"merged_at": 1675164798000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21347
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21347/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21347/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21347/events
|
https://github.com/huggingface/transformers/pull/21347
| 1,560,886,369
|
PR_kwDOCUB6oc5IvFZq
| 21,347
|
Generate: Relaxed `max_length` and `max_new_tokens` coexistence
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @lewtun -- thank you for raising the issue 🙏 ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
TL;DR: stops raising an exception in `.generate()` when `max_length` and `max_new_tokens` are both set -- `max_new_tokens` will take precedence.
Context: Some downstream uses of `.generate()`, for legacy reasons, set `max_length` (e.g. pipelines, API). If a user tries manually setting `max_new_tokens`, as suggested in the documentation, an exception is thrown (even if `max_length` is manually set to `None`).
Because `max_length` can be set outside the `GenerationConfig` and `.generate()`, it's hard to detect whether the `max_length` is intentionally set (and thus shouldn't be allowed together with `max_new_tokens`) or simply a helpful default. Raising an exception can thus block a correct usage of `max_new_tokens`. This PR relaxes this requirement, making `max_new_tokens` take precedence and raising an informative warning instead.
Fixes #21369
___________________________________________________________________
Example of failing code before this PR:
```py
import requests
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xl"
headers = {"Authorization": "Bearer hf_xxx"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query(
{"inputs": "The answer to the universe is", "parameters": {"max_new_tokens": 100}}
)
output
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21347/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21347/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21347",
"html_url": "https://github.com/huggingface/transformers/pull/21347",
"diff_url": "https://github.com/huggingface/transformers/pull/21347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21347.patch",
"merged_at": 1675101234000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21346
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21346/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21346/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21346/events
|
https://github.com/huggingface/transformers/issues/21346
| 1,560,664,386
|
I_kwDOCUB6oc5dBdlC
| 21,346
|
Stored XSS
|
{
"login": "Dark-Aura",
"id": 65353593,
"node_id": "MDQ6VXNlcjY1MzUzNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/65353593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dark-Aura",
"html_url": "https://github.com/Dark-Aura",
"followers_url": "https://api.github.com/users/Dark-Aura/followers",
"following_url": "https://api.github.com/users/Dark-Aura/following{/other_user}",
"gists_url": "https://api.github.com/users/Dark-Aura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dark-Aura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dark-Aura/subscriptions",
"organizations_url": "https://api.github.com/users/Dark-Aura/orgs",
"repos_url": "https://api.github.com/users/Dark-Aura/repos",
"events_url": "https://api.github.com/users/Dark-Aura/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dark-Aura/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @Dark-Aura, Thanks for reaching out to us! 🤗 We have a bug bounty program with HackerOne and would love for you to submit security vulnerability reports to https://hackerone.com/hugging_face. We'll need to send you an invite since this is a private program, so please feel free to send us an email at security@huggingface.co or let me know your H1 username. Please let us know if there are any questions. Thanks again!",
"Thanks for your reply, however, I had already visited this page and I'm getting a 404 error, for your perusal I'm attaching the screenshot please take a look at it\r\n\r\n",
"Hi @Dark-Aura,\r\n\r\nThanks for sending your H1 username to us via email; you should receive an invite to our bug bounty program soon. Please let us know if you run into any issues submitting reports!\r\n\r\nThanks again,\r\nMichelle"
] | 1,674
| 1,675
| 1,675
|
NONE
| null |
### System Info
Hi Team,
I've discovered a **stored cross-site scripting** vulnerability on your domain (https://transformer.huggingface.co/). Is your organization currently conducting a bug bounty program for this website? If so, kindly provide me with the appropriate information. Additionally, would it be possible for you to furnish me with the email contact of your security team?
Best Regards
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
null
### Expected behavior
null
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21346/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21345
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21345/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21345/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21345/events
|
https://github.com/huggingface/transformers/pull/21345
| 1,560,538,878
|
PR_kwDOCUB6oc5It-yf
| 21,345
|
Add the GeLU activation from pytorch with the tanh approximation
|
{
"login": "jlamypoirier",
"id": 18523627,
"node_id": "MDQ6VXNlcjE4NTIzNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/18523627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlamypoirier",
"html_url": "https://github.com/jlamypoirier",
"followers_url": "https://api.github.com/users/jlamypoirier/followers",
"following_url": "https://api.github.com/users/jlamypoirier/following{/other_user}",
"gists_url": "https://api.github.com/users/jlamypoirier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlamypoirier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlamypoirier/subscriptions",
"organizations_url": "https://api.github.com/users/jlamypoirier/orgs",
"repos_url": "https://api.github.com/users/jlamypoirier/repos",
"events_url": "https://api.github.com/users/jlamypoirier/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlamypoirier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for working on this! Does the new implementation in Pytorch produce the exact same results as `gelu_fast`? If that is the case, I would prefer we just replace the current `gelu_fast` with this when PyTorch is 1.12 or above.",
"> Thanks for working on this! Does the new implementation in Pytorch produce the exact same results as `gelu_fast`? If that is the case, I would prefer we just replace the current `gelu_fast` with this when PyTorch is 1.12 or above.\r\n\r\nThe results are similar but there are still rounding errors, see my analysis in the related issue #21344. I would also be in favor of replacing the existing implementation / using it as default, but I would introduce small numerical differences in some models, is that a problem?",
"Ah yes, the difference is quite significant sadly, so this will probably introduce a difference that is too big :-/\r\nSo let's go with a new activation. Maybe `gelu_pytorch` is a better name?",
"> Ah yes, the difference is quite significant sadly, so this will probably introduce a difference that is too big :-/ So let's go with a new activation. Maybe `gelu_pytorch` is a better name?\r\n\r\nWouldn't it cause confusion with the default pytorch implementation? That one is currently named \"gelu\". (And the one named \"gelu_python\").\r\n\r\nAlso should I add an explicit pytorch version check?\r\n",
"Ok for the name then. For the version check, you will need to create a function that returns the instance of GELU and issues an import error if the PyTorch version is too low, then put that function in the mappinh.",
"> Ok for the name then. For the version check, you will need to create a function that returns the instance of GELU and issues an import error if the PyTorch version is too low, then put that function in the mappinh.\r\n\r\nMade a class to match the other activations, and raising a `NotImplementedError` (I don't think an `ImportError` is the best here since the function exists in earlier versions.) Also added to `test_get_activation`.",
"Failure is unrelated so merging. Thanks again for your contribution!"
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
Fixes #21344. See that issue for more details.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21345/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21345",
"html_url": "https://github.com/huggingface/transformers/pull/21345",
"diff_url": "https://github.com/huggingface/transformers/pull/21345.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21345.patch",
"merged_at": 1675348385000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21344
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21344/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21344/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21344/events
|
https://github.com/huggingface/transformers/issues/21344
| 1,560,513,935
|
I_kwDOCUB6oc5dA42P
| 21,344
|
Add the pytorch implementation of the OpenAI GeLU approximation
|
{
"login": "jlamypoirier",
"id": 18523627,
"node_id": "MDQ6VXNlcjE4NTIzNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/18523627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlamypoirier",
"html_url": "https://github.com/jlamypoirier",
"followers_url": "https://api.github.com/users/jlamypoirier/followers",
"following_url": "https://api.github.com/users/jlamypoirier/following{/other_user}",
"gists_url": "https://api.github.com/users/jlamypoirier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlamypoirier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlamypoirier/subscriptions",
"organizations_url": "https://api.github.com/users/jlamypoirier/orgs",
"repos_url": "https://api.github.com/users/jlamypoirier/repos",
"events_url": "https://api.github.com/users/jlamypoirier/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlamypoirier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
### Feature request
Add support for the pytorch implementation of OpenAI's approximation of the GeLU function, added in pytorch 1.12. This implementation is equivalent to `gelu_new` or `gelu_fast` but much faster. It can come as a separate activation function, for example `gelu_new_python`, to avoid distrupting existing models.
### Motivation
Many transformer models use OpenAI's approximation (tanh) for the GeLU, through the activation function `gelu_new` or `gelu_fast`. These implementations are extremely slow (despite their name) because they consist of multiple operations/kernels (8 and 9 respectively).
Since version 1.12, pytorch supports a single-kernel, C/cuda implementation through the argument `approximate='tanh'` ( https://pytorch.org/docs/stable/generated/torch.nn.GELU.html). This implementation is 6-10x faster than what currently exists in transformers, and is numerically equal up to rounding errors.
When benchmarking the inference speed of the [SantaCoder models](https://huggingface.co/bigcode/santacoder), I found that using the pytorch implementation allowed for an end-to-end speedup of ~15-20%.
I also benchmarked the speed and accuracy using the following code (on a A100-80GB):
```
import time
import torch
from transformers.activations import NewGELUActivation, FastGELUActivation
dtype=torch.float32
eps=torch.finfo(dtype).eps
x=torch.empty([2**30], device="cuda", dtype=dtype).normal_()
torch.cuda.synchronize()
t0=time.perf_counter()
y0=torch.nn.functional.gelu(x, approximate="tanh")
torch.cuda.synchronize()
t1=time.perf_counter()
y1=NewGELUActivation()(x)
torch.cuda.synchronize()
t2=time.perf_counter()
y2=FastGELUActivation()(x)
torch.cuda.synchronize()
t3=time.perf_counter()
y3=torch.nn.functional.gelu(x)
torch.cuda.synchronize()
t4=time.perf_counter()
print(f"Torch tanh: {1000*(t1-t0):.3f} ms")
print(f"New: {1000*(t2-t1):.3f} ms")
print(f"Fast: {1000*(t3-t2):.3f} ms")
print(f"Torch orig: {1000*(t4-t3):.3f} ms")
print(f"Torch tanh vs new: {(y1-y0).float().std().cpu().item()/eps:.3f}")
print(f"Torch tanh vs fast: {(y2-y0).float().std().cpu().item()/eps:.3f}")
print(f"New vs fast: {(y2-y1).float().std().cpu().item()/eps:.3f}")
print(f"Torch tanh vs torch orig: {(y3-y0).float().std().cpu().item()/eps:.3f}")
```
With output
```
Torch tanh: 4.921 ms
New: 43.253 ms
Fast: 50.269 ms
Torch orig: 4.989 ms
Torch tanh vs new: 0.042
Torch tanh vs fast: 0.147
New vs fast: 0.147
Torch tanh vs torch orig: 971.960
```
I.e., the tanh version of torch matches the fast and new gelu within epsilon while being 8.8x/10.2x faster, but is different from the original version
With dtype=torch.float16:
```
Torch tanh: 3.342 ms
New: 22.667 ms
Fast: 26.104 ms
Torch orig: 3.395 ms
Torch tanh vs new: 0.244
Torch tanh vs fast: 0.243
New vs fast: 0.143
Torch tanh vs torch orig: 0.216
```
I.e., it's 6.8x/7.8x faster, and the implementation doesn't matters because rounding errors dominate.
On cpu (float32), size 2**28 (268M):
```
Torch tanh: 182.575 ms
New: 1683.934 ms
Fast: 1925.547 ms
Torch orig: 141.410 ms
Torch tanh vs new: 0.043
Torch tanh vs fast: 0.144
New vs fast: 0.144
Torch tanh vs torch orig: 971.852
```
I.e., same accuracy and speedup (9.2x/10.5x faster)
### Your contribution
Opened a draft PR (#21345)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21344/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21343
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21343/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21343/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21343/events
|
https://github.com/huggingface/transformers/pull/21343
| 1,560,501,811
|
PR_kwDOCUB6oc5It2p_
| 21,343
|
[`run_(clm|mlm).py` examples] add streaming dataset support
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Reworked w/o `kwargs` and copying the code where needed, please let me know when it's good for you and I can replicate to mlm",
"should I figure out when streaming was added and put a assert if an earlier datasets is used?",
"Good point. Streaming was introduced a while ago, but I think it stabilized with the 2.0 version, so maybe use this one as a minimal requirement for streaming?",
"- added version check\r\n- ported to mlm\r\n- added a doc entry\r\n\r\nThis is good to go for a final review, Sylvain. Thank you.",
"And thank you for reviewing my work, Sylvain!"
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
This PR adds streaming dataset support. It's fine if it remains a PR for those who might need it. if it's to be merged should probably check when streaming was added to `datasets` and require that version.
1. API-wise everything is as before but need to pass `--streaming` to load the dataset in a streaming mode.
2. and also since `IterableDataset` has no `__len__` this makes the `--max_steps` flag required (possibly can make this more clear by asserting earlier if `--streaming` and `not --max_steps`)
This should make a huge speed up to start working with large datasets. So that work can start immediately and the data gets loaded progressively (which overall is likely to be slower - haven't measured yet - but it makes starting much easier).
Example run:
```
python examples/pytorch/language-modeling/run_clm.py --bf16 --seed 42 \
--model_name_or_path facebook/opt-1.3b --dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 --per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 --gradient_accumulation_steps 1 --do_train \
--do_eval --logging_steps 10 --save_steps 1000 --eval_steps 100 --weight_decay \
0.1 --num_train_epochs 1 --adam_beta1 0.9 --adam_beta2 0.95 --learning_rate \
0.0002 --lr_scheduler_type linear --warmup_steps 500 --report_to tensorboard \
--output_dir save_dir --max_steps 1_000_000 --streaming
```
Ported this to `run_mlm.py` as well.
Thank you, @lhoestq for helping with this
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21343/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21343",
"html_url": "https://github.com/huggingface/transformers/pull/21343",
"diff_url": "https://github.com/huggingface/transformers/pull/21343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21343.patch",
"merged_at": 1675116096000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21342
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21342/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21342/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21342/events
|
https://github.com/huggingface/transformers/issues/21342
| 1,560,352,428
|
I_kwDOCUB6oc5dARas
| 21,342
|
Errors while training apple/mobilevit-xx-small on image-classification example with and without deepspeed
|
{
"login": "prathikr",
"id": 31260940,
"node_id": "MDQ6VXNlcjMxMjYwOTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/31260940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prathikr",
"html_url": "https://github.com/prathikr",
"followers_url": "https://api.github.com/users/prathikr/followers",
"following_url": "https://api.github.com/users/prathikr/following{/other_user}",
"gists_url": "https://api.github.com/users/prathikr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prathikr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prathikr/subscriptions",
"organizations_url": "https://api.github.com/users/prathikr/orgs",
"repos_url": "https://api.github.com/users/prathikr/repos",
"events_url": "https://api.github.com/users/prathikr/events{/privacy}",
"received_events_url": "https://api.github.com/users/prathikr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @amyeroberts and @alaradirik ",
"See #21221 - the example scripts aren't meant to work out-of-the-box for any model, as MobileViT for instance doesn't normalizes the images with a mean and std, so one needs to comment out the normalization line.",
"@NielsRogge thank you, I can confirm that commenting out the normalization line resolves this issue. How about the DeepSpeed compatibility issue? @JingyaHuang mentioned to me that MobileViT has more of a CNN architecture rather than transformer so it may not work at all with DeepSpeed. Is this accurate? Or is there another workaround you can provide? Thank you in advance.",
"Hi @prathikr, as your previous assumption of the issue's root was `hidden_sizes`, I suspected it could come from the lack of [support for variable hidden_size in ONNX Runtime's graph optimization](https://github.com/microsoft/onnxruntime/blob/81120e9e8b377567daa00d55614c902f35b2ae8f/onnxruntime/python/tools/transformers/optimizer.py#L145) other than a problem from the DeepSpeed(but it could be, I am not aware of this).",
"@JingyaHuang I don't think so because this issue arises when running without ONNX Runtime as well",
"@NielsRogge @JingyaHuang any updates on the deepspeed issue for MobileViT? ",
"What's the reason you'd like to use Deepspeed?\r\n\r\nAlso, please provide a full stacktrace",
"@NielsRogge, most Microsoft internal training pipelines including AzureML leverage DeepSpeed since it provides better training speed and smaller memory footprint. When we evaluate any Hugging Face models, we always try to integrate both ORT and DeepSpeed to maximize training speed.",
"Traceback (most recent call last):\r\n File \"/home/prathikrao/transformers/examples/pytorch/image-classification/run_image_classification.py\", line 392, in <module>\r\n main()\r\n File \"/home/prathikrao/transformers/examples/pytorch/image-classification/run_image_classification.py\", line 366, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/prathikrao/transformers/src/transformers/trainer.py\", line 1547, in train\r\n return inner_training_loop(\r\n File \"/home/prathikrao/transformers/src/transformers/trainer.py\", line 1616, in _inner_training_loop\r\n deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(\r\n File \"/home/prathikrao/transformers/src/transformers/deepspeed.py\", line 312, in deepspeed_init\r\n hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps)\r\n File \"/home/prathikrao/transformers/src/transformers/deepspeed.py\", line 174, in trainer_config_finalize\r\n hidden_size = model.config.hidden_size\r\n File \"/home/prathikrao/transformers/src/transformers/configuration_utils.py\", line 260, in __getattribute__\r\n return super().__getattribute__(key)\r\nAttributeError: 'MobileViTConfig' object has no attribute 'hidden_size'",
"@NielsRogge above is the full stacktrace. I believe I'm seeing this because MobileViT has an attribute named [hidden_sizes](https://github.com/huggingface/transformers/blob/7119bb052a3f492b9af3afe4f3f13132445eba6e/src/transformers/models/mobilevit/configuration_mobilevit.py#L63) which is a list (different from ViT, for example, which has [hidden_size](https://github.com/huggingface/transformers/blob/7119bb052a3f492b9af3afe4f3f13132445eba6e/src/transformers/models/vit/configuration_vit.py#L47) as an int). Not sure if this is an architectural difference that would make Deepspeed incompatible or if there is some workaround for this.",
"I'll cc @stas00 here as he's our Deepspeed expert. Several vision models indeed have a `hidden_sizes` attribute in their config (as a list of integers), as vision models often consist of several stages, each stage having its own dimensionality (unlike models like ViT which use the same hidden size for each Transformer block).",
"Yes, of course, I can help here. Thank you for pinging me, @NielsRogge \r\n\r\nOK, here is what's happening. The issue isn't of deepspeed but of its integration into transformers.\r\n\r\nHere we pull `model.config.hidden_size` to create the most efficient for the model setup \r\n\r\nhttps://github.com/huggingface/transformers/blob/5b67ab9924cf7587b39b59eb0bf0abd3d099e8b9/src/transformers/deepspeed.py#L174-L175\r\n\r\nnow you're saying not all models have it.\r\n\r\nSo here are some suggestions at how to overcome this problem, while keeping the optimization as close as possible to the best.\r\n\r\n1. check if the model has `config.hidden_size` and if it doesn't and we have these 2 settings in the incoming ds_config:\r\n```\r\n zero_optimization.stage3_prefetch_bucket_size = \"auto\"\r\n zero_optimization.stage3_param_persistence_threshold = \"auto\"\r\n```\r\nassert about it and then the user can replace `auto` with the value they think works the best and run again.\r\n\r\n2. check if the model has `config.hidden_size` and if it doesn't and we have these 2 settings in the incoming ds_config:\r\n```\r\n zero_optimization.stage3_prefetch_bucket_size = \"auto\"\r\n zero_optimization.stage3_param_persistence_threshold = \"auto\"\r\n```\r\ncheck that it has `hidden_sizes` and use the one with the highest value and use that instead of `config.hidden_size` \r\n\r\nShould we try option #2?",
"Please try this PR and let me know if it fixes the problem https://github.com/huggingface/transformers/pull/21504\r\n\r\nThank you!\r\n\r\np.s. I'm making an assumption that the largest hidden size is the most optimal choice, I could be wrong here. ",
"Thank you @stas00, I can confirm this solves the issue. Not sure about the performance but it at least runs with this fix.",
"Thanks a lot for testing, @prathikr. Will merge this asap."
] | 1,674
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
### System Info
- transformers installed from source
- python 3.8
- ZeRO-Stage-1
### Who can help?
@amyeroberts @NielsRogge @JingyaHuang
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m torch.distributed.launch --nproc_per_node=8 ~/transformers/examples/pytorch/image-classification/run_image_classification.py --model_name_or_path apple/mobilevit-xx-small --dataset_name beans --overwrite_output_dir --output_dir ./outputs/ --remove_unused_columns False --do_train --do_eval --learning_rate 2e-5 --num_train_epochs 50 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --seed 1337 --fp16 True --report_to none --ignore_mismatched_sizes True
AttributeError: 'MobileViTImageProcessor' object has no attribute 'image_mean'
python -m torch.distributed.launch --nproc_per_node=8 ~/transformers/examples/pytorch/image-classification/run_image_classification.py --model_name_or_path apple/mobilevit-xx-small --dataset_name beans --overwrite_output_dir --output_dir ./outputs/ --remove_unused_columns False --do_train --do_eval --learning_rate 2e-5 --num_train_epochs 50 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --seed 1337 --fp16 True --report_to none --ignore_mismatched_sizes True --deepspeed ~/zero_stage_1.json
AttributeError: 'MobileViTConfig' object has no attribute 'hidden_size'
### Expected behavior
I expect both examples to train with the deepspeed-enabled run completing faster than baseline. Currently, both scenarios error out. Thank you in advance for the assistance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21342/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21342/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21341
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21341/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21341/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21341/events
|
https://github.com/huggingface/transformers/pull/21341
| 1,560,098,532
|
PR_kwDOCUB6oc5Isd4a
| 21,341
|
Generate: TF `compute_transition_scores`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patrickvonplaten reverted the change in the order of operations in beam sample, and added a comment to based on our offline conversation.\r\n\r\nLMK if you're happy with the PR 🙏 "
] | 1,674
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
This PR adds the TF `compute_transition_scores`, akin to PT's #21191.
What seemingly started off as a simple task, ended up being a complex task -- the TF side had many missing and/or incorrect secondary `generate` outputs 😬 This means that we need to beef up the TF side of `generate` tests, which is much shorter than its PT counterpart.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21341/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21341",
"html_url": "https://github.com/huggingface/transformers/pull/21341",
"diff_url": "https://github.com/huggingface/transformers/pull/21341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21341.patch",
"merged_at": 1675874204000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21340
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21340/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21340/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21340/events
|
https://github.com/huggingface/transformers/pull/21340
| 1,559,845,571
|
PR_kwDOCUB6oc5Irm-x
| 21,340
|
Nystromformer ONNX export
|
{
"login": "whr778",
"id": 5939523,
"node_id": "MDQ6VXNlcjU5Mzk1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5939523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whr778",
"html_url": "https://github.com/whr778",
"followers_url": "https://api.github.com/users/whr778/followers",
"following_url": "https://api.github.com/users/whr778/following{/other_user}",
"gists_url": "https://api.github.com/users/whr778/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whr778/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whr778/subscriptions",
"organizations_url": "https://api.github.com/users/whr778/orgs",
"repos_url": "https://api.github.com/users/whr778/repos",
"events_url": "https://api.github.com/users/whr778/events{/privacy}",
"received_events_url": "https://api.github.com/users/whr778/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi there. The ONNX integration has moved to the [optimum library](https://github.com/huggingface/optimum) so you should open your pull request there :-)",
"Hi, okay. Should I delete this PR or fix the error and submit another PR to optimum??",
"Yes, you can cloes this PR. We don' accept new PRs as all the support is in optimum now.",
"Moving code to optimum"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
# This PR implements ONNX export functionality for Nystromformer models.
In addition to running the test cases, I exported a locally built Nystromformer 2048 input sequence model to ONNX and ran prediction.
I verified the prediction output.
Fixes # (https://github.com/huggingface/transformers/issues/21339)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21340/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21340",
"html_url": "https://github.com/huggingface/transformers/pull/21340",
"diff_url": "https://github.com/huggingface/transformers/pull/21340.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21340.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21339
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21339/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21339/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21339/events
|
https://github.com/huggingface/transformers/issues/21339
| 1,559,844,285
|
I_kwDOCUB6oc5c-VW9
| 21,339
|
Add Nystromformer support to ONNX export
|
{
"login": "whr778",
"id": 5939523,
"node_id": "MDQ6VXNlcjU5Mzk1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5939523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whr778",
"html_url": "https://github.com/whr778",
"followers_url": "https://api.github.com/users/whr778/followers",
"following_url": "https://api.github.com/users/whr778/following{/other_user}",
"gists_url": "https://api.github.com/users/whr778/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whr778/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whr778/subscriptions",
"organizations_url": "https://api.github.com/users/whr778/orgs",
"repos_url": "https://api.github.com/users/whr778/repos",
"events_url": "https://api.github.com/users/whr778/events{/privacy}",
"received_events_url": "https://api.github.com/users/whr778/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Moving code to optimum"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### Feature request
Add Nystromformer support to ONNX export
### Motivation
Nystromformer models are very compute inexpensive long sequence models which perform exceptionally well.
### Your contribution
I will be submitting a PR (code and docs are complete) PR is in progress.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21339/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21338
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21338/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21338/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21338/events
|
https://github.com/huggingface/transformers/pull/21338
| 1,559,821,683
|
PR_kwDOCUB6oc5Irhu7
| 21,338
|
Automated compatible models list for task guides
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,676
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a script that aggregates model architectures compatible with a task illustrated in a task guide and adds a list of links to them in a <Tip> in the guide. This serves several purposes:
1. Reinforces the idea that task guides are applicable to more than one model architecture.
2. Improves discoverability of models.
3. Serves as the first step to improve navigation between task guides and model docs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21338/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21338",
"html_url": "https://github.com/huggingface/transformers/pull/21338",
"diff_url": "https://github.com/huggingface/transformers/pull/21338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21338.patch",
"merged_at": 1674843569000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21337
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21337/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21337/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21337/events
|
https://github.com/huggingface/transformers/pull/21337
| 1,559,791,937
|
PR_kwDOCUB6oc5IrbRI
| 21,337
|
Fix `RobertaPreLayerNorm` doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Fix `RobertaPreLayerNorm` doctest. The doctest should have 0 failure against this commit 🔥
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21337/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21337",
"html_url": "https://github.com/huggingface/transformers/pull/21337",
"diff_url": "https://github.com/huggingface/transformers/pull/21337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21337.patch",
"merged_at": 1674832825000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21336
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21336/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21336/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21336/events
|
https://github.com/huggingface/transformers/pull/21336
| 1,559,602,486
|
PR_kwDOCUB6oc5Iqxwf
| 21,336
|
Cleanup the usage of `layer_norm_eps` in some models
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @NielsRogge Just want to hear from you to make sure we are good for this change 🙏 before I continue.\r\n\r\nI can definitely just add these config classes to a list of edge cases in my (WIP) tests. But I feel it's better to clean them up, so future models won't copy/paste the same code, and we accumulate more and more edge cases to skip in the tests."
] | 1,674
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
**(No breaking change in this PR)**
**(So far I only change `OneFormerConfig`, but I will update other config classes whose default `layer_norm_eps = 1e-5`)**
Fix the missing usage of `config.layer_norm_eps` in some pytorch models: **if the default value in the config class is `1e-05`** (i.e. the same as the default value in `nn.LayerNorm`. In this case, we don't break the current behavior, and **make the (WIP) test dealing with much fewer edge cases** (which is always a bit of burden).
**In #20699, my claim regarding the breaking change was a bit misleading**: only those config classes having default `layer_norm_eps` not being `1e-5` (for example, `LxmertConfig`) will have breaking change - which I don't do anything in this PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21336/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21336",
"html_url": "https://github.com/huggingface/transformers/pull/21336",
"diff_url": "https://github.com/huggingface/transformers/pull/21336.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21336.patch",
"merged_at": 1675169657000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21335
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21335/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21335/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21335/events
|
https://github.com/huggingface/transformers/issues/21335
| 1,559,576,324
|
I_kwDOCUB6oc5c9T8E
| 21,335
|
TypeError: _forward_unimplemented() got an unexpected keyword argument 'input_ids'
|
{
"login": "QuantumStatic",
"id": 67118602,
"node_id": "MDQ6VXNlcjY3MTE4NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/67118602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QuantumStatic",
"html_url": "https://github.com/QuantumStatic",
"followers_url": "https://api.github.com/users/QuantumStatic/followers",
"following_url": "https://api.github.com/users/QuantumStatic/following{/other_user}",
"gists_url": "https://api.github.com/users/QuantumStatic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QuantumStatic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QuantumStatic/subscriptions",
"organizations_url": "https://api.github.com/users/QuantumStatic/orgs",
"repos_url": "https://api.github.com/users/QuantumStatic/repos",
"events_url": "https://api.github.com/users/QuantumStatic/events{/privacy}",
"received_events_url": "https://api.github.com/users/QuantumStatic/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`DistilBertPreTrainedModel` is an abstract class and shouldn't be used directly. Maybe you wanted to use `DistilBertModel` or `DistilBertForPretraining`?",
"Thank you for your quick response. It was my silly mistake to use an abstract class for pre-training. I was able to import `DistilBertModel`, however the import for `DistilBertForPretraining` failed, but that's alright.\r\n\r\nHowever when I try to run the model now I get the following error.\r\n\r\n```\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n```\r\n\r\nI have followed the webpage titled [Fine-tuning with custom datasets](https://huggingface.co/transformers/v3.4.0/custom_datasets.html). My function that creates the initial lists with texts and labels is below. The data is formatted very similarly to the webpage:\r\n```python\r\ndef create_MBIC_data_dict() -> dict[str, str]:\r\n data_dict = {'text': [], 'label':[]}\r\n with open(f\"{DATA_FOLDER_PATH}/final_labels_MBIC_new.csv\") as csv_file:\r\n csv_reader = csv.reader(csv_file)\r\n line_count = 0\r\n for row in csv_reader:\r\n if line_count != 0:\r\n data_dict['text'].append(row[0])\r\n label_val = -1\r\n match row[7]:\r\n case \"Biased\":\r\n label_val = 1\r\n case \"Non-biased\":\r\n label_val = 0\r\n case \"No agreement\":\r\n label_val = 2\r\n data_dict['label'].append(label_val)\r\n line_count += 1\r\n\r\n return data_dict\r\n```\r\nAfterwards the `create_hugging_face_dataset` function executes which creates the dataset. \r\n\r\n@sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,690
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
### Who can help?
@ArthurZucker and @younesbelkada since I am using `distilbert-base-uncased`<br>(and maybe @sgugger, since I am following this [link](https://huggingface.co/transformers/v3.2.0/custom_datasets.html)) on the hugging face website
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using a custom dataset to fine tune `distilbert-base-uncased`. I followed the method described [on the hugging face wesbite](https://huggingface.co/transformers/v3.2.0/custom_datasets.html) to the T. Here is my code for making the dataset.
```python
def create_hugging_face_dataset(data:dict):
train_text, test_text, train_label, test_label = train_test_split(data['text'], data['label'], test_size=0.1, shuffle=True)
train_text, validation_text, train_label, validation_label = train_test_split(train_text, train_label, test_size=0.1, shuffle=True)
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_text, truncation=True, padding=True)
test_encodings = tokenizer(test_text, truncation=True, padding=True)
validation_encodings = tokenizer(validation_text, truncation=True, padding=True)
class MBICDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.Tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.Tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_ds = MBICDataset(train_encodings, train_label)
test_ds = MBICDataset(test_encodings, test_label)
validation_ds = MBICDataset(validation_encodings, validation_label)
FINAL_DS = {"train":train_ds, "test":test_ds, "validation":validation_ds}
```
After making the dataset I try to fine-tune the model using the following code.
```python
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
training_stuff = {
"batch_size": 64,
"epochs": 4,
"learning_rate": 1e-5,
"weight_decay": 0.01
}
training_args = TrainingArguments(
output_dir="C:/Users/uujain2/Desktop/Utkarsh/FYP/Models/DistilBert",
per_device_train_batch_size=training_stuff["batch_size"],
evaluation_strategy="steps",
num_train_epochs=training_stuff["epochs"],
fp16=True,
save_steps=100,
eval_steps=50,
logging_steps=10,
weight_decay=training_stuff["weight_decay"],
learning_rate=training_stuff["learning_rate"],
save_total_limit=64,
remove_unused_columns=False,
push_to_hub=False,
report_to='tensorboard',
load_best_model_at_end=True,
)
model = DistilBertPreTrainedModel.from_pretrained(
'distilbert-base-uncased',
num_labels=3,
id2label={0: 'Biased', 1: 'Non-biased', 2: 'No agreeemnt'},
label2id={'Biased': 0, 'Non-biased': 1, 'No agreement': 2},
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=FINAL_DS['train'],
eval_dataset=FINAL_DS['validation'],
tokenizer=tokenizer,
)
train_results = trainer.train()
```
However, I run into the following error.
```
Traceback (most recent call last):
File "c:\Users\uujain2\Desktop\Utkarsh\FYP\Code\test.py", line 68, in <module>
train_results = trainer.train()
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 1501, in train
return inner_training_loop(
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 1749, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 2508, in training_step
loss = self.compute_loss(model, inputs)
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\trainer.py", line 2540, in compute_loss
outputs = model(**inputs)
File "C:\Users\uujain2\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
TypeError: _forward_unimplemented() got an unexpected keyword argument 'input_ids'
```
### Expected behavior
I expect the model to start the finetuning process instead of throwing this error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21335/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21334
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21334/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21334/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21334/events
|
https://github.com/huggingface/transformers/pull/21334
| 1,559,527,973
|
PR_kwDOCUB6oc5IqhiX
| 21,334
|
Tf timestamps whisper + update generate support
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21334). All of your documentation changes will be reflected on that endpoint.",
"Awesome thanks for the review 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"lmk when you want to pick this up again :P Meanwhile, shall we add the WIP label, so that the bot doesn't ping us?",
"yes! Hahah sorry, maybe next week or 2 weeks from now !",
"Okay! Thanks to @gante's recommendations, the xla generation works perfectly! The slow timestamp processing test also passes 🥳 ",
"Thanks for your review, will adresse all of this ",
"@ArthurZucker I was testing out if I get the timestamps with TF model with your ```tf-timestamps-whisper``` branch on colab but I see this:\r\n```\r\n[/content/transformers/src/transformers/models/whisper/tokenization_whisper.py](https://localhost:8080/#) in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, output_offsets, time_precision, decode_with_timestamps, **kwargs)\r\n 593 )\r\n 594 if decode_with_timestamps:\r\n--> 595 text = self._decode_with_timestamps(token_ids, time_precision=time_precision)\r\n 596 # retrieve offsets\r\n 597 if output_offsets:\r\n\r\n[/content/transformers/src/transformers/models/whisper/tokenization_whisper.py](https://localhost:8080/#) in _decode_with_timestamps(self, token_ids, time_precision)\r\n 501 for token in token_ids:\r\n 502 if token >= timestamp_begin:\r\n--> 503 timestamp = f\"<|{(token - timestamp_begin) * time_precision:.2f}|>\"\r\n 504 outputs.append(timestamp)\r\n 505 outputs.append([])\r\n\r\n[/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)\r\n 151 except Exception as e:\r\n 152 filtered_tb = _process_traceback_frames(e.__traceback__)\r\n--> 153 raise e.with_traceback(filtered_tb) from None\r\n 154 finally:\r\n 155 del filtered_tb\r\n\r\n[/usr/local/lib/python3.10/dist-packages/tensorflow/python/ops/gen_math_ops.py](https://localhost:8080/#) in mul(x, y, name)\r\n 6574 if tld.is_eager:\r\n 6575 try:\r\n-> 6576 _result = pywrap_tfe.TFE_Py_FastPathExecute(\r\n 6577 _ctx, \"Mul\", name, x, y)\r\n 6578 return _result\r\n\r\nTypeError: Cannot convert 0.02 to EagerTensor of dtype int32\r\n```\r\n\r\n\r\n",
"Hey! That’s probably because I haven’t pull from main for a while and we changed the whisper tokenizer. As you can see the decoding process is the one failing here ",
"@ArthurZucker Thanks for the response. I got the issue resolved with \r\n```\r\ntimestamp = f\"<|{float(token - timestamp_begin) * time_precision:.2f}|>\"\r\n```\r\ni.e. changing ```token - timestamp_begin``` to ```float(token - timestamp_begin)```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,691
| 1,691
|
COLLABORATOR
| null |
# What does this PR
This PR updates the way we generation TF and FLAX to fix the breaking changes that we had.
It also adds support for the timestamps in `TF`.
Follows #21965
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21334/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21334/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21334",
"html_url": "https://github.com/huggingface/transformers/pull/21334",
"diff_url": "https://github.com/huggingface/transformers/pull/21334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21334.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21333
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21333/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21333/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21333/events
|
https://github.com/huggingface/transformers/pull/21333
| 1,559,471,102
|
PR_kwDOCUB6oc5IqVLP
| 21,333
|
Little cleanup: let huggingface_hub manage token retrieval
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes that would be cool if a Great Depreciation is initiated! :) I can help if needed.\r\n\r\nIt seems the tests are failing with some `ModuleNotFoundError: No module named 'transformers_modules.local'` errors. Is it related to the PR? Doesn't seem to be but since it's affecting tests named `test_from_pretrained_***`, I'm asking.\r\nIn any case, if the PR looks good to you, can I let you merge it?",
"Those are flaky tests I need to fix :-) Merging!"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Since [`huggingface_hub==0.11.0` release](https://github.com/huggingface/huggingface_hub/releases/tag/v0.11.0), `hfh` always send the stored token when making requests to the hub, unless explicitly told no (`use_auth_token=False`). Before that, `transformers` had implemented some workarounds to retrieve the cached token when `token=None` is provided by the user. This PR aims to remove those workarounds since `hfh` automatically does this part.
Note: in case of the `PushToHubMixin` class, I changed some the arguments/return value of a private methods (no need to return a token anymore). I thought it would be ok as the method is private but please let me know if you prefer that I revert this part.
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21333/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21333",
"html_url": "https://github.com/huggingface/transformers/pull/21333",
"diff_url": "https://github.com/huggingface/transformers/pull/21333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21333.patch",
"merged_at": 1674839390000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21332
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21332/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21332/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21332/events
|
https://github.com/huggingface/transformers/pull/21332
| 1,559,421,203
|
PR_kwDOCUB6oc5IqKbx
| 21,332
|
Add variant to transformers
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> pytorch_model.{variant}.bin sounds better to me, to keep the file-extension (not so important for .bin, but more important for .h5, .safetensors or any other format)\r\n\r\nEven for `.bin` files, I'd say it's good to keep the file extension as it does not break the LFS property for existing `.gitattributes` files (see [huggingface/the-no-branch-repo](https://huggingface.co/huggingface/the-no-branch-repo/tree/main/text_encoder) where bin files are uploaded as regular).",
"Failing test is unrelated. Think this PR is good for merge. \r\n\r\n@wauplin @julien-c good for you? \r\n\r\nThe resulting folder structure now looks as described in the PR statement: https://github.com/huggingface/transformers/pull/21332#issue-1559421203",
"Thanks for the reviews! Merging",
"cc @sgugger would it be possible to add this feature to `push_to_hub` as well?\r\n\r\nI'd like to use it for BLIP-2. For the moment it seems the only way to do this is calling `save_pretrained(\"...\", variant=\"fp16\")` and then manually upload the PyTorch checkpoint to the model repo",
"Happy to review a PR."
] | 1,674
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
This PR adds a `"variant"` keyword argument to PyTorch's `from_pretrained` and `save_pretrained` so that multiple weight variants can be saved in the model repo.
You can try it out by running:
```python
from transformers import CLIPTextModel
path = "huggingface/the-no-branch-repo" # or ./text_encoder if local
print("This should work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder", variant="no_ema")
print("This should work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder", variant="fp16")
print("This should work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder")
print("This should NOT work!:")
model = CLIPTextModel.from_pretrained(path, subfolder="text_encoder", variant="other")
```
From this repo: https://huggingface.co/huggingface/the-no-branch-repo/tree/main/text_encoder . The repo is a dummy stable diffusion model and folder structure looks as follows:
```
├── feature_extractor
│ └── preprocessor_config.json
├── load.py
├── model_index.json
├── safety_checker
│ ├── config.json
│ └── pytorch_model.bin
├── save.py
├── scheduler
│ └── scheduler_config.json
├── text_encoder
│ ├── config.json
│ ├── pytorch_model.bin
│ ├── pytorch_model.fp16.bin
│ └── pytorch_model.no_ema.bin
├── tokenizer
│ ├── merges.txt
│ ├── special_tokens_map.json
│ ├── tokenizer_config.json
│ └── vocab.json
├── unet
│ ├── config.json
│ └── diffusion_pytorch_model.bin
└── vae
├── config.json
└── diffusion_pytorch_model.bin
```
cc @pcuenca @patil-suraj @sgugger @LysandreJik @julien-c @osanseviero
**[Update] This PR should be ready for merge**
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21332/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21332",
"html_url": "https://github.com/huggingface/transformers/pull/21332",
"diff_url": "https://github.com/huggingface/transformers/pull/21332.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21332.patch",
"merged_at": 1675239713000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21331
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21331/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21331/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21331/events
|
https://github.com/huggingface/transformers/pull/21331
| 1,559,010,929
|
PR_kwDOCUB6oc5Iox8x
| 21,331
|
Bump onnx from 1.11.0 to 1.13.0 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Bumps [onnx](https://github.com/onnx/onnx) from 1.11.0 to 1.13.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/onnx/onnx/releases">onnx's releases</a>.</em></p>
<blockquote>
<h2>v1.13.0</h2>
<p>ONNX v1.13.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit <a href="https://onnx.ai/">onnx.ai</a> to learn more about ONNX and associated projects.</p>
<h1>New operators</h1>
<ul>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Col2Im-18">Col2Im</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/3948">#3948</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwisenot-18">BitwiseNot</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4497">#4497</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwiseand-18">BitwiseAnd</a>, <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwiseor-18">BitwiseOr</a> and <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#bitwisexor-18">BitwiseXor</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4496">#4496</a></li>
</ul>
<h1>Operator extensions</h1>
<ul>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#resize-18">Resize</a> - New attributes: <code>antialias</code>, <code>axes</code> and <code>keep_aspect_ratio_policy</code>, allow for both <code>scales</code> and <code>sizes</code> to be provided when one of them is an empty constant <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4126">#4126</a>, <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4388">#4388</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#pad-18">Pad</a> - New attribute <code>axes</code> <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4190">#4190</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#optionalhaselement-18">OptionalHasElement</a> - New input types handling <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4326">#4326</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#optionalhaselement-18">OptionalHasElement</a> and <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#optionalgetelement-18">OptionalGetElement</a> - Accept tensor and sequence types <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4421">#4421</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#scatterelements-18">ScatterElement</a> and <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#scatternd-18">ScatterND</a> - Add <code>max</code> and <code>min</code> as supported reduction attributes <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4411">#4411</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#split-18">Split</a> - Add support for uneven tensor splitting and a new <code>num_outputs</code> attribute <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4481">#4481</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#lppool-18">LpPool</a> - New attributes: <code>ceil_mode</code> and <code>dilations</code> <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4534">#4534</a></li>
</ul>
<h1>Function updates</h1>
<ul>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#centercroppad-18">CenterCropPad</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4190">#4190</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#mish-18">mish</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4350">#4350</a></li>
<li><a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md#groupnormalization-18">GroupNormalization</a> added in <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4621">#4621</a></li>
</ul>
<h1>Reference Python runtime</h1>
<p>Reference Python runtime dependent on only Python and numpy has been added. <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4483">#4483</a></p>
<h1>Python 3.11 support</h1>
<p>ONNX 1.13.0 supports Python 3.11. <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4490">#4490</a></p>
<h1>Apple Silicon support</h1>
<p>Support for M1/M2 ARM processors has been added. <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4642">#4642</a></p>
<h1>More</h1>
<p>ONNX 1.13.0 also comes with numerous:</p>
<ul>
<li>bugfixes</li>
<li>infrastructure improvements</li>
<li>CI improvements</li>
<li>documentation updates</li>
<li>security updates</li>
</ul>
<p>For full details see <a href="https://github.com/onnx/onnx/wiki/Logistics-for-ONNX-Release-1.13.0">Logistics for ONNX Release 1.13.0</a>.</p>
<h1>Deprecation notice</h1>
<ul>
<li><code>TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE</code> has been deprecated <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4270">#4270</a></li>
<li>ONNXIFI: ONNX Interface for Framework Integration has been deprecated <a href="https://github-redirect.dependabot.com/onnx/onnx/pull/4431">#4431</a></li>
</ul>
<h1>Installation</h1>
<p>You can upgrade to the latest release using <code>pip install onnx --upgrade</code> or build from source following the README <a href="https://github.com/onnx/onnx/tree/rel-1.13.0#build-onnx-from-source">instructions</a>.</p>
<h1>Contributors</h1>
<p>Thanks to these individuals for their contributions in this release since last 1.12.0 release: <a href="https://github.com/AnandKri"><code>@AnandKri</code></a>, <a href="https://github.com/cbourjau"><code>@cbourjau</code></a>, <a href="https://github.com/jcwchen"><code>@jcwchen</code></a>, <a href="https://github.com/gramalingam"><code>@gramalingam</code></a>, <a href="https://github.com/garymm"><code>@garymm</code></a>, <a href="https://github.com/GaetanLepage"><code>@GaetanLepage</code></a>, <a href="https://github.com/ilya-lavrenov"><code>@ilya-lavrenov</code></a>, <a href="https://github.com/jnovikov"><code>@jnovikov</code></a>, <a href="https://github.com/JackBoosY"><code>@JackBoosY</code></a>, <a href="https://github.com/jbachurski"><code>@jbachurski</code></a>, <a href="https://github.com/tjich"><code>@tjich</code></a>, <a href="https://github.com/jantonguirao"><code>@jantonguirao</code></a>, <a href="https://github.com/justinchuby"><code>@justinchuby</code></a>, <a href="https://github.com/natke"><code>@natke</code></a>, <a href="https://github.com/philass"><code>@philass</code></a>, <a href="https://github.com/prasanthpul"><code>@prasanthpul</code></a>, <a href="https://github.com/p-wysocki"><code>@p-wysocki</code></a>, <a href="https://github.com/SpaceIm"><code>@SpaceIm</code></a>, <a href="https://github.com/stephenneuendorffer"><code>@stephenneuendorffer</code></a>,<a href="https://github.com/take-cheeze"><code>@take-cheeze</code></a>, <a href="https://github.com/sechkova"><code>@sechkova</code></a>, <a href="https://github.com/thiagocrepaldi"><code>@thiagocrepaldi</code></a>, <a href="https://github.com/xadupre"><code>@xadupre</code></a>, <a href="https://github.com/mszhanyi"><code>@mszhanyi</code></a>, <a href="https://github.com/yuanyao-nv"><code>@yuanyao-nv</code></a>, <a href="https://github.com/andife"><code>@andife</code></a>, <a href="https://github.com/daquexian"><code>@daquexian</code></a>, <a href="https://github.com/kylesayrs"><code>@kylesayrs</code></a>, <a href="https://github.com/liqunfu"><code>@liqunfu</code></a>, <a href="https://github.com/longlee0622"><code>@longlee0622</code></a>, <a href="https://github.com/HSQ79815"><code>@HSQ79815</code></a>, <a href="https://github.com/williamberman"><code>@williamberman</code></a>, <a href="https://github.com/YanBC"><code>@YanBC</code></a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/onnx/onnx/blob/main/docs/Changelog.md">onnx's changelog</a>.</em></p>
<blockquote>
<!-- raw HTML omitted -->
<h2>Operator Changelog</h2>
<p><em>This file is automatically generated from the
<a href="https://github.com/onnx/onnx/blob/main/docs/onnx/defs">def files</a> via <a href="https://github.com/onnx/onnx/blob/main/docs/onnx/defs/gen_doc.py">this script</a>.
Do not modify directly and instead edit operator definitions.</em></p>
<p>For an operator input/output's differentiability, it can be differentiable,
non-differentiable, or undefined. If a variable's differentiability
is not specified, that variable has undefined differentiability.</p>
<h1>ai.onnx (default)</h1>
<h2>Version 1 of the default ONNX operator set</h2>
<h3><!-- raw HTML omitted --><!-- raw HTML omitted --><strong>Abs-1</strong><!-- raw HTML omitted --></h3>
<p>Absolute takes one input data (Tensor<!-- raw HTML omitted -->) and produces one output data
(Tensor<!-- raw HTML omitted -->) where the absolute is, y = abs(x), is applied to
the tensor elementwise.</p>
<h4>Version</h4>
<p>This version of the operator has been available since version 1 of the default ONNX operator set.</p>
<h4>Attributes</h4>
<!-- raw HTML omitted -->
<h4>Inputs</h4>
<!-- raw HTML omitted -->
<h4>Outputs</h4>
<!-- raw HTML omitted -->
<h4>Type Constraints</h4>
<!-- raw HTML omitted -->
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/onnx/onnx/commit/1ba785612a79fe749aa1e478336e534743372639"><code>1ba7856</code></a> Mark final RC (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4696">#4696</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/7adb4214b7f0e8486cf18e97e6951c69038c3375"><code>7adb421</code></a> misc fixes for issues found in ort integration (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4681">#4681</a>) (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4695">#4695</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/a9130150957d9ed3b5957c1e8b24e3cccea6fdcf"><code>a913015</code></a> Mark release as rc1 (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4674">#4674</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/3fd41d249bb8006935aa0031a332dd945e61b7e5"><code>3fd41d2</code></a> Bump version (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4666">#4666</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/bad0697bb9feafe656ecad9ff794426708b527aa"><code>bad0697</code></a> Add LpPool-18 - add <code>ceil_mode</code> and <code>dilations</code> attributes (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4534">#4534</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/7a1fae4dcb0d3cf035b8258723b4495133180391"><code>7a1fae4</code></a> make primary ops function step 2 (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4512">#4512</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/8fb26ede15adcda983ed126b2d6dfba52af4e748"><code>8fb26ed</code></a> Fixed some typos in python.rst (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4668">#4668</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/9955f35a306ed2ea28650b4f42a5a4056cc2d82c"><code>9955f35</code></a> Fix typo in python.rst (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4667">#4667</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/cd6e5db337e5cd008128e10eb2edc68db47d6413"><code>cd6e5db</code></a> Consider GraphInferenceContext in inference functions: InferenceContext (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4632">#4632</a>)</li>
<li><a href="https://github.com/onnx/onnx/commit/466edb7992da7d4eac62e972e6082587c1410a78"><code>466edb7</code></a> Add Python 3.11 support (<a href="https://github-redirect.dependabot.com/onnx/onnx/issues/4490">#4490</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/onnx/onnx/compare/v1.11.0...v1.13.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21331/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21331",
"html_url": "https://github.com/huggingface/transformers/pull/21331",
"diff_url": "https://github.com/huggingface/transformers/pull/21331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21331.patch",
"merged_at": 1674832393000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21330
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21330/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21330/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21330/events
|
https://github.com/huggingface/transformers/issues/21330
| 1,558,953,837
|
I_kwDOCUB6oc5c679t
| 21,330
|
Add XLM-V
|
{
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Can I work on this issue? And can you point me to where should I learn more about this?",
"Some more info:\r\n\r\nWeights can be - according to this tweet [this](https://twitter.com/LiangDavis/status/1618738467315531777) found here:\r\n\r\nhttps://dl.fbaipublicfiles.com/fairseq/xlmv/xlmv.base.tar.gz",
"Hi guys,\r\n\r\nI adopted the RoBERTa conversion script and model conversion was sucessful:\r\n\r\nhttps://gist.github.com/stefan-it/def0e13c872e992aa54dff2768ec5da4\r\n\r\nIt outputs:\r\n\r\n```\r\ntorch.Size([1, 11, 901629]) torch.Size([1, 11, 901629])\r\nmax_absolute_diff = 7.62939453125e-06\r\nDo both models output the same tensors? 🔥\r\nSaving model to /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working\r\nConfiguration saved in /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working/config.json\r\nModel weights saved in /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working/pytorch_model.bin\r\n```",
"@jalajk24 , sorry, I've overlooked your comment.\r\n\r\nHere's an explanation what I did so far:\r\n\r\n* Finding the official checkpoint (which is a bit hard without Twitter, because XLM-V is not yet mentioned in the official `fairseq` repo...)\r\n* Try to convert the checkpoint with the existing code base\r\n* I used the original RoBERTa [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py) and adjust some outdated config parameters (e.g. `roberta.args` is replaced by `roberta.cfg` in newer `fairseq` versions)\r\n* Fixing other changed variables, e.g. `roberta_sent_encoder.layernorm_embedding` must be used instead of the old `roberta_sent_encoder.emb_layer_norm`\r\n* Then conversion runs: when both models (Original `fairseq` model and the converted model in Transformers) output the same tensor for a given input sequence -> model conversion was sucessful.\r\n* If that would not be the case (e.g. we had this when converting XLM-R-XL and XLM-R-XXL models, see [here](https://github.com/huggingface/transformers/pull/13727)) we need to adjust the model architecture (XLM-R-XL used some pre-layer-norm stuff).\r\n\r\nThe next steps would be on the tokenizer part:\r\n\r\n* Load the original checkpoint with `fairseq` and tokenize some input sentence\r\n* Use the `XLM-R` tokenizer with the new XLM-V sentencepiece vocab and tokenize the same input sentence\r\n* Check if both tokenizers output the same tokenized sequence",
"Cool @stefan-it! So, maye we can create a model card and push the model (and tokenizer) to the hub (under the META AI org). WDYT?",
"@mrm8488 Sounds good! I will perform some tokenizer experiments and then I can upload the model -> maybe @patrickvonplaten can invite me to the [Meta AI](https://huggingface.co/facebook) organization on the model hub (for a short time period), when the model is ready to be... tested on downstream tasks :hugs: ",
"Hey @stefan-it, \r\n\r\nFor sure! Invited you :-) ",
"Thanks @patrickvonplaten !\r\n\r\nI wrote a script that compares XLM-V tokenizer and HF tokenizer (which is basically a `XLMRobertaTokenizer` using the provided `sentencepiece.bpe.model` model):\r\n\r\nhttps://gist.github.com/stefan-it/14295d37880bfb6329fe1db9d3e6a14c\r\n\r\nIt uses the WikiANN NER dataset that contains 176 languages, tokenizes each training sentence and compares the output of the original XLM-V tokenizer and the HF one. Some differences can be seen in the GIST mentioned above, e.g.:\r\n\r\n```txt\r\nMismatch for ar sentence:\r\nأبى أيوب الأنصارى .\r\nXLM-V ids: [0, 6, 482745, 6, 529250, 478338, 382485, 6, 5, 2]\r\nHF ids: [0, 6, 482745, 6, 529250, 478338, 382485, 6, 5, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for az sentence:\r\nO , nəinki Çexiyada , eləcə də bütün dünyada antifaşist ədəbiyyatının ən görkəmli nümayəndələrindən biridir .\r\nXLM-V ids: [0, 122, 6, 4, 78808, 2376, 4377, 25427, 6, 4, 17739, 523, 1174, 14374, 214304, 162, 4193, 3386, 1358, 1105, 1221, 89755, 345, 1825, 63822, 19671, 8914, 280, 214304, 499, 162, 381, 6, 5, 2]\r\nHF ids: [0, 122, 6, 4, 78808, 2376, 4377, 25427, 6, 4, 17739, 523, 1174, 14374, 162, 214304, 4193, 3386, 1358, 1105, 1221, 89755, 345, 1825, 63822, 19671, 8914, 280, 214304, 499, 162, 381, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for az sentence:\r\nFilmin bəstəkarı Roberto Rossellininin qardaşı Renzo Rossellinidir .\r\nXLM-V ids: [0, 70066, 93154, 309, 77404, 862785, 1639, 43, 49187, 872558, 862785, 43, 14803, 6, 5, 2]\r\nHF ids: [0, 70066, 93154, 309, 77404, 862785, 43, 1639, 49187, 872558, 862785, 43, 14803, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for be sentence:\r\nнекаторыя аленяводы з верхняй Калымы ўжо качавалі на чукоцкіх землях .\r\nXLM-V ids: [0, 212747, 187222, 187276, 231515, 186902, 245172, 186910, 191873, 187211, 186906, 190574, 202645, 197768, 186882, 190562, 187180, 217232, 212793, 6, 5, 2]\r\nHF ids: [0, 212747, 187222, 187276, 231515, 186902, 245172, 186910, 191873, 187211, 186906, 190574, 217400, 192302, 186882, 190562, 187180, 217232, 212793, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for bn sentence:\r\nআব্রাআম দ্য মোয়াভ্র্\r\nXLM-V ids: [0, 450078, 447452, 391401, 383767, 442939, 388008, 392002, 500283, 388127, 2]\r\nHF ids: [0, 450078, 447452, 391401, 383767, 442939, 388008, 392002, 500283, 388127, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for ckb sentence:\r\nشەڕی ناوخۆییی لیبیا ( ٢٠١١ )\r\nXLM-V ids: [0, 448384, 3, 382407, 424947, 383163, 395213, 390588, 382407, 481417, 18, 430460, 396007, 1057, 2]\r\nHF ids: [0, 448384, 3, 382407, 424947, 383163, 395213, 382407, 390588, 481417, 18, 430460, 396007, 1057, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for el sentence:\r\nτο λιμάνι του Μαρσασλόκκκ ήταν Φοινικική αποικία .\r\nXLM-V ids: [0, 51, 33074, 54, 20175, 4103, 2207, 21516, 180155, 2263, 702, 1764, 179092, 1457, 127312, 1100, 6, 5, 2]\r\nHF ids: [0, 51, 33074, 54, 20175, 4103, 2207, 21516, 2263, 180155, 702, 1764, 179092, 1457, 127312, 1100, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for eu sentence:\r\nÞjóðólfur úr Hvini\r\nXLM-V ids: [0, 576603, 584875, 704, 7755, 272, 110340, 2]\r\nHF ids: [0, 576603, 584875, 704, 7755, 272, 110340, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for fi sentence:\r\nohjaus British Wind Energy Association\r\nXLM-V ids: [0, 18196, 82236, 60938, 48570, 71969, 2]\r\nHF ids: [0, 18196, 82236, 60938, 48570, 71969, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for fr sentence:\r\n***************************** '' Charles de Bourbon-Siciles ''\r\nXLM-V ids: [0, 541, 519880, 736484, 519880, 3426, 17736, 59, 648141, 13, 238, 676633, 11, 3426, 2]\r\nHF ids: [0, 541, 736484, 519880, 519880, 3426, 17736, 59, 648141, 13, 238, 676633, 11, 3426, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for hr sentence:\r\n*KKK Varteks ( Varaždin )\r\nXLM-V ids: [0, 541, 13108, 379, 2056, 11962, 18, 794202, 1057, 2]\r\nHF ids: [0, 541, 379, 13108, 2056, 11962, 18, 794202, 1057, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for ja sentence:\r\n漳 州 訛 り 、 ' ' ' 泉 ' ' ' は 泉 州 訛 り を 表 す ) ] ] \r\nXLM-V ids: [0, 6, 381875, 6, 284214, 6, 371882, 6, 283722, 6, 283381, 536, 536, 536, 6, 287298, 536, 536, 536, 6, 283385, 6, 287298, 6, 284214, 6, 371882, 6, 283722, 6, 283391, 6, 284061, 6, 284248, 1057, 6305, 6305, 2]\r\nHF ids: [0, 6, 381875, 6, 284214, 6, 371882, 6, 283722, 6, 283381, 536, 536, 536, 6, 287298, 536, 536, 536, 6, 283385, 6, 287298, 6, 284214, 6, 371882, 6, 283722, 6, 283391, 6, 284061, 6, 284248, 1057, 6305, 6305, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for km sentence:\r\n' '' ក្រមង៉ុយ '' 'គឺជាកវីម្នាក់ដែលមិនសរសេរនូវកំណាព្យកាព្យឃ្លោងដែលលោកច្រៀងនោះ ឡើយ ។ ស្នាដៃរបស់លោកដែលគង់វង្សមកដល់សព្វថ្ងៃនេះកើតមានឡើងដោយការអញ្ជើញ ភ្នំពេញ ហើយធ្វើការកត់ត្រាទុក ។\r\nXLM-V ids: [0, 536, 3426, 6, 436488, 414054, 470537, 406071, 3426, 536, 417648, 388584, 417615, 398401, 383964, 386188, 484094, 413545, 430365, 392709, 443000, 401931, 443000, 513438, 424986, 383964, 383825, 6, 470313, 392431, 445340, 383824, 6, 527700, 384224, 383825, 383964, 6, 486458, 486640, 6, 454853, 6, 504066, 459752, 423127, 386428, 410408, 385471, 383363, 510944, 394566, 386849, 388469, 383363, 384712, 398013, 438262, 423820, 383824, 2]\r\nHF ids: [0, 536, 3426, 6, 436488, 414054, 470537, 406071, 3426, 536, 417648, 388584, 417615, 398401, 383964, 386188, 484094, 413545, 430365, 392709, 443000, 401931, 443000, 513438, 424986, 383964, 383825, 6, 470313, 392431, 445340, 383824, 6, 527700, 384224, 383825, 383964, 6, 486458, 486640, 6, 454853, 6, 504066, 459752, 423127, 386428, 410408, 385471, 383363, 510944, 394566, 386849, 388469, 383363, 384712, 398013, 438262, 423820, 383824, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for ko sentence:\r\n북쪽으로는 사바 구 , 서쪽으로는 소피아 구 , 남서쪽으로는 알라오트라망고로 구 , 남쪽으로는 아치나나나 구와 접한다 .\r\nXLM-V ids: [0, 460610, 402460, 383267, 384648, 384084, 6, 4, 464357, 402460, 383973, 408125, 384084, 6, 4, 384737, 497040, 402460, 384068, 382873, 383469, 420080, 387243, 382503, 382498, 384084, 6, 4, 445962, 402460, 383309, 383375, 459065, 382738, 384084, 382541, 390528, 383229, 6, 5, 2]\r\nHF ids: [0, 460610, 402460, 383267, 384648, 384084, 6, 4, 464357, 402460, 383973, 408125, 384084, 6, 4, 384737, 497040, 402460, 384068, 382873, 383469, 420080, 387243, 382503, 382498, 384084, 6, 4, 445962, 402460, 383309, 383375, 382738, 459065, 384084, 382541, 390528, 383229, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for lv sentence:\r\nEiropas autoceļš E77\r\nXLM-V ids: [0, 3477, 121549, 619, 181, 6697, 2]\r\nHF ids: [0, 3477, 121549, 619, 181, 6697, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for mk sentence:\r\nПоретко , на пример во делови од Пиринска Македонија и Егејска Македонија некои од горните женски облеки – ‘’’саите’’’ се кроеле од домашно ткаено платно во сина боја .\r\nXLM-V ids: [0, 186970, 192733, 187180, 6, 4, 186882, 188182, 186930, 201221, 186939, 221926, 187217, 187685, 186883, 248608, 211453, 187685, 193651, 186939, 240530, 198728, 186987, 187184, 186991, 39, 14464, 42, 187373, 186961, 11099, 42, 186894, 203637, 197766, 186939, 210461, 6, 189541, 188031, 212555, 186930, 194795, 199817, 6, 5, 2]\r\nHF ids: [0, 186970, 192733, 187180, 6, 4, 186882, 188182, 186930, 201221, 186939, 221926, 187217, 187685, 186883, 248608, 211453, 187685, 193651, 186939, 240530, 198728, 186987, 187184, 186991, 39, 14464, 42, 187373, 186961, 42, 11099, 186894, 203637, 197766, 186939, 210461, 6, 189541, 188031, 212555, 186930, 194795, 199817, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for ml sentence:\r\nഅനു എലിസബത്ത് ജോസ്\r\nXLM-V ids: [0, 397569, 385011, 528343, 388795, 385776, 481383, 2]\r\nHF ids: [0, 397569, 385011, 528343, 388795, 385776, 481383, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for ms sentence:\r\n███ Sidang Kemuncak Asia Timur\r\nXLM-V ids: [0, 6, 369908, 377468, 593458, 3944, 664695, 8451, 551742, 2]\r\nHF ids: [0, 6, 377468, 369908, 593458, 3944, 664695, 8451, 551742, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for no sentence:\r\nDe siste tre semestre var han i Grenoble i Frankrike , der mye av fritiden ble tilbrakt i Les2alpes og LaGrave .\r\nXLM-V ids: [0, 447, 550187, 17752, 611647, 246, 25684, 28, 657552, 28, 557692, 6, 4, 2860, 549299, 15446, 617530, 117029, 664714, 28, 17112, 430, 460, 10083, 6995, 1079, 29815, 383, 6, 5, 2]\r\nHF ids: [0, 447, 550187, 17752, 611647, 246, 25684, 28, 657552, 28, 557692, 6, 4, 2860, 549299, 15446, 617530, 117029, 664714, 28, 17112, 430, 460, 10083, 6995, 1079, 597, 573563, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for or sentence:\r\nଲେଉଟାଣି ଜୋହାନ୍ ଅଗଷ୍ଟସ ଆର୍ଫୱେଡ଼ସନ୍\r\nXLM-V ids: [0, 6, 387665, 391689, 393963, 403921, 393333, 392380, 395060, 388377, 522433, 387310, 6, 476299, 398439, 432754, 392919, 424507, 2]\r\nHF ids: [0, 6, 387665, 391689, 393963, 403921, 393333, 392380, 395060, 388377, 522433, 387310, 6, 476299, 398439, 432754, 392919, 424507, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for sh sentence:\r\nKefej ( kralj Tegeje ) \r\nXLM-V ids: [0, 3944, 12705, 18, 793761, 96767, 382, 1057, 2]\r\nHF ids: [0, 3944, 12705, 18, 793761, 96767, 382, 1057, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for sl sentence:\r\n__________10__________ Eugenio Siena Alfa Romeo\r\nXLM-V ids: [0, 272238, 1741, 666448, 12002, 848378, 836660, 26591, 72466, 2]\r\nHF ids: [0, 272238, 1741, 12002, 666448, 848378, 836660, 26591, 72466, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for sr sentence:\r\nПрерасподела доходка , Економски факултет Београд USJF - Preraspodela dohotka.ppt\r\nXLM-V ids: [0, 188107, 189047, 187172, 192298, 190169, 186948, 6, 4, 228329, 186887, 192995, 190449, 15373, 662660, 20, 1182, 120, 793095, 567795, 656994, 90130, 5, 457258, 2]\r\nHF ids: [0, 188107, 189047, 187172, 192298, 190169, 186948, 6, 4, 228329, 186887, 192995, 190449, 15373, 662660, 20, 1182, 120, 793095, 567795, 656994, 90130, 5, 457258, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for te sentence:\r\nదారిమార్పు ఇండియన్ ఇన్స్టిట్యూట్ ఆఫ్ టెక్నాలజీ మద్రాస్\r\nXLM-V ids: [0, 436137, 464065, 387183, 460474, 400919, 520935, 493353, 384438, 397587, 466836, 385426, 480198, 383019, 2]\r\nHF ids: [0, 436137, 464065, 387183, 460474, 400919, 520935, 493353, 384438, 397587, 466836, 385426, 480198, 383019, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for ur sentence:\r\nجاوید شیخ - جاوید \r\nXLM-V ids: [0, 408290, 389645, 20, 408290, 2]\r\nHF ids: [0, 408290, 389645, 20, 408290, 6, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for uz sentence:\r\nDastlab Oltin Oʻrdattt asosiy siyosiy markazi hisoblangan .\r\nXLM-V ids: [0, 61568, 14, 3181, 586435, 43, 122, 1476, 47569, 211172, 14, 15966, 43523, 22564, 42030, 7050, 6, 5, 2]\r\nHF ids: [0, 61568, 14, 3181, 586435, 43, 122, 1476, 47569, 14, 211172, 15966, 43523, 22564, 42030, 7050, 6, 5, 2]\r\n------------------------------------------------------------------------------------------\r\nMismatch for zh-yue sentence:\r\nR E D I R E C T # 巴 菲 特 \r\nXLM-V ids: [0, 266, 181, 205, 168, 266, 181, 232, 157, 524, 335519, 6, 286994, 6, 283738, 2]\r\nHF ids: [0, 266, 181, 205, 168, 266, 181, 232, 157, 524, 335519, 6, 286994, 6, 283738, 6, 2]\r\n------------------------------------------------------------------------------------------\r\n```\r\n\r\n",
"Can we tolerate these mismatches :thinking: ",
"Model is up now on the model hub:\r\n\r\nhttps://huggingface.co/stefan-it/xlm-v-base\r\n\r\n-> I would like to conduct some experiments on downstream tasks (mainly NER) to measure performance.\r\n\r\nMaybe e.g. @mrm8488 also wants to fine-tune models so that we can try to reproduce some of the paper results :)\r\n\r\nAfter some experiments I can transfer the model to the Meta AI organization. The MLM performance is really good, so the model *should* work:\r\n\r\n```python\r\nIn [3]: unmasker(\"Paris is the <mask> of France.\")\r\nOut[3]: \r\n[{'score': 0.9286897778511047,\r\n 'token': 133852,\r\n 'token_str': 'capital',\r\n 'sequence': 'Paris is the capital of France.'},\r\n {'score': 0.018073994666337967,\r\n 'token': 46562,\r\n 'token_str': 'Capital',\r\n 'sequence': 'Paris is the Capital of France.'},\r\n {'score': 0.013238662853837013,\r\n 'token': 8696,\r\n 'token_str': 'centre',\r\n 'sequence': 'Paris is the centre of France.'},\r\n {'score': 0.010450296103954315,\r\n 'token': 550136,\r\n 'token_str': 'heart',\r\n 'sequence': 'Paris is the heart of France.'},\r\n {'score': 0.005028395913541317,\r\n 'token': 60041,\r\n 'token_str': 'center',\r\n 'sequence': 'Paris is the center of France.'}]\r\n```\r\n\r\n",
"Thank you so much @stefan-it. Ofc, I will try to reproduce some of the reported results.",
"I've replicated the MasakhaNER v1 results from the paper:\r\n\r\nI fine-tuned 5 models (with different seeds) on the English WikiANN (Rahimi split) and evaluated them on MasakhaNER v1. Note: `DATE` entities do not exist in WikiANN, so they were replaced with `O` for zero-shot evaluation. I averaged F1-Score over the 5 models to get the final score. Models were fine-tuned with a sequence length of 512 (paper uses 128, I recognized this after fine-tuning experiments), but other hyper-parameter are the same as used in XLM-V paper: Batch size is 32, learning rate 2e-05 and number of epochs is 10.\r\n\r\nPutting it all together (see Table 11 in XLM-V paper):\r\n\r\n| Model | amh | hau | ibo | kin | lug | luo | pcm | swa | wol | yor | Avg.\r\n| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----- | ---- | ---- | ----\r\n| XLM-R (Paper) | 25.1 | 43.5 | 11.6 | 9.4 | 9.5 | 8.4 | 36.8 | 48.9 | 5.3 | 10.0 | 20.9\r\n| XLM-R (Reproduced) | 27.1 | 42.4 | 14.2 | 12.4 | 14.3 | 10.0 | 40.6 | 50.2 | 6.3 | 11.5 | 22.9\r\n| XLM-V (Paper) | 20.6 | 35.9 | 45.9 | 25.0 | 48.7 | 10.4 | 38.2 | 44.0 | 16.7 | 35.8 | 32.1\r\n| XLM-V (Reproduced) | 25.3 | 45.7 | 55.6 | 33.2 | 56.1 | 16.5 | 40.7 | 50.8 | 26.3 | 47.2 | 39.7\r\n\r\nPerformance diff for WikiANN between XLM-R and XLM-V in the paper is 11.2%. Reproduced experiments gave an performance diff of 16.8%.\r\n\r\nSo I think these experiments show, that the model is working and it achieves great results on MasakhaNER v1!\r\n\r\nI will set-up a repository for all these results and conduct more experiments on WikiANN (second NER downstream tasks that is mentioned in in the paper).\r\n\r\n@patrickvonplaten Do you think the model is then ready to be moved to the Meta AI org? I've also written an initial model card.",
"Here's the comparison on WikiANN zero-shot (see Table10 in XLM-V paper):\r\n\r\n| Model | ro | gu | pa | lt | az | uk | pl | qu | hu | fi | et | tr | kk | zh | my | yo | sw\r\n| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----\r\n| XLM-R (Paper) | 73.5 | 62.9 | 53.6 | 72.7 | 61.0 | 72.4 | 77.5 | 60.4 | 75.8 | 74.4 | 71.2 | 75.4 | 42.2 | 25.3 | 48.9 | 33.6 | 66.3\r\n| XLM-R (Reproduced) | 73.8 | 65.5 | 50.6 | 74.3 | 64.0 | 76.5 | 78.4 | 60.8 | 77.7 | 75.9 | 73.0 | 76.4 | 45.2 | 29.8 | 52.3 | 37.6 | 67.0 \r\n| XLM-V (Paper) | 73.8 | 66.4 | 48.7 | 75.6 | 66.7 | 65.7 | 79.5 | 70.0 | 79.5 | 78.7 | 75.0 | 77.3 | 50.4 | 30.2 | 61.5 | 54.2 | 72.4\r\n| XLM-V (Reproduced) | 77.2 | 65.4 | 53.6 | 74.9 | 66.0 | 69.4 | 79.8 | 66.9 | 79.0 | 77.9 | 76.2 | 76.8 | 48.5 | 28.1 | 58.4 | 62.6 | 71.6 \r\n\r\n| Model | th | ko | ka | ja | ru | bg | es | pt | it | fr | fa | ur | mr | hi | bn | el | de\r\n| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----\r\n| XLM-R (Paper) | 5.2 | 49.4 | 65.4 | 21.0 | 63.1 | 76.1 | 70.2 | 77.0 | 76.9 | 76.5 | 44.6 | 51.4 | 61.5 | 67.2 | 69.0 | 73.8 | 74.4\r\n| XLM-R (Reproduced) | 4.7 | 49.4 | 67.5 | 21.9 | 65.2 | 77.5 | 76.7 | 79.0 | 77.7 | 77.9 | 49.0 | 55.1 | 61.3 | 67.8 | 69.6 | 74.1 | 75.4 \r\n| XLM-V (Paper) | 3.3 | 53.0 | 69.5 | 22.4 | 68.1 | 79.8 | 74.5 | 80.5 | 78.7 | 77.6 | 50.6 | 48.9 | 59.8 | 67.3 | 72.6 | 76.7 | 76.8\r\n| XLM-V (Reproduced) | 2.6 | 51.6 | 71.2 | 20.6 | 67.8 | 79.4 | 76.2 | 79.9 | 79.5 | 77.5 | 51.7 | 51.5 | 61.9 | 69.2 | 73.2 | 75.9 | 77.1\r\n\r\n| Model | en | nl | af | te | ta | ml | eu | tl | ms | jv | id | vi | he | ar | Avg.\r\n| ------------------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ----\r\n| XLM-R (Paper) | 83.0 | 80.0 | 75.8 | 49.2 | 56.3 | 61.9 | 57.2 | 69.8 | 68.3 | 59.4 | 48.6 | 67.7 | 53.2 | 43.8 | 61.3\r\n| XLM-R (Reproduced) | 83.4 | 80.8 | 75.8 | 49.3 | 56.8 | 62.2 | 59.1 | 72.2 | 62.3 | 58.3 | 50.0 | 67.9 | 52.6 | 47.8 | 62.6 \r\n| XLM-V (Paper) | 83.4 | 81.4 | 78.3 | 51.8 | 54.9 | 63.1 | 67.1 | 75.6 | 70.0 | 67.5 | 52.6 | 67.1 | 60.1 | 45.8 | 64.7\r\n| XLM-V (Reproduced) | 84.1 | 81.3 | 78.9 | 50.9 | 55.9 | 63.0 | 65.7 | 75.9 | 70.8 | 64.8 | 53.9 | 69.6 | 61.1 | 47.2 | 65.0\r\n\r\nDiff. between XLM-V and XLM-R in the paper: (64.7 - 61.3) = 3.4%.\r\nDiff. between reproduced XLM-V and XLM-R: (65.0 - 62.6) = 2.4%. \r\n\r\nSame conclusion: the converted/integrated XLM-V works great :hugs: ",
"Great job @stefan-it !!! 🔥",
"Thanks @mrm8488 !\r\n\r\nRepo is btw: up here: https://github.com/stefan-it/xlm-v-experiments :)",
"Thanks a lot for your contribution @stefan-it 🙏 \r\n\r\nJust transferred the checkpoint to the appropriate organization: https://huggingface.co/facebook/xlm-v-base\r\n\r\nHowever, I feel like it could be beneficial to have a separate model_doc for XLM-V (similar to how we did this for T5v1.1 etc.).\r\n\r\nDo you mind opening a PR for that?",
"Thanks! Closing this issue as the model is now available: https://huggingface.co/docs/transformers/main/en/model_doc/xlm-v.",
"Amazing work @stefan-it - thanks a lot! ",
"Amazing @stefan-it . Should I add some ft metric @patrickvonplaten as done for other models? I fine-tuned it on XNLI: https://huggingface.co/mrm8488/xlm-v-base-finetuned-xglue-xnli "
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
### Model description
[XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
Should work as [XLM-RoBERTa](https://twitter.com/LiangDavis/status/1618738467315531777?s=20&t=nObyGbBEqmBZr9rmTEAeVg)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21330/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21330/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21329
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21329/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21329/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21329/events
|
https://github.com/huggingface/transformers/pull/21329
| 1,558,708,368
|
PR_kwDOCUB6oc5Invrg
| 21,329
|
Add VQGAN-CLIP research project
|
{
"login": "ErwannMillon",
"id": 18487334,
"node_id": "MDQ6VXNlcjE4NDg3MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18487334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErwannMillon",
"html_url": "https://github.com/ErwannMillon",
"followers_url": "https://api.github.com/users/ErwannMillon/followers",
"following_url": "https://api.github.com/users/ErwannMillon/following{/other_user}",
"gists_url": "https://api.github.com/users/ErwannMillon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErwannMillon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErwannMillon/subscriptions",
"organizations_url": "https://api.github.com/users/ErwannMillon/orgs",
"repos_url": "https://api.github.com/users/ErwannMillon/repos",
"events_url": "https://api.github.com/users/ErwannMillon/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErwannMillon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi there,\r\nThanks for the feedback, I fixed the style issues and removed the face image. \r\nAlso refactored the code to have more accurate function names, used just the tokenizer, changed the assertions to exceptions, and removed some extraneous code (eg double crop, freeze_module)\r\nHave a great day :)\r\nErwann",
"@ErwannMillon - thanks for the updates. It's looking good 😎 ! \r\n\r\nJust two last things before I think we're ready to merge: \r\n* Removing the `face.jpg` file and instead pointing to a place where it can be downloaded\r\n* Could you also remove the notebook? Like the image, we want to avoid adding large files as much as possible. Happy for you to link to e.g. a colab which shows this demo in the README. "
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Implements VQGAN-CLIP using huggingface CLIP models
Related to #21064
This Research Project allows users to generate or edit images with a single line of code. It wraps the huggingface CLIPProcessor class, allowing images to be processed as torch tensors in order to preserve gradient flow through the transformations.
Features:
- Positive and negative prompts
- Multiple prompts
- Prompt Weights
- Creating GIF animations of the transformations
- Wandb logging
Tagging @amyeroberts for review :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21329/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21329/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21329",
"html_url": "https://github.com/huggingface/transformers/pull/21329",
"diff_url": "https://github.com/huggingface/transformers/pull/21329.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21329.patch",
"merged_at": 1675367136000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21328
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21328/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21328/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21328/events
|
https://github.com/huggingface/transformers/pull/21328
| 1,558,495,525
|
PR_kwDOCUB6oc5InBmn
| 21,328
|
Fix M2M100 positional embedding creation for ONNX
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
# What does this PR do?
This PR changes the reshape step when computing the sinusoidal positional embeddings in M2M100 to make it work with ONNX.
Shape inference is incorrect before:

You can see that ONNX sets the last axis of the shape of `last_hidden_state` to be dynamic, with some auto-generated name, while it should be static.
After the fix:

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21328/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21328",
"html_url": "https://github.com/huggingface/transformers/pull/21328",
"diff_url": "https://github.com/huggingface/transformers/pull/21328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21328.patch",
"merged_at": 1674812599000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21327
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21327/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21327/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21327/events
|
https://github.com/huggingface/transformers/pull/21327
| 1,558,495,166
|
PR_kwDOCUB6oc5InBh0
| 21,327
|
Remove more unused attributes in config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Remove more unused attributes in config classes.
There are more changes needed than I expected previously. I have to adopt the new test (now only in my branch) in a progressive way and make the changes to pass that test in the same time.
I tried to change more places in single PR to avoid too many PRs related to this topic. But there will still be a few PRs in the future 🙏 .
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21327/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21327",
"html_url": "https://github.com/huggingface/transformers/pull/21327",
"diff_url": "https://github.com/huggingface/transformers/pull/21327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21327.patch",
"merged_at": 1675179339000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21326
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21326/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21326/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21326/events
|
https://github.com/huggingface/transformers/issues/21326
| 1,558,463,650
|
I_kwDOCUB6oc5c5ESi
| 21,326
|
Deepspeed with Trainer RecursionError: maximum recursion depth exceeded while calling a Python object
|
{
"login": "bettyballin",
"id": 16786715,
"node_id": "MDQ6VXNlcjE2Nzg2NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/16786715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bettyballin",
"html_url": "https://github.com/bettyballin",
"followers_url": "https://api.github.com/users/bettyballin/followers",
"following_url": "https://api.github.com/users/bettyballin/following{/other_user}",
"gists_url": "https://api.github.com/users/bettyballin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bettyballin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bettyballin/subscriptions",
"organizations_url": "https://api.github.com/users/bettyballin/orgs",
"repos_url": "https://api.github.com/users/bettyballin/repos",
"events_url": "https://api.github.com/users/bettyballin/events{/privacy}",
"received_events_url": "https://api.github.com/users/bettyballin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"In general this belongs to https://github.com/microsoft/DeepSpeed/issues as this is a deepspeed issue. I see you file it here https://github.com/microsoft/DeepSpeedExamples/issues/84#issuecomment-1405311822 but I think this is the wrong place.\r\n\r\nI have run into this problem myself recently - it was triggered by `zero.Init` - and I had some nested `from_pretrained` calls and multiple `zero.Init` calls. Once I recoded to have only a single `zero.Init` call the problem went away. \r\n\r\nSo in your code sample you shouldn't do:\r\n\r\n```\r\nwith deepspeed.zero.Init(dtype=torch.float16):\r\n model = AutoModelForSequenceClassification.from_config(config=config,torch_dtype=torch.float16)\r\n```\r\n\r\nbecause `from_config` already uses `zero.Init` internally! so you end up with nested `zero.Init` and it breaks.\r\n\r\nIt should be just:\r\n\r\n```\r\nmodel = AutoModelForSequenceClassification.from_config(config=config,torch_dtype=torch.float16)\r\n```",
"Thank you @stas00 , this resolved the issue! "
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
System Info:
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Stack trace:
```
File "myPythonScript.py", line 230, in train
trainer.train()
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1527, in train
return inner_training_loop(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1597, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/deepspeed.py", line 344, in deepspeed_init
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/__init__.py", line 125, in initialize
return inner_training_loop(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/trainer.py", line 1597, in _inner_training_loop
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
File "/miniconda3/envs/venv/lib/python3.10/site-packages/transformers/deepspeed.py", line 344, in deepspeed_init
engine = DeepSpeedEngine(args=args,
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 348, in wrapper
deepspeed_engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/__init__.py", line 125, in initialize
engine = DeepSpeedEngine(args=args,
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 348, in wrapper
if not hasattr(module, "_ds_child_entered"):
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if not hasattr(module, "_ds_child_entered"):
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
File "/home/ballin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2022, in __dir__
parameters = list(self._parameters.keys())
File "/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
....
... multiple hundred lines of the same two function calls ....
....
File "/miniconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2028, in __dir__
parameters = list(self._parameters.keys())
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2028, in __dir__
parameters = list(self._parameters.keys())
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 495, in __getattr__
if name in dir(self):
File "/mnt/ssestorage2-data/ballin/miniconda3/envs/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2026, in __dir__
module_attrs = dir(self.__class__)
RecursionError: maximum recursion depth exceeded while calling a Python object
```
The code used in myClassifier.py:
```
MODEL = "microsoft/bloom-deepspeed-inference-fp16"
TOKENIZER = AutoTokenizer.from_pretrained(MODEL)
training_args = TrainingArguments(
do_train=True,
do_eval=True,
fp16=True,
load_best_model_at_end=True,
evaluation_strategy="epoch",
save_strategy="epoch",
deepspeed="ds_config.json",
local_rank=os.environ.get("LOCAL_RANK")
)
config = AutoConfig.from_pretrained(MODEL)
with deepspeed.zero.Init(dtype=torch.float16):
model = AutoModelForSequenceClassification.from_config(config=config,torch_dtype=torch.float16)
model = model.eval()
dist.barrier()
trainer = Trainer(
model=model,
args=training_args,
train_dataset=ds["train"],
eval_dataset=ds["test"],
tokenizer=TOKENIZER
)
trainer.train()
```
I am using the deepspeed configuration file from https://huggingface.co/docs/transformers/main/en/main_classes/deepspeed#zero3-config and call deepspeed with ```deepspeed --num_gpus=4 --master_addr="myIP" --master_port=1234 --hostfile=job/hostfile myPythonScript.py``` on 2 nodes using 4x NVIDIA A100 with 80 GB each.
### Expected behavior
The training should run without an error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21326/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21325
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21325/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21325/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21325/events
|
https://github.com/huggingface/transformers/issues/21325
| 1,558,445,557
|
I_kwDOCUB6oc5c4_31
| 21,325
|
Token batching
|
{
"login": "bpopeters",
"id": 10211311,
"node_id": "MDQ6VXNlcjEwMjExMzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10211311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bpopeters",
"html_url": "https://github.com/bpopeters",
"followers_url": "https://api.github.com/users/bpopeters/followers",
"following_url": "https://api.github.com/users/bpopeters/following{/other_user}",
"gists_url": "https://api.github.com/users/bpopeters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bpopeters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bpopeters/subscriptions",
"organizations_url": "https://api.github.com/users/bpopeters/orgs",
"repos_url": "https://api.github.com/users/bpopeters/repos",
"events_url": "https://api.github.com/users/bpopeters/events{/privacy}",
"received_events_url": "https://api.github.com/users/bpopeters/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there, what you are asking for is not supported. Note that Transformers is primarily a library of models. You can adapt the data preprocessing part of any of our existing examples to suit your needs, but we won't support every feature out of the box as it's not the goal of the library.",
"Hello,\r\n\r\nThank you for your quick reply. I'll admit I'm a bit surprised that this is considered out of scope. It is a models library, yes, but the main ways people interact with models are through training (including finetuning) and inference. In either case, inputs need to be batched. This is a very mainstream technique for doing it, especially for self-attention-based models because of the popularity of very large batches (at least when training from scratch, I'm fairly new to finetuning so perhaps the situation is different).\r\n\r\nThank you again for your help.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Token batching is a necessary feature for some tasks like machine translation as it is a recognized setting in the field. When you want to make sure that your experimental setup is consistent with other frameworks, you must do so."
] | 1,674
| 1,681
| 1,678
|
NONE
| null |
Hello,
Many frameworks support _token batching_, in which batches are constructed not so that they contain the same number of sequences, but rather so that they contain approximately the same number of tokens (so a batch could consist either of a large number of short sequences or a small number of long sequences). One motivation for this is so that memory use is roughly constant from batch to batch, which makes it easier to use a very large batch size without risking an out-of-memory error.
For example, this is the behavior when using `--max-tokens` instead of `--batch-size` in fairseq.
I found a previous issue (https://github.com/huggingface/transformers/issues/14767) where this was asked. At the time, someone claimed that the feature existed and posted a video. However, the examples presented in that video do **not** actually implement this feature. Subsequent comments pointed out that the issue remained unresolved, but they were ignored.
So my question is, does token batching already exist in transformers? If so, how can I make use of it?
Thank you for your help! i wasn't sure if I should have made this a feature request because it's not actually clear to me whether the feature has already been implemented or not.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21325/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21325/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21324
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21324/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21324/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21324/events
|
https://github.com/huggingface/transformers/pull/21324
| 1,558,370,357
|
PR_kwDOCUB6oc5ImmhP
| 21,324
|
[Whisper] another patch
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This fixes some issues I came up with when benchmarking the model + the CI-tests are not passing.
We updated the configs online, which changed a lot of things.
Also TF's forced decoder logit processor was wrong
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21324/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21324",
"html_url": "https://github.com/huggingface/transformers/pull/21324",
"diff_url": "https://github.com/huggingface/transformers/pull/21324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21324.patch",
"merged_at": 1674833717000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21323
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21323/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21323/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21323/events
|
https://github.com/huggingface/transformers/pull/21323
| 1,558,347,592
|
PR_kwDOCUB6oc5ImhuE
| 21,323
|
Generate: better `compute_transition_scores` examples
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
MEMBER
| null |
# What does this PR do?
Adds further notes and details to the examples in `compute_transition_scores`, so it can be used out of the box with encoder-decoder models.
Inspired from the interaction in #21321
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21323/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21323",
"html_url": "https://github.com/huggingface/transformers/pull/21323",
"diff_url": "https://github.com/huggingface/transformers/pull/21323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21323.patch",
"merged_at": 1674749165000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21321
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21321/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21321/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21321/events
|
https://github.com/huggingface/transformers/issues/21321
| 1,558,097,123
|
I_kwDOCUB6oc5c3qzj
| 21,321
|
`compute_transition_scores` becomes erroneous when setting the minimal length of generation
|
{
"login": "Aktsvigun",
"id": 36672861,
"node_id": "MDQ6VXNlcjM2NjcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aktsvigun",
"html_url": "https://github.com/Aktsvigun",
"followers_url": "https://api.github.com/users/Aktsvigun/followers",
"following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}",
"gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions",
"organizations_url": "https://api.github.com/users/Aktsvigun/orgs",
"repos_url": "https://api.github.com/users/Aktsvigun/repos",
"events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aktsvigun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @Aktsvigun 👋 Thank you for raising this issue, it is great to iron out usage difficulties 🤗 \r\n\r\nThere is no bug, you forgot to account for `length_penalty` -- see the other example in `compute_transition_scores`'s docstring. I'm pasting the corrected snippet below.\r\n\r\nTwo further notes:\r\n- `normalize_logits` normalizes the logits, such that `sum(exp(logits)) = 1` at each generated token. Our models do not perform this normalization by default, and it is very helpful to evaluate the generate output.\r\n- To get `outputs.sequences_scores` back, we need to make sure we operate on the scores in the same conditions as in `.generate()` -- with `normalize_logits=False` and applying `length_penalty` as in the example below.\r\n\r\n___________________________\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nimport numpy as np\r\n\r\ncheckpoint = 'facebook/bart-large-cnn'\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)\r\n\r\ntext = \"Liverpool football club was established in 1892. It is one of the largest and the most famous football clubs in the world with six Champions League titles, 19 premier league titles and others. In 2015 Jurgen Klopp became a Liverpool manager and since that time led the team to another Champions league and Premier League trophies.\"\r\ninputs = tokenizer([text], return_tensors=\"pt\")\r\n\r\n# Example 1: Print the scores for each token generated with Greedy Search\r\noutputs = model.generate(\r\n **inputs,\r\n min_new_tokens=10,\r\n max_new_tokens=256,\r\n return_dict_in_generate=True,\r\n output_scores=True\r\n)\r\ntransition_scores = model.compute_transition_scores(\r\n outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False\r\n)\r\ngenerated_tokens = outputs.sequences[:, 1:]\r\nfor tok, score in zip(generated_tokens[0], transition_scores[0]):\r\n print(f\"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}\")\r\n\r\n# output_length -> 1 from forced BOS token, np.sum(transition_scores.numpy() < 0, axis=1) from the other tokens\r\noutput_length = 1 + np.sum(transition_scores.numpy() < 0, axis=1)\r\nlength_penalty = model.generation_config.length_penalty\r\nreconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty)\r\nprint(np.allclose(outputs.sequences_scores, reconstructed_scores))\r\n```",
"I'm closing this issue as it seems to be solved, but feel free to reopen it if you have other related issues :)",
"Hi @gante! \r\nThanks a lot for the quick response!\r\n\r\nSure, my bad omitting the length penalty parameter. \r\n\r\nThanks again for the amazing function!\r\n\r\n"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### System Info
transformers==4.27.0.dev0 (latest from master)
### Who can help?
@gante @pat
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
checkpoint = 'facebook/bart-large-cnn'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
text = "Liverpool football club was established in 1892. It is one of the largest and the most famous football clubs in the world with six Champions League titles, 19 premier league titles and others. In 2015 Jurgen Klopp became a Liverpool manager and since that time led the team to another Champions league and Premier League trophies."
inputs = tokenizer([text], return_tensors="pt")
# Example 1: Print the scores for each token generated with Greedy Search
outputs = model.generate(
**inputs,
min_new_tokens=10,
max_new_tokens=256,
return_dict_in_generate=True,
output_scores=True
)
transition_scores = model.compute_transition_scores(
outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=True
)
generated_tokens = outputs.sequences[:, 1:]
for tok, score in zip(generated_tokens[0], transition_scores[0]):
print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
print(outputs.sequences_scores[0].item(), transition_scores.mean().item())
```
### Expected behavior
Hi. A recently added `compute_transition_scores` method for text generation behaves erroneously when the minimal length of generated text is set. The last row in my code prints the average sequence log score obtained throughout generation and it clearly differs from that obtained via the function (0.02 vs 0.19) - I observe the same behavior for other sequences/models when `min_length` parameter is not default in the `model.generate` method.
P.s. Unsure what the argument `normalize_logits` in the function does, however, setting it to False does not solve the problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21321/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21320
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21320/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21320/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21320/events
|
https://github.com/huggingface/transformers/pull/21320
| 1,558,066,402
|
PR_kwDOCUB6oc5Ilkxs
| 21,320
|
[`Vision-Encoder- Decoder`] Add `vision-encoder-decoder` on `AutoProcessor`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"See https://github.com/huggingface/transformers/pull/21319#issuecomment-1404938821",
"Closing in favor of opening PRs on the Hub as described in the comment! ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21320). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Similarly as https://github.com/huggingface/transformers/pull/21319 & #21299 a doctest was failing because the correct processor was not mapped in the `AutoProcessor` mapping for `vision-encoder-decoder`
This fixes also a doctest that was failing, link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
cc @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21320/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21320",
"html_url": "https://github.com/huggingface/transformers/pull/21320",
"diff_url": "https://github.com/huggingface/transformers/pull/21320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21320.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21319
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21319/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21319/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21319/events
|
https://github.com/huggingface/transformers/pull/21319
| 1,558,053,849
|
PR_kwDOCUB6oc5IliGj
| 21,319
|
[`Speech-Encoder-Decoder`] Add `speech-encoder-decoder` to `AutoProcessor`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @younesbelkada ! Thank you for the PR(s). In my opinion, (generic) composite models like (text/vision/speech) encoder-decoder models are not meant to use `Auto` mappings, as their design is to compose any pair of models (whenever it works).\r\n\r\nThe fix should be updating the config file on the hub for these checkpoints. In this case, by adding `processor_class`, for which I have opened those Hub PR.",
"Makes sense! Thanks for explaining! ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently loading the correct processor for `speech-encoder-decoder` using `AutoProcessor` is broken on `main`.
The issue and the fix is very identical as https://github.com/huggingface/transformers/pull/21299 , as a doctest was also failing. Link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
cc @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21319/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21319",
"html_url": "https://github.com/huggingface/transformers/pull/21319",
"diff_url": "https://github.com/huggingface/transformers/pull/21319.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21319.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21318
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21318/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21318/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21318/events
|
https://github.com/huggingface/transformers/pull/21318
| 1,558,042,191
|
PR_kwDOCUB6oc5Ilfjz
| 21,318
|
[Doctest] Fix `Perceiver` doctest
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a failing doctest for `PerceiverModel`. Link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
With #21225 being merged, the snippet [here](https://github.com/huggingface/transformers/blob/31336dcf3f93dee19cd13c981f16982d612040d2/src/transformers/models/perceiver/modeling_perceiver.py#L796):
```python
...
model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)
# you can then do a forward pass as follows:
tokenizer = PerceiverTokenizer()
```
has been modified by:
```python
...
model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)
# you can then do a forward pass as follows:
tokenizer = AutoTokenizer()
```
As there is no canonical way to automatically load a tokenizer from something which is different than a path, or model id, one should load any default tokenizer by instantiating it using the child class and not using `AutoTokenizer`.
This PR reverts this change and fixes the doctest
cc @ydshieh 💯
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21318/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21318",
"html_url": "https://github.com/huggingface/transformers/pull/21318",
"diff_url": "https://github.com/huggingface/transformers/pull/21318.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21318.patch",
"merged_at": 1674749798000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21317
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21317/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21317/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21317/events
|
https://github.com/huggingface/transformers/issues/21317
| 1,557,983,194
|
I_kwDOCUB6oc5c3O_a
| 21,317
|
trainer get_optimizer_cls_and_kwargs doesn't seem to use optim_args
|
{
"login": "zupatisc",
"id": 61888674,
"node_id": "MDQ6VXNlcjYxODg4Njc0",
"avatar_url": "https://avatars.githubusercontent.com/u/61888674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zupatisc",
"html_url": "https://github.com/zupatisc",
"followers_url": "https://api.github.com/users/zupatisc/followers",
"following_url": "https://api.github.com/users/zupatisc/following{/other_user}",
"gists_url": "https://api.github.com/users/zupatisc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zupatisc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zupatisc/subscriptions",
"organizations_url": "https://api.github.com/users/zupatisc/orgs",
"repos_url": "https://api.github.com/users/zupatisc/repos",
"events_url": "https://api.github.com/users/zupatisc/events{/privacy}",
"received_events_url": "https://api.github.com/users/zupatisc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Yes, this is intended as the regular adam kwargs have their own training argument.",
"Oh okay I see, is there anything speaking against using optim_args to pass kwargs to adafactor?",
"Adafactor in the library is deprecated an not maintained, you should rely on another implementation.",
"Thanks for the info but if that's the case, is there a deprecation warning somewhere for this? Because I don't recall seeing one.\r\nIn any case your info closes this issue, thank you.",
"We haven't officially deprecated it since we are waiting for someome to add support for another integration of it (like for AnyPrecisionAdam and the others)."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
Hello,
I was reading the code for the trainer and I noticed that the optimizer arguments that get passed to the trainer with the TrainingArguments don't seem to be actually applied for any optimizers aside from AnyPrecisionAdamW, is this intended?
I don't think this question really fits into any of the templates so I didn't use them.
@sgugger
https://github.com/huggingface/transformers/blob/4e41b87e3d13af0d1d7d3d27d101e60c33c92100/src/transformers/trainer.py#L1077
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21317/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21316
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21316/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21316/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21316/events
|
https://github.com/huggingface/transformers/pull/21316
| 1,557,834,126
|
PR_kwDOCUB6oc5Ikywn
| 21,316
|
Small QoL for qa.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding a small qol to avoid panic exceptions down at the tokenizer level.
Fixes https://github.com/huggingface/tokenizers/issues/944
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21316/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21316",
"html_url": "https://github.com/huggingface/transformers/pull/21316",
"diff_url": "https://github.com/huggingface/transformers/pull/21316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21316.patch",
"merged_at": 1674741009000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21315
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21315/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21315/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21315/events
|
https://github.com/huggingface/transformers/pull/21315
| 1,557,830,646
|
PR_kwDOCUB6oc5IkyDD
| 21,315
|
check paths in `utils/documentation_tests.txt`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a new check to ensure the paths in `utils/documentation_tests.txt` are all valid, so doctest CI won't fail from the beginning.
related PR: #21314 - ~~We need to wait that PR merged before margining this one.~~ It's merged.
The effect of this PR
<img width="800" alt="Screenshot 2023-01-26 111139" src="https://user-images.githubusercontent.com/2521628/214810581-4367c7d8-94b7-4534-94f6-b51d3582a5cb.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21315/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21315",
"html_url": "https://github.com/huggingface/transformers/pull/21315",
"diff_url": "https://github.com/huggingface/transformers/pull/21315.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21315.patch",
"merged_at": 1674743627000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21314
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21314/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21314/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21314/events
|
https://github.com/huggingface/transformers/pull/21314
| 1,557,795,420
|
PR_kwDOCUB6oc5Ikqiy
| 21,314
|
Fix 2 paths in the doctest list
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
doctest has 0 failures
```
🌞 There were no failures: all 0 tests passed. The suite ran in 0h0m0s.
```
which is caused by
```
ERROR: file or directory not found: src/transformers/models/maskformer/configuration_mask2former.py
collecting ... collected 0 items
```
😭😭😭
This PR makes the failures (if any) visible again.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21314/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21314",
"html_url": "https://github.com/huggingface/transformers/pull/21314",
"diff_url": "https://github.com/huggingface/transformers/pull/21314.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21314.patch",
"merged_at": 1674731228000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21313
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21313/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21313/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21313/events
|
https://github.com/huggingface/transformers/issues/21313
| 1,557,657,757
|
I_kwDOCUB6oc5c1_id
| 21,313
|
post_process_instance_segmentation does not resize outputs?
|
{
"login": "nickponline",
"id": 590151,
"node_id": "MDQ6VXNlcjU5MDE1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/590151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickponline",
"html_url": "https://github.com/nickponline",
"followers_url": "https://api.github.com/users/nickponline/followers",
"following_url": "https://api.github.com/users/nickponline/following{/other_user}",
"gists_url": "https://api.github.com/users/nickponline/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickponline/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickponline/subscriptions",
"organizations_url": "https://api.github.com/users/nickponline/orgs",
"repos_url": "https://api.github.com/users/nickponline/repos",
"events_url": "https://api.github.com/users/nickponline/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickponline/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @alaradirik ",
"@alaradirik @NielsRogge what's weird is `post_process_semantic_segmentation` works as expects but not `post_process_instance_segmentation`\r\n\r\n```\r\nresults = processor.post_process_semantic_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )\r\nresults[0]['segmentation'].cpu().numpy().shape\r\n(1000, 1000)\r\n```\r\n\r\n\r\n```\r\nresults = processor.post_process_instance_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )\r\nresults[0]['segmentation'].cpu().numpy().shape\r\n(128, 128)\r\n```",
"Hi @nickponline!\r\n\r\nAre you using MaskFormer or Mask2Former? We are aware of the issue and it should be fixed with the latest release. Could you try upgrading to transformers 4.26.0?",
"MaskFormer\n\nOn Fri, Jan 27, 2023 at 12:08 AM Alara Dirik ***@***.***>\nwrote:\n\n> Hi @nickponline <https://github.com/nickponline>!\n>\n> Are you using MaskFormer or Mask2Former? We are aware of the issue and it\n> should be fixed with the latest release. Could you try upgrading to\n> transformers 4.26.0?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/21313#issuecomment-1406151217>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAEQCR2MY7H2MV5ED6QIEJTWUN66PANCNFSM6AAAAAAUHDZ5ZI>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"@alaradirik @NielsRogge when I use 4.26.0 the resizing works and I get masks back of the `target_size`, but I do notice that the fidelity of the semantic segmentation results are better than the instance segmentation ones. Is something different in how the masks are upsampled? Example below and both masks are 1000x1000\r\n\r\n instance segmentation (notice the low resolution) \r\n\r\n\r\n\r\n semantic semantic segmentation (right). \r\n\r\n\r\n\r\n\r\n\r\n",
"@alaradirik these results are with Mask2Former ^",
"@nickponline that is a good point. \r\n\r\nTo answer your question, Mask2Former outputs mask logits of shape (96, 96) for efficiency purposes. The `post_process_semantic_segmentation` method directly interpolates the mask logits to the target size, whereas the `post_process_instance_segmentation` method first interpolates the mask logits to the preprocessed image size (384, 384), computes the final score of each binary mask proposal by multiplying the mask proposal score with the class score and resizes the final instance segmentation map (discrete instance id values instead of continuous logit values) . \r\n\r\nThe `post_process_instance_segmentation` method yields the same results as the original Mask2Former post-processing. However, you can smoothen the results by cloning the repo, editing line 990 of the [image_processing_mask2former.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/image_processing_mask2former.py) file such that the mask logits are interpolated to the target size (instead of (384, 384)) and building the library locally.\r\n",
"@alaradirik I also noticed the hard-coded resizing to (384, 384) in `post_process_instance_segmentation`. What if the user chooses to resize the input images to a different input size, e.g., (480, 640)? Shouldn't the post-processing adapt to the actual input size?",
"@Callidior you can pass `target_sizes` to the `post_process_instance_segmentation` method, which is a list containing the desired (height, width) as tuples",
"@NielsRogge As far as I understood, resizing to `target_sizes` happens after the hard-coded resize to 384x384. It does not replace it.\r\nBut what if I resize the input images, for example, to a non-quadratic size of 640x960. The post-processing would first resize the segmentation maps to 384x384 and then to 640x960. This would loose much more spatial precision along one dimension than the other.",
"Yes but as @alaradirik explains, this is to comply to the original implementation, which always first interpolates to (384x384). So as she suggest, if you really want to interpolate directly to the desired size, feel free to fork and edit [this line](https://github.com/huggingface/transformers/blob/2ea1ef909016484bee9d60c05582031464490f77/src/transformers/models/mask2former/image_processing_mask2former.py#L990) ",
"Hi @Callidior, as @NielsRogge pointed out, we follow the original implementation in order to make it easier for users to benchmark official checkpoints or their fine-tuned model against other models. \r\n\r\nIn order to make changes, you can git clone the repo, change the relevant line and install the library locally with `pip install -e \".[dev]\"`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,674
| 1,680
| 1,680
|
NONE
| null |
### System Info
```
Python 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
import transformers
transformers.__version__
'4.25.1'
```
### Who can help?
```
results = processor.post_process_instance_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )
results[0]['segmentation'].cpu().numpy().shape
(128, 128)
```
Shouldn't the output be ```(1000, 1000)```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
results = processor.post_process_instance_segmentation(outputs, target_sizes=[(1000, 1000), (1000, 1000)] )
results[0]['segmentation'].cpu().numpy().shape
(128, 128)
```
Shouldn't the output be `(1000, 1000)`
### Expected behavior
Output masks should be same size as `target_sizes`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21313/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21312
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21312/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21312/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21312/events
|
https://github.com/huggingface/transformers/pull/21312
| 1,557,651,714
|
PR_kwDOCUB6oc5IkMyx
| 21,312
|
[pure bf16 training] w/ `AnyPrecisionAdamW` and Kahan summation
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21312). All of your documentation changes will be reflected on that endpoint."
] | 1,674
| 1,677
| null |
CONTRIBUTOR
| null |
This PR was prompted by [this discussion](https://github.com/pytorch/torchdistx/pull/52#discussion_r1082027732) with @lessw2020.
The PR works, just keeping it as Draft for now as I haven't polished it to be ready for merging.
# How to perform pure bf16 training (not mixed) running with `AnyPrecisionAdamW` also in bf16 w/ Kahan summation
I think it should require x8 bytes per param, instead of x18 for mixed precision training - i.e. 1/2 memory usage for everything but activations memory.
(also included a hack into loading `load_from_disk` to get saved datasets, but it's unrelated to the actual feature - will remove at the end)
To test checkout this branch:
```
git clone https://github.com/huggingface/transformers transformers-bf16
cd transformers-bf16
git checkout full-bf16-train
```
## getting `AnyPrecisionAdamW`
You can try to install the bleed edge [`torchdistx`](https://github.com/pytorch/torchdistx/) but it's very difficult to do. Since the optimizer is just python code, we just hack-install it doing just this:
```
mkdir -p $CONDA_PREFIX/lib/python3.8/site-packages/torchdistx/optimizers
wget https://raw.githubusercontent.com/pytorch/torchdistx/main/src/python/torchdistx/optimizers/anyprecision_optimizer.py \
-O $CONDA_PREFIX/lib/python3.8/site-packages/torchdistx/optimizers/__init__.py
```
you will just need to update your destination path if you're not using CONDA or have a different python version. To be more specific adjust the location of your python's `site-packages` directory.
# Training
If you have an 80GB A100, you can do `opt-1.3b` setup below, otherwise for smaller cards choose one of the smaller setups.
You can of course do this for any model, this PR is model invariant.
And you can do either finetuning or training from scratch
## opt-1.3b / bf16-pure training from scratch
First, prep an initialized opt-1.3 model:
```
cat << EOT > prep-bf16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-1.3b"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-1.3b-bf16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
python prep-bf16.py
```
Train from scratch:
```
rm -rf save_dir; PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=1 --nnode=1 --node_rank=0 \
--master_addr=127.0.0.1 --master_port=9901 \
examples/pytorch/language-modeling/run_clm.py --bf16 \
--half_precision_backend no_amp --seed 42 --model_name_or_path opt-1.3b-bf16 \
--dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --optim \
adamw_anyprecision --optim_args \
'use_kahan_summation=true, momentum_dtype=bfloat16, variance_dtype=bfloat16, compensation_buffer_dtype=bfloat16' \
--per_device_train_batch_size 12 --per_device_eval_batch_size 12 \
--gradient_accumulation_steps 1 --do_train --do_eval --logging_steps 10 \
--save_steps 1000 --eval_steps 100 --weight_decay 0.1 --num_train_epochs 1 \
--adam_beta1 0.9 --adam_beta2 0.95 --learning_rate 0.0002 --lr_scheduler_type \
linear --warmup_steps 500 --report_to tensorboard --output_dir save_dir
```
Let's check that I got the math right for opt-1.3B
Theoretical memory allocation for optim states, weights, grads
```
breakdown: n_params*(optim + grad + weights)
bf16 mixed precision: 1.3*(8 + 2 + 4+2 ) = 1.3*16 = 20.8GB
bf16 pure: 1.3*(4+2 + 2 + 2 ) = 1.3*10 = 13.0GB
-----------------------------------------------------
diff: 7.8GB
```
Real memory allocation: (got by adding `--skip_memory_metrics 0` flag to get memory usage reports)
```
a. bf16 mixed precision:
before_init_mem_gpu = 0MB
init_mem_gpu_alloc_delta = 5019MB
init_mem_gpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 20076MB
train_mem_gpu_peaked_delta = 123MB
-----------------------------------------
total = 25218MB
b. bf16 pure:
before_init_mem_gpu = 0MB
init_mem_gpu_alloc_delta = 5019MB
init_mem_gpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 12548MB
train_mem_gpu_peaked_delta = 124MB
-----------------------------------------
total = 17691MB
diff: 7.53GB
```
So the theoretical and actual numbers check out memory wise.
## opt-125m / bf16-pure training from scratch
If you want to fit into a smaller card, let's do opt-125m
Then prep an empty opt-125m model:
```
cat << EOT > prep-bf16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-125m"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-125m-bf16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
python prep-bf16.py
```
Train from scratch in pure bf16:
```
rm -rf save_dir; PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=1 --nnode=1 --node_rank=0 \
--master_addr=127.0.0.1 --master_port=9901 \
examples/pytorch/language-modeling/run_clm.py --bf16 \
--half_precision_backend no_amp --seed 42 --model_name_or_path opt-125m-bf16 \
--dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --optim \
adamw_anyprecision --optim_args \
'use_kahan_summation=true, momentum_dtype=bfloat16, variance_dtype=bfloat16, compensation_buffer_dtype=bfloat16' \
--per_device_train_batch_size 12 --per_device_eval_batch_size 12 \
--gradient_accumulation_steps 1 --do_train --do_eval --logging_steps 10 \
--save_steps 1000 --eval_steps 100 --weight_decay 0.1 --num_train_epochs 1 \
--adam_beta1 0.9 --adam_beta2 0.95 --learning_rate 0.0002 --lr_scheduler_type \
linear --warmup_steps 500 --report_to tensorboard --output_dir save_dir
```
## opt-125m / fp16-amp training from scratch
Same for mixed precision fp16 (we want bf16 to give us a similar loss curve when everything else is the same):
```
cat << EOT > prep-fp16.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
mname = "facebook/opt-125m"
config = AutoConfig.from_pretrained(mname)
model = AutoModel.from_config(config, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(mname)
path = "opt-125m-fp16"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
python prep-fp16.py
```
```
rm -rf save_dir; PYTHONPATH="src" python -m torch.distributed.run \
--nproc_per_node=1 --nnode=1 --node_rank=0 \
--master_addr=127.0.0.1 --master_port=9901 \
examples/pytorch/language-modeling/run_clm.py --ff16 \
--seed 42 --model_name_or_path opt-125m-fp16 \
--dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 \
--per_device_train_batch_size 12 --per_device_eval_batch_size 12 \
--gradient_accumulation_steps 1 --do_train --do_eval --logging_steps 10 \
--save_steps 1000 --eval_steps 100 --weight_decay 0.1 --num_train_epochs 1 \
--adam_beta1 0.9 --adam_beta2 0.95 --learning_rate 0.0002 --lr_scheduler_type \
linear --warmup_steps 500 --report_to tensorboard --output_dir save_dir
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21312/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21312",
"html_url": "https://github.com/huggingface/transformers/pull/21312",
"diff_url": "https://github.com/huggingface/transformers/pull/21312.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21312.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21311
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21311/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21311/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21311/events
|
https://github.com/huggingface/transformers/issues/21311
| 1,557,415,991
|
I_kwDOCUB6oc5c1Eg3
| 21,311
|
[WHISPER] Add language to whisper output
|
{
"login": "altryne",
"id": 463317,
"node_id": "MDQ6VXNlcjQ2MzMxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/463317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/altryne",
"html_url": "https://github.com/altryne",
"followers_url": "https://api.github.com/users/altryne/followers",
"following_url": "https://api.github.com/users/altryne/following{/other_user}",
"gists_url": "https://api.github.com/users/altryne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/altryne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/altryne/subscriptions",
"organizations_url": "https://api.github.com/users/altryne/orgs",
"repos_url": "https://api.github.com/users/altryne/repos",
"events_url": "https://api.github.com/users/altryne/events{/privacy}",
"received_events_url": "https://api.github.com/users/altryne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"We'll be adding a `tokenizer_kwargs`, to allow the `skip_special_tokens` to be overwritten. This should allow you to do something like \r\n```\r\n>>> out = pipeline(..., tokenizer_kwargs={\"skip_special_tokens\": False}, return_timestamps=True, max_length = 2)\r\n\"<startoftranscript><en>\"\r\n```\r\nThen either you regex or encode with the tokenizer and that should do the trick. cc @Narsil as we talked about this offline\r\n",
"Would that work for you ?",
"I... am not sure? \r\n\r\nI can only come at this as a fairly clueless dev that barely understands tokenization. \r\nIn that case, compared to how whisper is built, the above seems very complex to do. \r\n\r\n@ArthurZucker I think as we chatted, you guys have many limitations in keeping pipelines generic features. \r\n\r\nCould there be an easier way to get the detected language? \r\n\r\nMaybe exposing the `detect_language` feature of whisper via `pipe.model.detect_language(audio_file)` somehow? \r\n\r\n",
"As a workaround I'm loading and running whisper base to just detect language, I would love to be at least able to use the loaded transformers whisper. \r\n\r\nSo far, no luck.\r\n\r\ndetect_language is not exposed .\r\n\r\nand running pipe.model.generate() on the source file gives me : \r\n`{AttributeError}'str' object has no attribute 'shape'` \r\n\r\nWhich I assume is because generate needs a numpy array of the audio? 🤔 \r\n\r\nBut def. complex for the average user",
"For anyone getting here, I found a better workaround. \r\nThanks to @ArthurZucker notebook with examples: \r\nhttps://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=i5sKbZpsY9-J\r\n\r\nIt does still require a whisper dependency, but doesn't load the openai whisper model into memory at all, just uses it's utils and dependencies on ffmpeg-python. \r\n\r\n```python\r\naudio = whisper.load_audio(source_file)\r\nshort_audio_for_lang_detection = whisper.pad_or_trim(audio)\r\ninputs = pipe.feature_extractor(short_audio_for_lang_detection,\r\nreturn_tensors=\"pt\",sampling_rate=16_000).input_features.to(pipe.device)\r\n\r\nlang_token = pipe.model.generate(inputs, max_new_tokens=1)[0, 1]\r\ndetected_language_token = pipe.tokenizer.decode(lang_token)\r\n\r\n\r\ndetected_language = detected_language_token[2:-2]\r\nlanguage_title = LANGUAGES[detected_language]\r\nlog.info(f\"Detected language: {language_title}\")",
"`pipe.feature_extractor(short_audio_for_lang_detection)` should by default give only the first 30seconds, so `hort_audio_for_lang_detection = whisper.pad_or_trim(audio)` is probably useless. \r\n\r\n@Narsil how about we make \r\n```\r\n if isinstance(inputs, str):\r\n if inputs.startswith(\"http://\") or inputs.startswith(\"https://\"):\r\n # We need to actually check for a real protocol, otherwise it's impossible to use a local file\r\n # like http_huggingface_co.png\r\n inputs = requests.get(inputs).content\r\n else:\r\n with open(inputs, \"rb\") as f:\r\n inputs = f.read()\r\n\r\n if isinstance(inputs, bytes):\r\n inputs = ffmpeg_read(inputs, self.feature_extractor.sampling_rate)\r\n``` \r\ninto a simple function that you call in the preprocess? This could remove all whisper dependencies in this example. WDYT? ",
"could be awesome to have `model.detect_language` instead of all the mess above and dependencies on whisper! ",
"> into a simple function that you call in the preprocess? \r\n\r\nSure, I'm not sure I understand how that cleans up the audio trimming, but we can definitely abstract away.",
"> could be awesome to have model.detect_language instead of all the mess above and dependencies on whisper!\r\n\r\nIf you have some good ideas, please suggest them instead of waving them out.\r\n\r\nUnfortunately, we can't just add `detect_language` whereever it may be. Whisper is not the final model for audio, when 3 months down the line and another model which works entirely differently comes into play, and we have specified `detect_language` for whisper, we're going to be in a bad shape to support this new shiny model in a seamless fashion. Making model specific code is trivial, this is what the snippet provided above is for. Making abstractions over many models which work very differently is much harder, and that's what we're trying to do. So that users can switch to new shiny model later, without rewriting their entire code.\r\n\r\n`model` doesn't own and will never own `feature_extractor` which is required for the mel extraction of the audio for instance so `model.detect_language` doesn't work.\r\n\r\nThen `pipeline` works on large audio, by chunking them into some length in seconds, so potentially a single file could have multiple languages being detected in each chunk, so we have to account for that.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This was fixed by #21427 closing it!"
] | 1,674
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
Adding the translated language in whisper output, in addition to currently returned `text` and `chunks`
Whisper outputs language tag and for autodetection is important to have this feature as some use-cases don't know the language of the translation.
### Motivation
One example usage is, for both transcription and translation, detecting if the language is `en`, we don't need to add additional translation.
### Your contribution
Tried, couldn't find where
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21311/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21311/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21310
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21310/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21310/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21310/events
|
https://github.com/huggingface/transformers/pull/21310
| 1,557,396,583
|
PR_kwDOCUB6oc5IjXPY
| 21,310
|
Update Hebrew language code to he per IANA registry
|
{
"login": "altryne",
"id": 463317,
"node_id": "MDQ6VXNlcjQ2MzMxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/463317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/altryne",
"html_url": "https://github.com/altryne",
"followers_url": "https://api.github.com/users/altryne/followers",
"following_url": "https://api.github.com/users/altryne/following{/other_user}",
"gists_url": "https://api.github.com/users/altryne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/altryne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/altryne/subscriptions",
"organizations_url": "https://api.github.com/users/altryne/orgs",
"repos_url": "https://api.github.com/users/altryne/repos",
"events_url": "https://api.github.com/users/altryne/events{/privacy}",
"received_events_url": "https://api.github.com/users/altryne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool! thanks for this, it would be best to also update the models online wdyt @sanchit-gandhi ",
"Funny bug that happens now, I cannot process hebrew. \r\n\r\nI use original whisper to detect language (because #21311 is not fixed yet) and it returns \"he\" \r\n\r\nThen I use that to send to transformers whisper, and it fails as the token is not recognized (it expects iw) 🤭 ",
"Added PRs for models (this.. was not easy 😅 ) \r\nBase PR - https://huggingface.co/openai/whisper-base/discussions/8\r\nTiny PR - https://huggingface.co/openai/whisper-tiny/discussions/6\r\nSmall PR - https://huggingface.co/openai/whisper-small/discussions/13\r\nMedium PR - https://huggingface.co/openai/whisper-medium/discussions/8\r\nLarge PR - https://huggingface.co/openai/whisper-large/discussions/21\r\nLarge V2 PR - https://huggingface.co/openai/whisper-large-v2/discussions/18",
"@sgugger @ArthurZucker thanks for merging this in! \r\nThe model PRs need to be merged in for this to work, correct? \r\nOtherwise there's a mismatch between this repo and loaded models in token names",
"Yes, they'll need to be merged.",
"They are all merged 😉 "
] | 1,674
| 1,675
| 1,674
|
CONTRIBUTOR
| null |
Here's my original PR into whisper that changes the same: https://github.com/openai/whisper/pull/401
# What does this PR do?
Changes the language code for the Hebrew language from `iw` to `he`
Per [IANA registry](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry), `iw` was deprecated as the code for Hebrew in 1989 and the preferred code is `he`
The correct subtag:
```
%%
Type: language
Subtag: he
Description: Hebrew
Added: 2005-10-16
Suppress-Script: Hebr
%%
```
And the deprecation
```
%%
Type: language
Subtag: iw
Description: Hebrew
Added: 2005-10-16
Deprecated: 1989-01-01
Preferred-Value: he
Suppress-Script: Hebr
%%
```
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21310/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21310",
"html_url": "https://github.com/huggingface/transformers/pull/21310",
"diff_url": "https://github.com/huggingface/transformers/pull/21310.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21310.patch",
"merged_at": 1674758080000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21309
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21309/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21309/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21309/events
|
https://github.com/huggingface/transformers/issues/21309
| 1,557,300,564
|
I_kwDOCUB6oc5c0oVU
| 21,309
|
Documentation example error for Train a TensorFlow model with Keras
|
{
"login": "lexipalmer13",
"id": 56708031,
"node_id": "MDQ6VXNlcjU2NzA4MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/56708031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lexipalmer13",
"html_url": "https://github.com/lexipalmer13",
"followers_url": "https://api.github.com/users/lexipalmer13/followers",
"following_url": "https://api.github.com/users/lexipalmer13/following{/other_user}",
"gists_url": "https://api.github.com/users/lexipalmer13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lexipalmer13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lexipalmer13/subscriptions",
"organizations_url": "https://api.github.com/users/lexipalmer13/orgs",
"repos_url": "https://api.github.com/users/lexipalmer13/repos",
"events_url": "https://api.github.com/users/lexipalmer13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lexipalmer13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
"Hi @lexipalmer13 - that code runs fine for me locally, but we did have a lot of compatibility issues with TF 2.11. Version 4.26, which we released two days ago, should fix those issues. Can you try running `pip install --upgrade transformers` to see if it works for you with the newest version?",
"Hi @Rocketknight1 - thanks so much for getting back to me! It continues to throw the same error even with the updated transformers. I put the error below (again it's only the model.fit that's causing me issues so the initial packages/model loading/pre-processing is all running). It seems the main issue is this\r\n`NotFoundError: Graph execution error:`\r\n\r\n\r\n```python\r\n\r\n2023-01-27 13:41:30.016811: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz\r\n2023-01-27 13:41:39.700272: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.\r\n2023-01-27 13:41:43.449299: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x127acaa60\r\n2023-01-27 13:41:43.449332: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x127acaa60\r\n....repeats a bunch of times\r\n2023-01-27 13:41:47.628274: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at xla_ops.cc:418 : NOT_FOUND: could not find registered platform with id: 0x127acaa60\r\n---------------------------------------------------------------------------\r\nNotFoundError Traceback (most recent call last)\r\nCell In[19], line 1\r\n----> 1 model.fit(tokenized_data, labels)\r\n\r\nFile ~/miniconda/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)\r\n 67 filtered_tb = _process_traceback_frames(e.__traceback__)\r\n 68 # To get the full stack trace, call:\r\n 69 # `tf.debugging.disable_traceback_filtering()`\r\n---> 70 raise e.with_traceback(filtered_tb) from None\r\n 71 finally:\r\n 72 del filtered_tb\r\n\r\nFile ~/miniconda/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)\r\n 50 try:\r\n 51 ctx.ensure_initialized()\r\n---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\r\n 53 inputs, attrs, num_outputs)\r\n 54 except core._NotOkStatusException as e:\r\n 55 if name is not None:\r\n\r\nNotFoundError: Graph execution error:\r\n\r\nDetected at node 'StatefulPartitionedCall_199' defined at (most recent call last):\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel_launcher.py\", line 17, in <module>\r\n app.launch_new_instance()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/traitlets/config/application.py\", line 1041, in launch_instance\r\n app.start()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelapp.py\", line 724, in start\r\n self.io_loop.start()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/tornado/platform/asyncio.py\", line 215, in start\r\n self.asyncio_loop.run_forever()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/asyncio/base_events.py\", line 603, in run_forever\r\n self._run_once()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/asyncio/base_events.py\", line 1899, in _run_once\r\n handle._run()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/asyncio/events.py\", line 80, in _run\r\n self._context.run(self._callback, *self._args)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py\", line 512, in dispatch_queue\r\n await self.process_one()\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py\", line 501, in process_one\r\n await dispatch(*args)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py\", line 408, in dispatch_shell\r\n await result\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/kernelbase.py\", line 731, in execute_request\r\n reply_content = await reply_content\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/ipkernel.py\", line 417, in do_execute\r\n res = shell.run_cell(\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/ipykernel/zmqshell.py\", line 540, in run_cell\r\n return super().run_cell(*args, **kwargs)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py\", line 2945, in run_cell\r\n result = self._run_cell(\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py\", line 3000, in _run_cell\r\n return runner(coro)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/async_helpers.py\", line 129, in _pseudo_sync_runner\r\n coro.send(None)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py\", line 3203, in run_cell_async\r\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py\", line 3382, in run_ast_nodes\r\n if await self.run_code(code, result, async_=asy):\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/IPython/core/interactiveshell.py\", line 3442, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"/var/folders/ny/h_bygvy53h16kd57z4lsmsvh0000gn/T/ipykernel_6697/3344439326.py\", line 1, in <module>\r\n model.fit(tokenized_data, labels)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 65, in error_handler\r\n return fn(*args, **kwargs)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py\", line 1650, in fit\r\n tmp_logs = self.train_function(iterator)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py\", line 1249, in train_function\r\n return step_function(self, iterator)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py\", line 1233, in step_function\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/engine/training.py\", line 1222, in run_step\r\n outputs = model.train_step(data)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/transformers/modeling_tf_utils.py\", line 1572, in train_step\r\n self.optimizer.minimize(loss, self.trainable_variables, tape=tape)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 527, in minimize\r\n self.apply_gradients(grads_and_vars)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1140, in apply_gradients\r\n return super().apply_gradients(grads_and_vars, name=name)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 634, in apply_gradients\r\n iteration = self._internal_apply_gradients(grads_and_vars)\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1166, in _internal_apply_gradients\r\n return tf.__internal__.distribute.interim.maybe_merge_call(\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1216, in _distributed_apply_gradients_fn\r\n distribution.extended.update(\r\n File \"/Users/lexipalmer/miniconda/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py\", line 1211, in apply_grad_to_update_var\r\n return self._update_step_xla(grad, var, id(self._var_key(var)))\r\nNode: 'StatefulPartitionedCall_199'\r\ncould not find registered platform with id: 0x127acaa60\r\n\t [[{{node StatefulPartitionedCall_199}}]] [Op:__inference_train_function_34674]\r\n\r\n```",
"Hi @lexipalmer13, thanks for the error traceback! I believe this error isn't related to `transformers` after all - the issue is an incompatibility specifically triggered by using XLA on TF 2.11 with Apple's M1's silicon. You can see a thread detailing the issue [here](https://developer.apple.com/forums/thread/721619).\r\n\r\nThe underlying cause is that TensorFlow moved to a new optimizer format in TF 2.11. This was the cause of the compatibility issues we experienced with `transformers` as well. The new optimizer format automatically compiles the update step with XLA, triggering the bug. As a workaround for now, you can replace the line\r\n```py\r\nfrom tensorflow.keras.optimizers import Adam\r\n```\r\nwith\r\n```py\r\nfrom tensorflow.keras.optimizers.legacy import Adam\r\n```\r\n\r\nHopefully this issue will be resolved in TF soon, and you won't need this workaround anymore!",
"Hi @Rocketknight1 Yes, that fixed it! Thanks so much for your help!"
] | 1,674
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <No>
- Using distributed or parallel set-up in script?: <No>
Note: I'm using tensorflow-metal since I'm running on an M1 chip
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried both versions of the documentation code; the produce the same error.
Version 1:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"]
from transformers import AutoTokenizer
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
labels = np.array(dataset["label"])
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(optimizer=Adam(3e-5))
tokenized_data = dict(tokenized_data)
model.fit(tokenized_data, labels)
```
Version 2:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
dataset = dataset["train"]
from transformers import AutoTokenizer
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_dataset(data):
# Keys of the returned dictionary will be added to the dataset as columns
return tokenizer(data["sentence"])
dataset = dataset.map(tokenize_dataset)
tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
model.compile(optimizer=Adam(3e-5))
model.fit(tf_dataset)
```
### Expected behavior
Every line works until the final one which produces an error. I would expect the model to be fit.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21309/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21308
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21308/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21308/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21308/events
|
https://github.com/huggingface/transformers/pull/21308
| 1,557,143,656
|
PR_kwDOCUB6oc5Iigmr
| 21,308
|
Small fix to ExponentialDecayLengthPenalty docstring
|
{
"login": "njhill",
"id": 16958488,
"node_id": "MDQ6VXNlcjE2OTU4NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/16958488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njhill",
"html_url": "https://github.com/njhill",
"followers_url": "https://api.github.com/users/njhill/followers",
"following_url": "https://api.github.com/users/njhill/following{/other_user}",
"gists_url": "https://api.github.com/users/njhill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njhill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njhill/subscriptions",
"organizations_url": "https://api.github.com/users/njhill/orgs",
"repos_url": "https://api.github.com/users/njhill/repos",
"events_url": "https://api.github.com/users/njhill/events{/privacy}",
"received_events_url": "https://api.github.com/users/njhill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
## What does this PR do?
Currently, the `ExponentialDecayLengthPenalty` doc string incorrectly states that its `exponential_decay_length_penalty` tuple parameter is optional.
Also changed the corresponding type hint to be more specific.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@gante @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21308/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21308",
"html_url": "https://github.com/huggingface/transformers/pull/21308",
"diff_url": "https://github.com/huggingface/transformers/pull/21308.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21308.patch",
"merged_at": 1674675968000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21307
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21307/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21307/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21307/events
|
https://github.com/huggingface/transformers/pull/21307
| 1,557,046,298
|
PR_kwDOCUB6oc5IiLsJ
| 21,307
|
[WHISPER] Small patch
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
A small nit might be causing fails in the CI. This will patch it
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21307/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21307",
"html_url": "https://github.com/huggingface/transformers/pull/21307",
"diff_url": "https://github.com/huggingface/transformers/pull/21307.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21307.patch",
"merged_at": 1674683363000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21306
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21306/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21306/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21306/events
|
https://github.com/huggingface/transformers/issues/21306
| 1,557,023,896
|
I_kwDOCUB6oc5czkyY
| 21,306
|
Conversion Script Tatoeba
|
{
"login": "hthanhbmt",
"id": 7088460,
"node_id": "MDQ6VXNlcjcwODg0NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7088460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hthanhbmt",
"html_url": "https://github.com/hthanhbmt",
"followers_url": "https://api.github.com/users/hthanhbmt/followers",
"following_url": "https://api.github.com/users/hthanhbmt/following{/other_user}",
"gists_url": "https://api.github.com/users/hthanhbmt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hthanhbmt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hthanhbmt/subscriptions",
"organizations_url": "https://api.github.com/users/hthanhbmt/orgs",
"repos_url": "https://api.github.com/users/hthanhbmt/repos",
"events_url": "https://api.github.com/users/hthanhbmt/events{/privacy}",
"received_events_url": "https://api.github.com/users/hthanhbmt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"I'll have a look thanks for reporting",
"Sorry, but it seems that all the formats are messed up w.r.t. the old scripts. These are not maintained and thus we don't plan on fixing this. If you want to however, feel free to contribute!"
] | 1,674
| 1,681
| 1,681
|
NONE
| null |
### System Info
I follow this guide https://github.com/huggingface/transformers/tree/main/scripts/tatoeba to convert models from Tatoeba Translation Challenge to Huggingface but when I ran
`python3 src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models kor-eng --save_dir converted`
it returns like the below, I found that the find_vocab_file function requires a vocab file with ext .yml, but the model from Tatoeba doesn't has.
```
0%| | 0/1 [00:01<?, ?it/s]Traceback (most recent call last):
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 1324, in <module>
resolver.convert_models(args.models[0])
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 90, in convert_models
convert(save_dir / model["_name"], dest_dir / f"opus-mt-{pair_name}")
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 663, in convert
opus_state = OpusState(source_dir)
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 494, in __init__
self.tokenizer = self.load_tokenizer()
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 592, in load_tokenizer
add_special_tokens_to_vocab(self.source_dir, not self.share_encoder_decoder_embeddings)
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 409, in add_special_tokens_to_vocab
vocab = load_yaml(find_vocab_file(model_dir))
File "/tmp/transformers/src/transformers/models/marian/convert_marian_to_pytorch.py", line 385, in find_vocab_file
return list(model_dir.glob("*vocab.yml"))[0]
IndexError: list index out of range
```
@sgugger @stevhliu
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. clone and install https://github.com/huggingface/transformers/tree/main/scripts/tatoeba
2. run python3 src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models kor-eng --save_dir converted
### Expected behavior
it can convert successfully
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21306/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21305
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21305/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21305/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21305/events
|
https://github.com/huggingface/transformers/issues/21305
| 1,556,956,844
|
I_kwDOCUB6oc5czUas
| 21,305
|
[`Blenderbot`] Discrepancy between `BlenderbotTokenizer` and `BlenderbotTokenizerFast`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @younesbelkada \r\n\r\nIt would be nice if you also show `inputs ` and `inputs_fast` (we can definitely check ourselves), or mention if this is the same or not :-)",
"Thanks a lot! I have updated the description with more details ",
"I'll have a look but the fact that the second scripts works well is already good. Will check that all the inputs_ids and generated_ids are the same \r\n",
"@ArthurZucker I wanted to work on this issue, I did little more digging and found out that this issue (difference in input_ids by the tokenizer) happens when <s\\> is not followed by a space. The 2nd script works as the is space between \\<s> and next character. ",
"This could mean that the `clean_up_tokenization_space` or `spaces_between_special_tokens` args don't have the same values in the different models.",
"Okay, Let me dig further in this direction.",
"You can now control the `clean_up_tokenization_space` parameter when initialising a model (merged in #22341) which should have fixed this issue (need to update the param) "
] | 1,674
| 1,685
| 1,685
|
CONTRIBUTOR
| null |
### System Info
`main` branch
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The initial issue is that I didn't get the same generated output between when using `BlenderbotTokenizer` and `BlenderboTokenizerFast`. The initial script to reproduce is the following:
```python
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname, add_prefix_space=False)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname, add_prefix_space=False)
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s> That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
inputs_fast = tokenizer_fast([NEXT_UTTERANCE], return_tensors="pt")
# check that the fast tokenizer is the same as the slow one
assert torch.all(inputs.input_ids == inputs_fast.input_ids)
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname)
def generate(tokenizer):
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
# print("decoded input : ", tokenizer.batch_decode(inputs.input_ids, skip_special_tokens=False)[0])
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=False)[0])
generate(tokenizer)
generate(tokenizer_fast)
>>> That's too bad. Have you tried encouraging them to change their eating habits?
>>> I see. Well, it's good that they're trying to change their eating habits.
```
Interestingly this always pass:
```python
import torch
from transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
tokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname)
NEXT_UTTERANCE = (
"My friends are cool but they eat too many carbs.</s> <s> That's unfortunate. "
"Are they trying to lose weight or are they just trying to be healthier?</s> "
"<s> I'm not sure."
)
UTTERANCE = "My friends are cool but they eat too many carbs."
_ = tokenizer([UTTERANCE], return_tensors="pt")
_ = tokenizer_fast([UTTERANCE], return_tensors="pt")
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
inputs_fast = tokenizer_fast([NEXT_UTTERANCE], return_tensors="pt")
# check that the fast tokenizer is the same as the slow one
assert torch.all(inputs.input_ids == inputs_fast.input_ids)
next_reply_ids = model.generate(**inputs)
next_reply_ids_fast = model.generate(**inputs_fast)
assert torch.all(inputs.input_ids == inputs_fast.input_ids)
print(tokenizer.batch_decode(next_reply_ids))
>>> I see. Well, it's good that they're trying to change their eating habits.
print(tokenizer_fast.batch_decode(next_reply_ids_fast))
>>> I see. Well, it's good that they're trying to change their eating habits.
```
### Expected behavior
Both generations should be the same ideally!
cc @ydshieh @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21305/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21305/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21304
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21304/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21304/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21304/events
|
https://github.com/huggingface/transformers/pull/21304
| 1,556,934,370
|
PR_kwDOCUB6oc5Ihzne
| 21,304
|
Use `model_class.__name__` and compare against `XXX_MAPPING_NAMES`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Is this kind of change OK for you ..? If so, I will apply the same change to other places.",
"The 2 failed tests are known to be flaky (for now) - Merge the PR without re-runing CI."
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Currently, in `tests/test_modeling_common.py`, there are a lot of condition like
```python
if model_class in get_values(MODEL_MAPPING)
```
which implies in order to get this information, all models must be able to be imported, or rely on some of our mechanism to make sure the execution won't fail the program.
In some rare case, like `natten` is installed but having a incompatible version with `torch`, we will get
```bash
E RuntimeError: Failed to import transformers.models.dinat.modeling_dinat because of the following error (look up to see its traceback):
E Failed to import NATTEN's CPP backend. This could be due to an invalid/incomplete install. Please uninstall NATTEN (pip uninstall natten) and re-install with the correct torch build: natten.shi-labs.com.
```
even running a single test with a model (for example, `efficientformer`) that doesn't need `natten`.
This PR changes the condition to
```python
if model_class.__name__ in get_values(MODEL_MAPPING_NAMES)
```
which gives the same results + avoid such confusing failures + potentially reduce some overhead
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21304/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21304",
"html_url": "https://github.com/huggingface/transformers/pull/21304",
"diff_url": "https://github.com/huggingface/transformers/pull/21304.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21304.patch",
"merged_at": 1674729091000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21303
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21303/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21303/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21303/events
|
https://github.com/huggingface/transformers/issues/21303
| 1,556,908,233
|
I_kwDOCUB6oc5czIjJ
| 21,303
|
Image Classification Pipeline returns score= 1.0
|
{
"login": "guillaumeguy",
"id": 5290678,
"node_id": "MDQ6VXNlcjUyOTA2Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5290678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaumeguy",
"html_url": "https://github.com/guillaumeguy",
"followers_url": "https://api.github.com/users/guillaumeguy/followers",
"following_url": "https://api.github.com/users/guillaumeguy/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaumeguy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaumeguy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaumeguy/subscriptions",
"organizations_url": "https://api.github.com/users/guillaumeguy/orgs",
"repos_url": "https://api.github.com/users/guillaumeguy/repos",
"events_url": "https://api.github.com/users/guillaumeguy/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaumeguy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There is no pipeline available for regression tasks, you need to use the model directly and takes its outputs.",
"Thanks @sgugger! Super fast answer! \r\n\r\nAs I found the pipelines to be very helpful, I'm sharing my solution below for folks that want to still use them. \r\n\r\nOne can just rewrite the function in the `postprocess` [function](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/image_classification.py#L116):\r\n\r\n```\r\ndef postprocess(self, model_outputs, top_k=5):\r\n if top_k > self.model.config.num_labels:\r\n top_k = self.model.config.num_labels\r\n\r\n if self.framework == \"pt\":\r\n pred = model_outputs.logits\r\n else:\r\n raise ValueError(f\"Unsupported framework: {self.framework}\")\r\n\r\n scores = pred.tolist()\r\n return scores\r\n```\r\n\r\nYou can then instantiate the pipeline of the overwritten class: \r\n`pipe = ImageClassificationPipeline(model=model,feature_extractor=extractor,device='cuda:0')`\r\n\r\nAnd run your inference:\r\n```\r\ndef data():\r\n for path in paths:\r\n yield PILImage.open(path)\r\n\r\n\r\nfrom tqdm import tqdm\r\nscores = []\r\nfor out in tqdm(pipe(data())):\r\n scores.append(out)\r\n```",
"Yes that's why the pipeline is called classification, rather than regression. We would need an `ImageRegressionPipeline` for this use case ;)",
"Closing this issue as it seems resolved."
] | 1,674
| 1,675
| 1,675
|
NONE
| null |
### System Info
Vision Transformers Documentation mentions that they do support regression when num_labels == 1. However, it seems incompatible with Pipeline.
In this code, the logits are normalized into scores. However, when num_labels = 1, it effectively turns the score to `1`.
https://github.com/huggingface/transformers/blob/63b204eadd9829985ba13e7e4d51f905adfc2d5e/src/transformers/pipelines/image_classification.py#L116
### Who can help?
@amyeroberts @nielsr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Train a ViT on a regression (num_labels = 1)
2. Use pipeline

### Expected behavior
The model should returns predictions
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21303/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21302
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21302/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21302/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21302/events
|
https://github.com/huggingface/transformers/pull/21302
| 1,556,905,509
|
PR_kwDOCUB6oc5IhtQ4
| 21,302
|
Documentation code sample fixes
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,676
| 1,674
|
CONTRIBUTOR
| null |
Several code examples in the docs will fail if used as is. In many cases, it's a missing dependency, other times, this is due to naming inconsistency, one may be related to a change in API, and a few are due to the execution order (i.e., things used before defined in a tutorial).
This maintenance PR fixes these issues so that the code samples in the docs work as expected and do not cause unnecessary frustration.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21302/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21302",
"html_url": "https://github.com/huggingface/transformers/pull/21302",
"diff_url": "https://github.com/huggingface/transformers/pull/21302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21302.patch",
"merged_at": 1674664420000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21301
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21301/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21301/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21301/events
|
https://github.com/huggingface/transformers/pull/21301
| 1,556,865,815
|
PR_kwDOCUB6oc5Ihkqe
| 21,301
|
Fix TF `generate` (probably)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@ArthurZucker The PR #20944 failed the test\r\n```bash\r\ntests/models/encoder_decoder/test_modeling_tf_encoder_decoder.py::TFBertEncoderDecoderModelTest::test_bert2bert_summarization\r\n```\r\nwhile one commit before `7cb596fa` works well - assuming the changes in this PR is applied.\r\n\r\nWith #20944, the outputs from the above is somehow gibberish.\r\n\r\n**We can wait this PR being merged** , then could you take a look of this issue 🙏 ?\r\n~~(Or if you want to look it earlier - you just have to pull this branch)~~ Better to wait, as I am not sure if there are more recent commits affect this test.\r\n\r\n\r\nHere is the traceback\r\n```bash\r\nE AssertionError: Lists differ: ['sa sa sa university sa sa sigma sa sa th[501 chars] sa'] != [\"sae was founded in 1856, five years befo[236 chars]hs.\"]\r\nE \r\nE First differing element 0:\r\nE 'sa sa sa university sa sa sigma sa sa th[500 chars]a sa'\r\nE \"sae was founded in 1856, five years befo[235 chars]ths.\"\r\nE \r\nE Diff is 897 characters long. Set self.maxDiff to None to see it.\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Basically this is because there hast to be a `past` argument passed down instead of `pat_key_values`. I'll open another PR to fix these, but #21296 should be the fix ",
"> Basically this is because there hast to be a `past` argument passed down instead of `pat_key_values`. I'll open another PR to fix these, but #21296 should be the fix\r\n\r\nThank you. We are going to have 0 test failures soon!",
"Instead of adding this extra if to handle `max_length=None`, I'd like to keep disallowing `max_length=None` -- enabling it may allow users to enter uncharted territory when current length > model's maximum input length 😅 \r\n\r\nThe fix should be to remove `max_length=None` in the test -- the right value will be fetched from the config, like in the PT test.",
"@gante Thanks! Updated the PR :-)",
"@gante I merged this PR as it is. However, we could potentially improve the code in `generate` to validate (more) the arguments to avoid such failures. Will let you to decide as you know much more :-)"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
We have CI failures for `TFBertEncoderDecoderModelTest.test_bert2bert_summarization` and `TFGPT2EncoderDecoderModelTest.test_bert2gpt2_summarization`.
The error message is
```bash
> if generation_config.min_length is not None and generation_config.min_length > generation_config.max_length:
E TypeError: '>' not supported between instances of 'int' and 'NoneType'
```
These 2 tests pass `max_length=None` to `generate`:
```python
output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist()
```
and this line (in `generate`)
https://github.com/huggingface/transformers/blob/63b204eadd9829985ba13e7e4d51f905adfc2d5e/src/transformers/generation/tf_utils.py#L613
change `generation_config.max_length` from `20` (the default value) to `None`, and finally we get error at
https://github.com/huggingface/transformers/blob/63b204eadd9829985ba13e7e4d51f905adfc2d5e/src/transformers/generation/tf_utils.py#L719
This PR check if `generation_config.max_length is not None` before doing comparison - the 2 tests pass with this change.
But we need @gante to see if this is the right fix.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21301/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21301",
"html_url": "https://github.com/huggingface/transformers/pull/21301",
"diff_url": "https://github.com/huggingface/transformers/pull/21301.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21301.patch",
"merged_at": 1674748602000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21300
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21300/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21300/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21300/events
|
https://github.com/huggingface/transformers/issues/21300
| 1,556,831,172
|
I_kwDOCUB6oc5cy1vE
| 21,300
|
Adding NLLB-200 - MoE - 54.5B for no language left behind
|
{
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"WDYT @ArthurZucker @younesbelkada given your work on MoEs?",
"Sure, we can add this to the to dos, @PierreColombo could you add the link to the open sourced checkpoints? ",
"Hi Thanks for your positive answer.\r\n\r\nCode is here: https://github.com/facebookresearch/fairseq/tree/nllb\r\n\r\nCheckpoints are here : https://tinyurl.com/nllb200moe54bmodel\r\n\r\nThanks !\r\n",
"Hi all,\r\nThis would be greatly appreciated!\r\nThanks",
"also cc @sheonhan re. NNLB",
"+1, would love to see it!",
"+1 here.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"unstale?",
"We went for the fairseq implementation :'(",
"Friendly ping @ArthurZucker ",
"Yes! @sheonhan mentioned wanting to take this, otherwise will gladly sprint !",
"Since I'm working on the Image Completion Transformer at the moment, I might be blocking the folks who want to use it asap, so you should go ahead! @ArthurZucker "
] | 1,674
| 1,679
| 1,679
|
NONE
| null |
### System Info
Hello @LysandreJik,
Thanks a lot for your work on no language left behind.
Is there any plan to add the 54.4B Model?
Kindest regards
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Improvement
### Expected behavior
Improvement
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21300/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21300/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21299
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21299/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21299/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21299/events
|
https://github.com/huggingface/transformers/pull/21299
| 1,556,684,892
|
PR_kwDOCUB6oc5Ig97x
| 21,299
|
[Hubert] Fix Hubert processing auto
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks!"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently on the `main` branch, the scripts provided on the docstring of `Hubert` fails:
```
from transformers import AutoProcessor, HubertModel
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1
hidden_states = model(input_values).last_hidden_statehidden_states = model(input_values).last_hidden_state
```
With the PR #21225 all custom `xxxProcessor` have been removed in favor of `AutoProcessor` in docstrings. Since `Hubert` was not included inside the processor automapping the script above lead into a bug, since if the model is not present in the auto mapping dictionary, the script will try to load a tokenizer: https://github.com/huggingface/transformers/blob/255257f3ea0862cbb92ea9fa1113cbee1898aadd/src/transformers/models/auto/processing_auto.py#L275 / Hence, `Wav2vec2CTCTokenizer` was loaded instead of `Wav2vec2Processor` that is supposed to be the target object to be loaded.
This PR fixes this by adding `hubert` inside the automapping class for `AutoProcessor`
This PR also fixes 2 failing doctests for `HubertModel`, link to failing job: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21299/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21299",
"html_url": "https://github.com/huggingface/transformers/pull/21299",
"diff_url": "https://github.com/huggingface/transformers/pull/21299.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21299.patch",
"merged_at": 1674660992000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21298
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21298/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21298/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21298/events
|
https://github.com/huggingface/transformers/pull/21298
| 1,556,581,086
|
PR_kwDOCUB6oc5Ignlm
| 21,298
|
[Whisper] Add SpecAugment
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks very nice now that everything's living in the modelling file! Great that you've leveraged very closely from Wav2Vec2 as well. \r\n\r\nI see the problem that you're currently working around with the padded audio inputs! Here your passing an attention mask from the feature extractor to the model which tells the model where the audio inputs have been padded to 30s. When we mask using SpecAug, we don't want to mask any of these padded features, only the real audio inputs.\r\n\r\nWith regards to whether we should use an attention mask for SpecAug, it's hard to know because we don't have a reference implementation. However, my feeling is that we **should** use an attention mask and only pad the real audio inputs, not any of the padded zeros. It makes little sense to pass an audio of length 10s to the model and then mask the spectrogram from 20-25s (which would just be silence...)\r\n\r\nWDYT here @bofenghuang @ArthurZucker? Pass an attention mask and only compute SpecAug on the real audio inputs? Or ditch the attention mask and compute SpecAug uniformly across the 30s input (whether that input be audio or padded silence)?",
"Hi @sanchit-gandhi, thanks for the explantation! And I'm agree with you, here I tried to mask only real values using `attention_mask`",
"@bofenghuang is it ready for review? \r\n",
"@ArthurZucker yes please ! One validated, we could add this option to run_speech_recognition_seq2seq.py",
"Yes! Let's go for numpy! Especially given that current users would have a breaking change if they do not have librosa. ",
"@sanchit-gandhi @ArthurZucker thanks for the review ! Just add the test !",
"Can you make sure to fix the conflicts, and rebase on main to use the latest linter? (you need to do another `pip install -e \".[dev]\"`",
"@ArthurZucker thanks for the tips ! Think it's done",
"Will review again!",
"Done ! Thanks to all the reviews and the discussions @ArthurZucker @sanchit-gandhi @sgugger !",
"Thanks a lot for your contributions! And congrats on the PR 😉 ",
"@bofenghuang thanks for PR\r\n\r\nlooking forward to this (is it already available)\r\n> Update training script [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py), adding attention_mask to prepare_dataset\r\n\r\nany tips / hint ,how to apply it during training also very helpful\r\n",
"Hey @acul3! You just need to set these config parameters according to your spec aug requirements: \r\n\r\nhttps://github.com/huggingface/transformers/blob/8c40ba73d8091ebe0bdc8da5b634bf7951d18f99/src/transformers/models/whisper/configuration_whisper.py#L139-L167\r\n\r\nThe rest will be taken care for you in the training script!\r\n\r\nThe easiest way of doing this is probably by first downloading the config to your local device and setting the SpecAug params:\r\n```python\r\nfrom transformers import WhisperConfig\r\n\r\nconfig = WhisperConfig.from_pretrained(\"openai/whisper-tiny\") # update to the checkpoint you want to fine-tune from\r\n\r\nconfig.apply_spec_augment = True\r\n... # set all the other spec aug params as required\r\n\r\nconfig.save_pretrained(\"/some/local/path/to/save\")\r\n```\r\n\r\nThen in the training script, either add the argument to your bash script:\r\n```\r\n--config_name=\"/some/local/path/to/save\" \\\r\n```\r\n\r\nOr load the config from the place you saved it if you're using a notebook:\r\n```python\r\nconfig = WhisperConfig.from_pretrained(\"/some/local/path/to/save\")\r\n```",
"@acul3 please see this PR https://github.com/huggingface/transformers/pull/21942 :)"
] | 1,674
| 1,678
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi @sanchit-gandhi @ArthurZucker,
Thanks for pointing out the flaw in the other PR (https://github.com/huggingface/transformers/pull/21063)! Here I will add [SpecAugment](https://arxiv.org/abs/1904.08779) to [modeling_whisper.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py)
Several things have been done or to be done:
- [x] Return `attention_mask` by `WhisperFeatureExtractor`, which will be used to guide the mask function along the time axis
- [x] Rescale `attention_mask` from the sample level (48000) to the feature level (3000) by `hop_length` (160). It is done inside `WhisperFeatureExtractor` since the `hop_length` is defined there. But I'm not sure if returned `attention_mask` has other utilities
- [x] Copy `_compute_mask_indices` of wav2vec2, utility function to generate masks
- [x] Add `_mask_input_features` to mask `input_features`, referring to `_mask_hidden_states ` in wav2vec2
- [x] Add related parameters to the model config
- [x] Add test
- [ ] Update training script [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py), adding `attention_mask` to `prepare_dataset`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21298/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/21298/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21298",
"html_url": "https://github.com/huggingface/transformers/pull/21298",
"diff_url": "https://github.com/huggingface/transformers/pull/21298.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21298.patch",
"merged_at": 1677233272000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21297
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21297/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21297/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21297/events
|
https://github.com/huggingface/transformers/pull/21297
| 1,556,574,278
|
PR_kwDOCUB6oc5IgmH2
| 21,297
|
[Doctest] Fix `Blenderbot` doctest
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually, as discussed with @ydshieh offline, there seems to be a discrepency between `BlenderbotTokenizer` & `BlenderbotTokenizerFast`. \r\nThe PR #21225 changed the docstring to use `AutoTokenizer` instead of `BlenderbotTokenizer`. This lead to loading `BlenderbotTokenizerFast`. You can reproduce the discrepency with the script below:\r\n```python\r\nfrom transformers import BlenderbotTokenizer, BlenderbotTokenizerFast, BlenderbotForConditionalGeneration, AutoTokenizer\r\n\r\nmname = \"facebook/blenderbot-400M-distill\"\r\nmodel = BlenderbotForConditionalGeneration.from_pretrained(mname)\r\n\r\ntokenizer = BlenderbotTokenizer.from_pretrained(mname)\r\ntokenizer_fast = BlenderbotTokenizerFast.from_pretrained(mname)\r\n\r\ndef generate(tokenizer):\r\n UTTERANCE = \"My friends are cool but they eat too many carbs.\"\r\n\r\n inputs = tokenizer([UTTERANCE], return_tensors=\"pt\")\r\n\r\n NEXT_UTTERANCE = (\r\n \"My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. \"\r\n \"Are they trying to lose weight or are they just trying to be healthier?</s> \"\r\n \"<s> I'm not sure.\"\r\n )\r\n inputs = tokenizer([NEXT_UTTERANCE], return_tensors=\"pt\")\r\n next_reply_ids = model.generate(**inputs)\r\n print(\"Bot: \", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])\r\n\r\ngenerate(tokenizer)\r\n>>> Bot: That's too bad. Have you tried encouraging them to change their eating habits? \r\ngenerate(tokenizer_fast)\r\n>>> Bot: I see. Well, it's good that they're trying to change their eating habits.\r\n``` \r\nI am not sure if this is a known bug or intended. Maybe the changes I proposed is not the correct fix here",
"Thanks for digging deeper @younesbelkada! Could you check what's the difference between the fast and slow tokenizer from the checkpoint used in this doc example? And compare the difference of `inputs = tokenizer([UTTERANCE], return_tensors=\"pt\") between these 2 tokenizers.\r\n`\r\n\r\nAnother similar issue (but not related to this one)\r\nhttps://github.com/huggingface/transformers/pull/21254",
"I think it's good as we want to default to fast tokenizers (which is the reason we switched to AutoTokenizer) so the fix is the right one in my opinion.",
"I agree - but just thinking if we should find out what's going wrong and potentially fix the inconsistency between these 2 tokenizers (or something in our codebase).\r\n\r\nThe fix is good for me, and you can merge @younesbelkada !",
"Thanks everyone!\r\nI will open an issue to describe the bug \r\nEDIT: https://github.com/huggingface/transformers/issues/21305"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the doctest `transformers.models.blenderbot.modeling_blenderbot.BlenderbotForConditionalGeneration.forward` . Link to failing job is here: https://github.com/huggingface/transformers/actions/runs/4002271138/jobs/6869333719
Updating the prediction with the correct result seems to be the correct fix. I am usure whether this was tested before so I cannot compare for now.
One thing that I suspect is that we get different results across different PT versions, but not sure
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21297/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21297",
"html_url": "https://github.com/huggingface/transformers/pull/21297",
"diff_url": "https://github.com/huggingface/transformers/pull/21297.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21297.patch",
"merged_at": 1674664109000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21296
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21296/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21296/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21296/events
|
https://github.com/huggingface/transformers/pull/21296
| 1,556,566,641
|
PR_kwDOCUB6oc5IgkgD
| 21,296
|
[CI-Daily] replace `past` in prepare inputs for generation
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
This will fix the failing test. It is a little nit, that escaped during #20944
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21296/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21296",
"html_url": "https://github.com/huggingface/transformers/pull/21296",
"diff_url": "https://github.com/huggingface/transformers/pull/21296.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21296.patch",
"merged_at": 1674667559000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21295
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21295/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21295/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21295/events
|
https://github.com/huggingface/transformers/pull/21295
| 1,556,380,406
|
PR_kwDOCUB6oc5If73q
| 21,295
|
Update `OneFormerModelIntegrationTest` expected values
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
The test failures are likely a hardware/environment difference between the contributor and our CI runners.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21295/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21295",
"html_url": "https://github.com/huggingface/transformers/pull/21295",
"diff_url": "https://github.com/huggingface/transformers/pull/21295.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21295.patch",
"merged_at": 1674664023000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21294
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21294/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21294/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21294/events
|
https://github.com/huggingface/transformers/pull/21294
| 1,556,331,725
|
PR_kwDOCUB6oc5IfxOc
| 21,294
|
Fix `EfficientFormer`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Fix `EfficientFormer`:
- correct checkpoints
- fix an device issue regarding `EfficientFormerSelfAttention.ab`, see [failed job run page](https://github.com/huggingface/transformers/actions/runs/3992425421/jobs/6848316487)
The error:
```bash
(line 136) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21294/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21294",
"html_url": "https://github.com/huggingface/transformers/pull/21294",
"diff_url": "https://github.com/huggingface/transformers/pull/21294.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21294.patch",
"merged_at": 1674659356000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21293
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21293/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21293/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21293/events
|
https://github.com/huggingface/transformers/issues/21293
| 1,556,319,468
|
I_kwDOCUB6oc5cw4zs
| 21,293
|
from transformers import T5Model -> No module named 'torch._C'
|
{
"login": "ndvbd",
"id": 845175,
"node_id": "MDQ6VXNlcjg0NTE3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/845175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ndvbd",
"html_url": "https://github.com/ndvbd",
"followers_url": "https://api.github.com/users/ndvbd/followers",
"following_url": "https://api.github.com/users/ndvbd/following{/other_user}",
"gists_url": "https://api.github.com/users/ndvbd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ndvbd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ndvbd/subscriptions",
"organizations_url": "https://api.github.com/users/ndvbd/orgs",
"repos_url": "https://api.github.com/users/ndvbd/repos",
"events_url": "https://api.github.com/users/ndvbd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ndvbd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Solved it by:\r\n\r\n```\r\npip3 uninstall torch\r\npip3 install torch\r\n\r\n```\r\nVery weird."
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
Prompt says to use "transformers-cli env", but it's not clear where is the documentation for installing transformers-cli on Ubuntu...
python version: 3.10.6
system: ubuntu 20 (no gpu, laptop)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import T5Model
### Expected behavior
Should give no errors, but for me it gives:
```
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1101, in __getattr__
value = getattr(module, name)
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1100, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py", line 1112, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):
No module named 'torch._C'
Process finished with exit code 1
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21293/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21292
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21292/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21292/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21292/events
|
https://github.com/huggingface/transformers/pull/21292
| 1,556,303,812
|
PR_kwDOCUB6oc5IfrQb
| 21,292
|
Moving to cleaner tokenizer version or `oneformer`.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Enables `oneformer` models on `image-segmentation` pipeline.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21292/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21292",
"html_url": "https://github.com/huggingface/transformers/pull/21292",
"diff_url": "https://github.com/huggingface/transformers/pull/21292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21292.patch",
"merged_at": 1674657970000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21291
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21291/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21291/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21291/events
|
https://github.com/huggingface/transformers/pull/21291
| 1,555,912,805
|
PR_kwDOCUB6oc5IeZ4u
| 21,291
|
add GPTSAN model (reopen)
|
{
"login": "tanreinama",
"id": 51933889,
"node_id": "MDQ6VXNlcjUxOTMzODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/51933889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanreinama",
"html_url": "https://github.com/tanreinama",
"followers_url": "https://api.github.com/users/tanreinama/followers",
"following_url": "https://api.github.com/users/tanreinama/following{/other_user}",
"gists_url": "https://api.github.com/users/tanreinama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanreinama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanreinama/subscriptions",
"organizations_url": "https://api.github.com/users/tanreinama/orgs",
"repos_url": "https://api.github.com/users/tanreinama/repos",
"events_url": "https://api.github.com/users/tanreinama/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanreinama/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"oh... I still get an error that I don't understand. Do you know what is wrong? I pulled and merged from the latest main.",
"I will sync and pull main again.",
"Do you want a review? ",
"@ArthurZucker \r\nyes. this is ok.",
"@ArthurZucker can you review it or will you be late?",
"Reviewing now 😉 ",
"Still on the way: I have a few questions.",
"Feel free to ask! ",
"thanks.\r\n\r\nI was separated GPTSANJapaneseModel and GPTSANJapaneseForConditionalGeneration.\r\nRegarding the return value of GPTSANJapaneseForConditionalGeneration, using Seq2SeqMoEOutput like switch_transformers does not work.\r\nWell, this is not the encode_decode model.\r\n\r\n```\r\nreturn Seq2SeqMoEOutput(\r\n loss=loss,\r\n logits=lm_logits,\r\n encoder_z_loss=z_loss,\r\n encoder_aux_loss=aux_loss,\r\n past_key_values=outputs.past_key_values,\r\n encoder_last_hidden_state=outputs.last_hidden_state,\r\n encoder_hidden_states=outputs.hidden_states,\r\n encoder_attentions=outputs.attentions,\r\n encoder_router_logits=outputs.router_probs,\r\n )\r\n```\r\n↑ is said to be \"there is no attentions in the output\" in the unit test.\r\n\r\nUsing CausalLMOutputWithPast works.\r\n```\r\nreturn CausalLMOutputWithPast(\r\n loss=loss,\r\n logits=lm_logits,\r\n past_key_values=outputs.past_key_values,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n )\r\n```\r\nBut CausalLMOutputWithPast doesn't have z_loss or other switch transformer outputs.\r\nI can't seem to find a good fit one in modeling_outputs.py.\r\nIs it ok without switch transformer outputs?\r\n",
"ready to review.",
"Due to the time difference, the continuation will be tomorrow",
"Absolutely no problem! 😉 ",
"can review it.",
"@tanreinama the code looks much more cleaner now 🔥 \r\nLet's see the next review of @ArthurZucker but we wanted to thank you on your great efforts!\r\nI really like this model and would like to communicate about it on Twitter, can you share with us your social media handle? Thanks!",
"Oh... I don't do SNS. I don't have a Twitter or Instagram account (yeah, I'm a weirdo)\r\nI have only facebook. https://www.facebook.com/toshiyuki.sakamoto.75/",
"I found a few typo in comment. so I fixed it.",
"Reviewing again now",
"Ok, it's reviewable.",
"@ArthurZucker @sgugger\r\nI fixed the point in the comment. It's ready if checks are passed.",
"Congratulations! 🚀 This was a big model addition and the codebase is very clean now! \r\nWill try to share this new model on tweeter and see if we can reach our Japanese community! ",
"good timing",
"@ArthurZucker @sgugger\r\nok. I fixed it.",
"Congrats again on this work! and thanks for being a valuable contributor! 😉 🚀 ",
"Wow! I'm very happy! And thanks to the HuggingFace team. \r\nI couldn't have done it without your amazing and persistent support. It was my first experience committing to such a large repository, so I learned a lot. \r\nAnd I'm so excited. It's already night in Japan, but I might not be able to sleep😘"
] | 1,674
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# Model description
**Before PR was automatically closed as a result of sync and pull, so it will be reopened.**
GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and works with both Test Generation and Masked Language Model.
To add this model to the transformer, I did the following:
Porting GPTSAN to PyTorch. Model conversion. Creating model cards in HuggingFace Hub. Porting generation code.
The model card has already been uploaded. (https://huggingface.co/Tanrei/GPTSAN-japanese/)
Tokenizer uses GPT-NeoX-Japanese, and only new vocabulary files are uploaded to the model card. Minor differences are absorbed within the generation algorithm in the model's source code.
GPTSAN repository is:
https://github.com/tanreinama/GPTSAN
Discussion of HuggingFace integration is:
https://github.com/tanreinama/GPTSAN/issues/2
Thanks to: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21291/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21291/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21291",
"html_url": "https://github.com/huggingface/transformers/pull/21291",
"diff_url": "https://github.com/huggingface/transformers/pull/21291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21291.patch",
"merged_at": 1676888728000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21290
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21290/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21290/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21290/events
|
https://github.com/huggingface/transformers/pull/21290
| 1,555,713,297
|
PR_kwDOCUB6oc5Idul_
| 21,290
|
[`bnb`] Fine-tuning HF 8-bit models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As the training support has just been added in `bitsandbytes==0.37.0` I proposed new changes, I also added new tests\r\nThis PR is now ready for review"
] | 1,674
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR attempts to add the official support of fine-tuning 8-bit models using `transformers`, `bitsandbytes` and adaptors (such as LoRA), supported by `peft`.
With this PR, it will be possible to fine-tune large models with no cost, for e.g. it will be possible to fine-tune `opt-6.7b` in a single Google Colab instance. This would also enable fine-tuning Whisper and large flan-t5 in 8bit.
In order to perform this fine-tuning, a user needs to load the model with the flag `enable_memory_efficient_backward=True`, freeze the parameters of the model and use `peft` to inject adaptators inside the model.
The PR comes also with `Trainer` integration of this feature, that is supported at least in a single GPU setup.
Here is a script (based on an old notebook from @justheuristic) a user can try to run with this PR:
```
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"facebook/opt-6.7b",
load_in_8bit=True,
device_map='auto',
torch_dtype=torch.float16,
enable_memory_efficient_backward=True
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
for param in model.parameters():
param.requires_grad = False # freeze the model - train adapters later
if param.ndim == 1:
# cast the small parameters (e.g. layernorm) to fp32 for stability
param.data = param.data.to(torch.float32)
model.gradient_checkpointing_enable() # reduce number of stored activations
model.model.decoder.project_in = lambda x: x.requires_grad_(True)
class CastOutputToFloat(nn.Sequential):
def forward(self, x): return super().forward(x).to(torch.float32)
model.lm_head = CastOutputToFloat(model.lm_head)
class LoRALayer(nn.Module):
"""Wraps a linear layer with LoRA-like adapter"""
def __init__(self, module: nn.Module, rank: int):
super().__init__()
self.module = module
self.adapter = nn.Sequential(nn.Linear(module.in_features, rank, bias=False),
nn.Linear(rank, module.out_features, bias=False))
small_std = (2. / (5 * min(module.in_features, module.out_features))) ** 0.5
nn.init.normal_(self.adapter[0].weight, std=small_std)
nn.init.zeros_(self.adapter[1].weight)
self.adapter.to(module.weight.device)
def forward(self, input, *args, **kwargs):
return self.module(input, *args, **kwargs) + self.adapter(input)
for name, module in model.named_modules():
if 'OPTAttention' in repr(type(module)):
module.q_proj = LoRALayer(module.q_proj, rank=16)
module.k_proj = LoRALayer(module.k_proj, rank=16)
module.v_proj = LoRALayer(module.v_proj, rank=16)
assert sum(isinstance(module, LoRALayer) for module in model.modules()) == 96
import transformers
from datasets import load_dataset
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)
trainer = transformers.Trainer(
model=model, train_dataset=data['train'],
args=transformers.TrainingArguments(
per_device_train_batch_size=4, gradient_accumulation_steps=4,
warmup_steps=250, max_steps=1000, learning_rate=2e-4, fp16=True,
logging_steps=1, output_dir='outputs'),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
```
## TODOs:
- [x] clear notebook
- [x] how to share adapters weights using `peft`
- [x] tests
cc @pacman100 @TimDettmers @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21290/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21290",
"html_url": "https://github.com/huggingface/transformers/pull/21290",
"diff_url": "https://github.com/huggingface/transformers/pull/21290.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21290.patch",
"merged_at": 1675352363000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21289
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21289/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21289/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21289/events
|
https://github.com/huggingface/transformers/issues/21289
| 1,555,636,135
|
I_kwDOCUB6oc5cuR-n
| 21,289
|
convert fast tokenizers to slow
|
{
"login": "ahmedlone127",
"id": 66001253,
"node_id": "MDQ6VXNlcjY2MDAxMjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/66001253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedlone127",
"html_url": "https://github.com/ahmedlone127",
"followers_url": "https://api.github.com/users/ahmedlone127/followers",
"following_url": "https://api.github.com/users/ahmedlone127/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedlone127/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedlone127/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedlone127/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedlone127/orgs",
"repos_url": "https://api.github.com/users/ahmedlone127/repos",
"events_url": "https://api.github.com/users/ahmedlone127/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedlone127/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I don't think it's possible to get the sentencepiece model from the `tokenizer.json` file but maybe @Narsil knows a way.",
"hey @Narsil can you please give some insight on this?",
"You could try and create inverse scripts for the conversion you found. But it's not going to be trivial.\n\n\nYou need to create the protobuf sentencepiece expects.\n\nNot sure I can provide much more guidance.\n\nWhy do you want slow tokenizers if I may ask? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"hey @Narsil Thanks for the reply but I found a fix for my issue :) ",
"Awesome. Do you mind explaining a little more or giving links for potential readers that would want to do the same? ",
"For Sure!\r\n\r\nI noticed that you guys have code for converting a spm model ( A slow tokenizer ) to a tokenizer.json (fast tokenizer). I also noticed for some models you guys did not upload the SPM model even though it was an SPM based tokenizer. To get the SPM model from the tokenizer.json that was uploaded I had to figure out how to manually create an SPM model that had identical information as what's stored in the tokenizer.json\r\n\r\nFor example I had to copy the vocabulary , precompiled_charsmap , and other special tokens and manully edit a blank SPM file ( it already had the correct architecture and some dummy data that I removed while editing). Once all the information was copied over to the SPM file it was working as expected.\r\n\r\nhere is a notebook demonstrating the process \r\n\r\nhttps://colab.research.google.com/drive/1kfC_iEuU0upVQ5Y3rnnl5VSngSPuiSQI?usp=sharing\r\n",
"@ahmedlone127 @Narsil \r\nHey guys, so ive been training my tokenizers using spm. But however i am stuck as i am unable to figure out how to convert my sentencpiece.model to huggingface tokenizer (perferably fast tokenizer). \r\n\r\ncould you guys please link me all the resources on how could i do this ? ",
"Everything you need is here: https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py\r\n\r\nThere is no simple tutorial, there are many configurations in `tokenizers` that could achieve what you want, with various tradeoffs.\r\nWhat I recommend is running a diverse set of utf-8 + running all special tokens combinations that might be useful in your test suite to verify IDs do match.\r\n\r\n"
] | 1,674
| 1,701
| 1,677
|
NONE
| null |
### Feature request
Recently noticed that the models being uploaded now are only their fast versions and the sentencepeice model (that's included in the slow version) is missing. I need the sentence peice model of some tokenizers for a personal project and wanted to know what's the best way to go about that. After I looked through the current code on the repository I saw that there were a lot of methods for handling Conversion from Slow to Fast tokenization so I think it should be possible the other way around too. After a bit of research the only quick and dirty way I could think of was creating a utility script for converting the json files of the fast tokenizer to a the spe model format for a slow tokenizer because I think the information in both is the same so the mechanics should be similar too.
### Motivation
I looked through the tokenizers and saw that most of them getting uploaded don't have slow tokenizers.
### Your contribution
If there is any way I can help I would love to know , just need some guaidence on how to implement this!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21289/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21288
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21288/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21288/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21288/events
|
https://github.com/huggingface/transformers/pull/21288
| 1,555,518,339
|
PR_kwDOCUB6oc5IdEGz
| 21,288
|
Fix `TrainingArguments.label_names` docs to reflect the correct default value behaviour
|
{
"login": "fredtcaroli",
"id": 7407656,
"node_id": "MDQ6VXNlcjc0MDc2NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7407656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredtcaroli",
"html_url": "https://github.com/fredtcaroli",
"followers_url": "https://api.github.com/users/fredtcaroli/followers",
"following_url": "https://api.github.com/users/fredtcaroli/following{/other_user}",
"gists_url": "https://api.github.com/users/fredtcaroli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredtcaroli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredtcaroli/subscriptions",
"organizations_url": "https://api.github.com/users/fredtcaroli/orgs",
"repos_url": "https://api.github.com/users/fredtcaroli/repos",
"events_url": "https://api.github.com/users/fredtcaroli/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredtcaroli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This was never flagged as a breaking change (indeed I only found this because it broke one of my scripts). I wonder if I should add \"🚨 🚨 🚨\" to the PR name to indicate a breaking change",
"The breaking change is in the PR that changed the default to `label_names` a while ago, not in this one :-) "
] | 1,674
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Fixes #21287
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21288/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21288",
"html_url": "https://github.com/huggingface/transformers/pull/21288",
"diff_url": "https://github.com/huggingface/transformers/pull/21288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21288.patch",
"merged_at": 1674589704000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21287
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21287/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21287/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21287/events
|
https://github.com/huggingface/transformers/issues/21287
| 1,555,470,283
|
I_kwDOCUB6oc5ctpfL
| 21,287
|
[docs] TrainingArguments default label_names is not what is described in the documentation
|
{
"login": "fredsensibill",
"id": 77297340,
"node_id": "MDQ6VXNlcjc3Mjk3MzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/77297340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredsensibill",
"html_url": "https://github.com/fredsensibill",
"followers_url": "https://api.github.com/users/fredsensibill/followers",
"following_url": "https://api.github.com/users/fredsensibill/following{/other_user}",
"gists_url": "https://api.github.com/users/fredsensibill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredsensibill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredsensibill/subscriptions",
"organizations_url": "https://api.github.com/users/fredsensibill/orgs",
"repos_url": "https://api.github.com/users/fredsensibill/repos",
"events_url": "https://api.github.com/users/fredsensibill/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredsensibill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed. Do you want to open a PR to fix the documentation?"
] | 1,674
| 1,674
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: macOS-12.6.1-arm64-arm-64bit
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger, @stevhliu and @MKhalusova
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Create a model with a `forward` that has more than one label. For example:
```
def forward(
self,
input_ids,
bbox,
attention_mask,
token_type_ids,
labels,
reference_labels
)
```
2. Create a trainer for your model with `trainer = Trainer(model, ...)`. Make sure to not set `label_names` and let it default.
3. Check `trainer.label_names` and see that it returns `["labels", "reference_labels"]`
### Expected behavior
[The documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.label_names) states that:
> Will eventually default to ["labels"] except if the model used is one of the XxxForQuestionAnswering in which case it will default to ["start_positions", "end_positions"].
[This PR](https://github.com/huggingface/transformers/pull/16526) changed the behaviour that the documentation describes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21287/timeline
|
completed
| null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.