Dataset schema (column: type, observed range):

- url: string, length 62–66
- repository_url: string, 1 class
- labels_url: string, length 76–80
- comments_url: string, length 71–75
- events_url: string, length 69–73
- html_url: string, length 50–56
- id: int64, 377M–2.15B
- node_id: string, length 18–32
- number: int64, 1–29.2k
- title: string, length 1–487
- user: dict
- labels: list
- state: string, 2 classes
- locked: bool, 2 classes
- assignee: dict
- assignees: list
- comments: list
- created_at: int64, 1.54k–1.71k
- updated_at: int64, 1.54k–1.71k
- closed_at: int64, 1.54k–1.71k (nullable ⌀)
- author_association: string, 4 classes
- active_lock_reason: string, 2 classes
- body: string, length 0–234k (nullable ⌀)
- reactions: dict
- timeline_url: string, length 71–75
- state_reason: string, 3 classes
- draft: bool, 2 classes
- pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/20685
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20685/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20685/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20685/events
|
https://github.com/huggingface/transformers/issues/20685
| 1,484,942,265
|
I_kwDOCUB6oc5Ygmu5
| 20,685
|
Training ConformerCTC suitable for online inference
|
{
"login": "pfeatherstone",
"id": 45853521,
"node_id": "MDQ6VXNlcjQ1ODUzNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/45853521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pfeatherstone",
"html_url": "https://github.com/pfeatherstone",
"followers_url": "https://api.github.com/users/pfeatherstone/followers",
"following_url": "https://api.github.com/users/pfeatherstone/following{/other_user}",
"gists_url": "https://api.github.com/users/pfeatherstone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pfeatherstone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pfeatherstone/subscriptions",
"organizations_url": "https://api.github.com/users/pfeatherstone/orgs",
"repos_url": "https://api.github.com/users/pfeatherstone/repos",
"events_url": "https://api.github.com/users/pfeatherstone/events{/privacy}",
"received_events_url": "https://api.github.com/users/pfeatherstone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co) for such questions.",
"Ok sorry",
"@sgugger I added a post [here](https://discuss.huggingface.co/t/conformerctc-for-streaming/27480). i would greatly appreciate your input."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### Feature request
Provide a mechanism (or docs, if this is already possible) for training a Conformer model with a CTC loss function such that, when inferring live on blocked, buffered data, you get the same output as if passing the whole data in one go. Also, discuss whether this is resilient to sample offsets.
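The equivalence being asked for - blocked, buffered inference matching one-shot inference - can be illustrated with a toy causal feature extractor (pure Python, hypothetical names; a real Conformer additionally needs the same treatment for its attention context and convolution padding):

```python
def causal_avg(x, window=3):
    """Causal moving average: out[t] depends only on x[t-window+1 : t+1]."""
    out = []
    for t in range(len(x)):
        seg = x[max(0, t - window + 1):t + 1]
        out.append(sum(seg) / len(seg))
    return out

def blocked_avg(x, block=4, window=3):
    """Process x block by block, carrying over window-1 samples of left context."""
    out, ctx = [], []
    for start in range(0, len(x), block):
        chunk = ctx + x[start:start + block]
        feats = causal_avg(chunk, window)
        out.extend(feats[len(ctx):])   # drop outputs recomputed for the context part
        ctx = chunk[-(window - 1):]    # keep left context for the next block
    return out

x = [float(i) for i in range(10)]
assert blocked_avg(x) == causal_avg(x)   # blocked == one-shot, sample for sample
```

The key design point is that each block is processed together with the carried-over left context, whose outputs are then discarded; a streaming Conformer needs the analogous cache for its convolution and attention receptive fields.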
### Motivation
I would like to use a Conformer model trained with CTC loss live, on buffered data coming off a sensor.
### Your contribution
Maybe, if I had guidance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20685/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20684
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20684/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20684/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20684/events
|
https://github.com/huggingface/transformers/pull/20684
| 1,484,911,895
|
PR_kwDOCUB6oc5EzhUL
| 20,684
|
Enable bf16 option for XLA devices
|
{
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I don't see how the gradient scaling path is disabled, could you share more information on that? A scaler is still defined at line 591.\r\n\r\nLine 568 disables the code section that enables grad scaler when XLA device is detected (is_torch_tpu_available()).",
"This PR accidentally disabled gradient scaling when using FP16 on XLA devices. ",
"Indeed. Do you want to make a PR with a fix @Lokiiiiii ?"
] | 1,670
| 1,679
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
XLA devices like TPU and NeuronCore support BF16 natively. This PR enables the `--bf16` option to work for XLA devices.
Since BF16 doesn't require gradient scaling, the gradient-scaling path is disabled for XLA devices.
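The gating described above can be sketched as a small decision helper (the helper name and flags are invented for illustration; the actual Trainer logic is more involved, and the follow-up comments note that skipping the scaler for FP16 on XLA was an unintended side effect of this change):

```python
def use_grad_scaler(fp16: bool, bf16: bool, is_xla: bool) -> bool:
    """Hypothetical sketch: should mixed-precision training use a loss/grad scaler?"""
    if bf16:
        return False   # bf16 has an fp32-like exponent range: no scaling needed
    if is_xla:
        return False   # per this PR, the scaling path is skipped on XLA devices
    return fp16        # classic fp16 AMP on GPU still uses a GradScaler

assert use_grad_scaler(fp16=True, bf16=False, is_xla=False)
assert not use_grad_scaler(fp16=False, bf16=True, is_xla=True)
```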
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20684/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20684",
"html_url": "https://github.com/huggingface/transformers/pull/20684",
"diff_url": "https://github.com/huggingface/transformers/pull/20684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20684.patch",
"merged_at": 1670520880000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20683
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20683/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20683/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20683/events
|
https://github.com/huggingface/transformers/pull/20683
| 1,484,900,952
|
PR_kwDOCUB6oc5Eze4Q
| 20,683
|
Add `keep_in_fp32_modules` support
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"What about adding hooks on each converted module, that will take care of converting the input / output to the correct `dtype` ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"As suggested in #20287 / model loaded in bfloat16 should keep their weights in `bfloat16` and not cast them in `fp32`. This is addressed in e3498da",
"Thanks @younesbelkada and @sgugger !! Tested this locally; can confirm this works with patch 1&2 from https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429\r\n\r\nThe only problem I encountered is that in: \r\n\r\nhttps://github.com/huggingface/transformers/blob/0f75387d633619b8c0cf6955b07dac54d2e17473/src/transformers/modeling_utils.py#L2326\r\n\r\nYou get an error as `keep_in_fp32_modules` is an unexpected keyword to the underlying model class (locally i just added it quickly to test). Do you want to add this in so people can use it in their model class to determine where to apply patches like 1&2? Or alternatively don't pass it on and then people can just query the dtype.",
"Thanks so much @larsmennen for confirming that the tests pass! We should be close merging this 💪 \r\nI think that your failing test should be fixed with my latest commit ( cb89c42 ) but I am not sure, could you try again with the latest commit? 🙏 ",
"hmm that doesn't fix it. I think you just need to pop the argument from model_kwargs, otherwise it gets passed to the underlying model (i'm assuming you don't want that? but cmiiw)\r\n\r\nI.e. after\r\n\r\nhttps://github.com/huggingface/transformers/blob/7d47df2e52c6b55d8f19e5b2bd8b5e472a4f0a82/src/transformers/modeling_utils.py#L1981\r\n\r\nif you add\r\n\r\n```py\r\nkeep_in_fp32_modules = kwargs.pop(\"keep_in_fp32_modules\", None)\r\n```\r\n\r\nI tested w/ that modification on top of [7d47df2](https://github.com/huggingface/transformers/pull/20683/commits/7d47df2e52c6b55d8f19e5b2bd8b5e472a4f0a82) and that works! Thanks for the quick action @younesbelkada ! 🙏 ",
"@larsmennen how are you loading your model ? The description above is slightly misleading as initially the plan was to add a kwarg when loading the model as follows:\r\n```\r\nfrom transformers import T5ForConditionalGeneration\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-small\", torch_dtype=torch.float16, keep_in_fp32_modules=[\"wo\"])\r\n```\r\nbut now this is not needed, you should just load your model like:\r\n```\r\nfrom transformers import T5ForConditionalGeneration\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-small\", device_map=\"auto\", load_in_8bit=True])\r\n```",
"@younesbelkada ah i see! I was passing the kwarg yes, so that explains.",
"@larsmennen this PR will be merged as soon as all the tests will be green ! \r\nWould you mind opening a PR addressing your suggestions (patch 1 & 2 from the discussion at #20287 )? ",
"All slow tests from T5 (and BLOOM just in case we didn't break anything else) pass 🟢 \r\nMerging once the CI tests are green",
"I tried this one, latest version of transformers (27.4), cuda 10.2 and I get this error:\r\n\r\n`model1a_CPU = T5ForConditionalGeneration.from_pretrained(model_path, low_cpu_mem_usage=True,torch_dtype=torch.float16, keep_in_fp32_modules=[\"wo\"]).to(\"cuda\") \r\nTypeError: __init__() got an unexpected keyword argument 'keep_in_fp32_modules'`\r\n\r\n\r\n",
"`keep_in_fp32_modules` is not an argument you can pass to `from_pretrained`, this is all done internally.",
"You need to do somthing like:\r\n\r\n```python\r\nfrom transformers import T5ForConditionalGeneration\r\n\r\nT5ForConditionalGeneration._keep_in_fp32_modules = [\"wo\"]\r\n\r\n# your code here\r\n```",
"Except this is already done for T5 ;-)"
] | 1,670
| 1,680
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR partially addresses #20287 - although half-precision and int8 conversion work extremely well for most models, for some architectures (e.g. T5) the casting leads to a drastic performance degradation.
This can be fixed by manually force-casting some modules to `float32`. For FLAN-T5, @larsmennen and @navjotts have found that keeping only these weights in `fp32` makes it possible to run the largest models in fp16 or int8 with no performance degradation.
This PR introduces a new utility in the `from_pretrained` method, named `keep_in_fp32_modules`, that partially addresses this issue.
How does this utility work? For T5:
```python
import torch
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", torch_dtype=torch.float16, keep_in_fp32_modules=["wo"])
print(model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype)
>>> torch.float32
```
When using `keep_in_fp32_modules`, `low_cpu_mem_usage` needs to be force-set to `True`. This is because if `low_cpu_mem_usage=False`, it is PyTorch's `_load_from_state_dict` function that is called under the hood on each sub-module. That function calls PyTorch's `copy_`, which keeps the destination tensor in its native `dtype` regardless of the `dtype` of the source tensor:
```python
import torch
param = torch.Tensor([0.1, 0.2, 0.3]).to(torch.float16)
to_copy_param = torch.Tensor([0.2, 0.1, 0.3]).to(torch.float32)
param.copy_(to_copy_param)
print(param.dtype)
>>> torch.float16
```
Keeping this as a draft for now, as this utility needs to be manually patched with fixes such as https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429, otherwise users will encounter issues with incompatible `dtype`s between inputs and weights.
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20683/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20683",
"html_url": "https://github.com/huggingface/transformers/pull/20683",
"diff_url": "https://github.com/huggingface/transformers/pull/20683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20683.patch",
"merged_at": 1670929198000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20682
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20682/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20682/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20682/events
|
https://github.com/huggingface/transformers/pull/20682
| 1,484,863,062
|
PR_kwDOCUB6oc5EzWgb
| 20,682
|
Feature: Adding support for MultiWorkerMirroredStrategy in TensorFlow Training Args
|
{
"login": "Lokiiiiii",
"id": 36520926,
"node_id": "MDQ6VXNlcjM2NTIwOTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/36520926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lokiiiiii",
"html_url": "https://github.com/Lokiiiiii",
"followers_url": "https://api.github.com/users/Lokiiiiii/followers",
"following_url": "https://api.github.com/users/Lokiiiiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Lokiiiiii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lokiiiiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lokiiiiii/subscriptions",
"organizations_url": "https://api.github.com/users/Lokiiiiii/orgs",
"repos_url": "https://api.github.com/users/Lokiiiiii/repos",
"events_url": "https://api.github.com/users/Lokiiiiii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lokiiiiii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20682). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
# What does this PR do?
Adding support for MultiWorkerMirroredStrategy in the TensorFlow training arguments. This is an existing, stable distribution strategy provided by TensorFlow.
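As a hedged sketch of where this strategy fits (the helper name is invented; the real `TFTrainingArguments` logic differs), multi-worker selection is typically driven by the `TF_CONFIG` environment variable that describes the cluster:

```python
import json
import os

def pick_strategy_name(env=os.environ):
    """Hypothetical helper: choose a tf.distribute strategy from TF_CONFIG."""
    cfg = env.get("TF_CONFIG")
    if not cfg:
        return "mirrored"                  # single host, possibly multi-GPU
    workers = json.loads(cfg).get("cluster", {}).get("worker", [])
    if len(workers) > 1:
        # Maps to tf.distribute.MultiWorkerMirroredStrategy()
        return "multi_worker_mirrored"
    return "mirrored"

env = {"TF_CONFIG": json.dumps({
    "cluster": {"worker": ["host1:2222", "host2:2222"]},
    "task": {"type": "worker", "index": 0},
})}
assert pick_strategy_name(env) == "multi_worker_mirrored"
```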
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @jplu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20682/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20682",
"html_url": "https://github.com/huggingface/transformers/pull/20682",
"diff_url": "https://github.com/huggingface/transformers/pull/20682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20682.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20681
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20681/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20681/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20681/events
|
https://github.com/huggingface/transformers/pull/20681
| 1,484,845,318
|
PR_kwDOCUB6oc5EzSgB
| 20,681
|
Whitelist Transformers private method in DummyObject
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You can't do that kind of check without triggerring recursion errors (cause it calls methods like `__getattribute__` inside that method ;-) )."
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #20671
As reported in #20671, calling `AutoModel.from_config` on a model with a missing specific soft dependency does not raise the appropriate error. This is because this method ends up calling `ModelClass._from_config`, and the `DummyObject` class does not raise the error on private attributes (essentially because the `__xxx__` attributes need to stay untouched). This PR whitelists `_from_config` to fix the issue.
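A minimal sketch of this mechanism (simplified; the stand-in `FlaxModel` dummy and the error message are illustrative, not the exact `transformers.utils` code):

```python
class DummyObject(type):
    """Metaclass for dummy placeholder classes of missing soft dependencies."""

    def __getattribute__(cls, key):
        # Let private attributes pass through so internals like __mro__ keep
        # working, except the whitelisted `_from_config`, which must raise.
        if key.startswith("_") and key != "_from_config":
            return super().__getattribute__(key)
        raise ImportError(f"{cls.__name__} requires a missing backend: {cls._backends}")

class FlaxModel(metaclass=DummyObject):
    _backends = ["flax"]

# Public access raises, as it did before this PR:
try:
    FlaxModel.from_pretrained
    public_raised = False
except ImportError:
    public_raised = True

# After this PR, the private `_from_config` raises too:
try:
    FlaxModel._from_config
    private_raised = False
except ImportError:
    private_raised = True

assert public_raised and private_raised
```

Broader checks on private names are not possible here: `__getattribute__` itself relies on attribute lookups like `__name__`, so blocking all privates would trigger recursion errors, which is why only `_from_config` is whitelisted.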
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20681/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20681",
"html_url": "https://github.com/huggingface/transformers/pull/20681",
"diff_url": "https://github.com/huggingface/transformers/pull/20681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20681.patch",
"merged_at": 1670516351000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20680
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20680/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20680/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20680/events
|
https://github.com/huggingface/transformers/pull/20680
| 1,484,790,811
|
PR_kwDOCUB6oc5EzGOj
| 20,680
|
Fix expected values for TF-ESM tests
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
MEMBER
| null |
I computed the expected values for these tests on my local machine with TF32 enabled - my bad! This replaces them with the correct float32 expected outputs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20680/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20680",
"html_url": "https://github.com/huggingface/transformers/pull/20680",
"diff_url": "https://github.com/huggingface/transformers/pull/20680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20680.patch",
"merged_at": 1670513170000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20679
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20679/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20679/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20679/events
|
https://github.com/huggingface/transformers/pull/20679
| 1,484,748,443
|
PR_kwDOCUB6oc5Ey8t3
| 20,679
|
[`ViTHybrid`] Fix `accelerate` slow tests
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the failing `ViTHybrid` `accelerate` tests.
Uses the same procedure as `DPTHybrid` for `backbone_featmap_shape` - now all slow tests should pass :-)
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20679/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20679",
"html_url": "https://github.com/huggingface/transformers/pull/20679",
"diff_url": "https://github.com/huggingface/transformers/pull/20679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20679.patch",
"merged_at": 1670517573000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20678
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20678/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20678/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20678/events
|
https://github.com/huggingface/transformers/pull/20678
| 1,484,715,822
|
PR_kwDOCUB6oc5Ey1bY
| 20,678
|
added model resources for CPMAnt
|
{
"login": "pioliverse",
"id": 119836898,
"node_id": "U_kgDOBySQ4g",
"avatar_url": "https://avatars.githubusercontent.com/u/119836898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pioliverse",
"html_url": "https://github.com/pioliverse",
"followers_url": "https://api.github.com/users/pioliverse/followers",
"following_url": "https://api.github.com/users/pioliverse/following{/other_user}",
"gists_url": "https://api.github.com/users/pioliverse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pioliverse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pioliverse/subscriptions",
"organizations_url": "https://api.github.com/users/pioliverse/orgs",
"repos_url": "https://api.github.com/users/pioliverse/repos",
"events_url": "https://api.github.com/users/pioliverse/events{/privacy}",
"received_events_url": "https://api.github.com/users/pioliverse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there, thanks for your PR! It contains a lot of modifications that have nothing to do with the title (bad rebase?) so you might need to open a fresh PR :-)",
"> Hi there, thanks for your PR! It contains a lot of modifications that have nothing to do with the title (bad rebase?) so you might need to open a fresh PR :-)\r\nShould I pass all checks?\r\n"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20678/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20678",
"html_url": "https://github.com/huggingface/transformers/pull/20678",
"diff_url": "https://github.com/huggingface/transformers/pull/20678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20678.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20677
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20677/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20677/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20677/events
|
https://github.com/huggingface/transformers/pull/20677
| 1,484,713,457
|
PR_kwDOCUB6oc5Ey041
| 20,677
|
Bump certifi from 2021.10.8 to 2022.12.7 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
Bumps [certifi](https://github.com/certifi/python-certifi) from 2021.10.8 to 2022.12.7.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/9e9e840925d7b8e76c76fdac1fab7e6e88c1c3b8"><code>9e9e840</code></a> 2022.12.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b81bdb269f1edb791bcd4ec8a9d0c053758f961a"><code>b81bdb2</code></a> 2022.09.24</li>
<li><a href="https://github.com/certifi/python-certifi/commit/939a28ffc57b1613770f572b584745c7b6d43e7d"><code>939a28f</code></a> 2022.09.14</li>
<li><a href="https://github.com/certifi/python-certifi/commit/aca828a78e73235a513dff9ebc181a47ef7dbf7b"><code>aca828a</code></a> 2022.06.15.2</li>
<li><a href="https://github.com/certifi/python-certifi/commit/de0eae12a6d5794a4c1e33052af6717707ce1fcc"><code>de0eae1</code></a> Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b8eb5e9af9143b22b7f651942b393e369ed4c52a"><code>b8eb5e9</code></a> 2022.06.15.1</li>
<li><a href="https://github.com/certifi/python-certifi/commit/47fb7ab715965684e035292d2ad3386aabdc4d25"><code>47fb7ab</code></a> Fix deprecation warning on Python 3.11 (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/199">#199</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b0b48e059995f455ac1e79b3ad373ad4ef355516"><code>b0b48e0</code></a> fixes <a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/198">#198</a> -- update link in license</li>
<li><a href="https://github.com/certifi/python-certifi/commit/9d514b4cad79357071c89d7dc4dc1b4df72bb997"><code>9d514b4</code></a> 2022.06.15</li>
<li><a href="https://github.com/certifi/python-certifi/commit/4151e8849481f396537c34812068e89b32731e52"><code>4151e88</code></a> Add py.typed to MANIFEST.in to package in sdist (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/196">#196</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2021.10.08...2022.12.07">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20677/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20677",
"html_url": "https://github.com/huggingface/transformers/pull/20677",
"diff_url": "https://github.com/huggingface/transformers/pull/20677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20677.patch",
"merged_at": 1670516111000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20676
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20676/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20676/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20676/events
|
https://github.com/huggingface/transformers/pull/20676
| 1,484,673,775
|
PR_kwDOCUB6oc5Eyr_O
| 20,676
|
fix text config and model loading.
|
{
"login": "jongjyh",
"id": 37979232,
"node_id": "MDQ6VXNlcjM3OTc5MjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jongjyh",
"html_url": "https://github.com/jongjyh",
"followers_url": "https://api.github.com/users/jongjyh/followers",
"following_url": "https://api.github.com/users/jongjyh/following{/other_user}",
"gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions",
"organizations_url": "https://api.github.com/users/jongjyh/orgs",
"repos_url": "https://api.github.com/users/jongjyh/repos",
"events_url": "https://api.github.com/users/jongjyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/jongjyh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20676). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20676/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20676",
"html_url": "https://github.com/huggingface/transformers/pull/20676",
"diff_url": "https://github.com/huggingface/transformers/pull/20676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20676.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20675
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20675/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20675/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20675/events
|
https://github.com/huggingface/transformers/pull/20675
| 1,484,622,938
|
PR_kwDOCUB6oc5Eygfd
| 20,675
|
[Backbones] Improve out features
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR makes sure backbones by default return the feature map of the last stage in case `config.out_features = None`.
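For illustration, a hypothetical sketch of the default-resolution logic this PR describes (the names `resolve_out_features` and `stage_names` are assumptions for this sketch, not the actual transformers implementation):

```python
def resolve_out_features(out_features, stage_names):
    """Pick which backbone stages to return feature maps for."""
    # When the user leaves config.out_features = None, fall back to
    # the last stage only, as described above.
    if out_features is None:
        return [stage_names[-1]]
    return out_features

# Example: a ResNet-style backbone with a stem plus four stages.
print(resolve_out_features(None, ["stem", "stage1", "stage2", "stage3", "stage4"]))
```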
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20675/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20675",
"html_url": "https://github.com/huggingface/transformers/pull/20675",
"diff_url": "https://github.com/huggingface/transformers/pull/20675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20675.patch",
"merged_at": 1670573693000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20674
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20674/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20674/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20674/events
|
https://github.com/huggingface/transformers/issues/20674
| 1,484,571,126
|
I_kwDOCUB6oc5YfMH2
| 20,674
|
How to write a custom configuration for hugging face model for Token Classification
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
**Model description**
I add a simple custom `pytorch-crf` layer on top of a `TokenClassification` model for `NER`, to make the model more robust.
When I train the model, I get the following error:

```
***** Running training *****
  Num examples = 4
  Num Epochs = 2
  Instantaneous batch size per device = 2
  Total train batch size (w. parallel, distributed & accumulation) = 2
  Gradient Accumulation steps = 1
  Total optimization steps = 4
TypeError: __init__() missing 3 required positional arguments: 'id2label', 'label2id', and 'num_labels'
```
**Code**
```python
from torchcrf import CRF

model_checkpoint = "spanbert"
tokenizer = BertTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)
bert_model = BertForTokenClassification.from_pretrained(
    model_checkpoint, id2label=id2label, label2id=label2id)
bert_model.config.output_hidden_states = True

class BertClassifierConfig(PretrainedConfig):
    model_type = "BertForTokenClassification"

    def __init__(self, id2label, label2id, num_labels, **kwargs):
        self.num_labels = num_labels
        self.id2label = id2label
        self.label2id = label2id
        self.output_hidden_states = True
        super().__init__(**kwargs)
```
**Model**
```python
class BertForTokenClassification(PreTrainedModel):
    config_class = BertClassifierConfig

    def __init__(self, config, bert_model, num_labels):
        super(BertForTokenClassification, self).__init__(config)
        self.bert = bert_model
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(768, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        sequence_output = torch.stack((outputs[1][-1], outputs[1][-2],
                                       outputs[1][-3], outputs[1][-4])).mean(dim=0)
        sequence_output = self.dropout(sequence_output)
        emission = self.classifier(sequence_output)  # [32, 256, 17]
        labels = labels.reshape(attention_mask.size()[0], attention_mask.size()[1])
        if labels is not None:
            loss = -self.crf(log_soft(emission, 2), labels,
                             mask=attention_mask.type(torch.uint8), reduction='mean')
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return [loss, prediction]
        else:
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return prediction
```
**Training setup**
```python
configuration = BertClassifierConfig(id2label, label2id, num_labels=len(label2id))
model = BertForTokenClassification(configuration, bert_model, num_labels=len(label2id))
model.to(device)

args = TrainingArguments(
    "test0000",
    # evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=2,
    weight_decay=0.01,
    per_device_train_batch_size=2,
    # per_device_eval_batch_size=32,
    fp16=True,
    # bf16=True  # Ampere GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    # eval_dataset=train_data,
    # data_collator=data_collator,
    # compute_metrics=compute_metrics,
    tokenizer=tokenizer)
```
**Saving**
```python
trainer.train()
trainer.save_model("modeltest")
AutoConfig.register("BertForTokenClassification", BertClassifierConfig)
AutoModel.register(BertClassifierConfig, BertForTokenClassification)
```
**ERROR**
```
***** Running training *****
  Num examples = 4
  Num Epochs = 2
  Instantaneous batch size per device = 2
  Total train batch size (w. parallel, distributed & accumulation) = 2
  Gradient Accumulation steps = 1
  Total optimization steps = 4
TypeError: __init__() missing 3 required positional arguments: 'id2label', 'label2id', and 'num_labels'
```
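A likely root cause, shown as a minimal hypothetical sketch (pure Python, not the transformers internals verbatim): a saved config is typically rebuilt by instantiating the config class and then filling in attributes from a serialized dict, so a config `__init__` with *required positional* arguments cannot be re-instantiated and raises exactly this `TypeError`. Giving every custom argument a keyword default avoids it:

```python
# Sketch of why required positional config args break reloading.
# `rebuild` hypothetically mimics how a framework restores a config from a dict.

class BrokenConfig:
    def __init__(self, id2label, label2id, num_labels, **kwargs):  # required positionals
        self.id2label, self.label2id, self.num_labels = id2label, label2id, num_labels

class FixedConfig:
    def __init__(self, id2label=None, label2id=None, num_labels=2, **kwargs):  # defaults
        self.id2label, self.label2id, self.num_labels = id2label, label2id, num_labels

def rebuild(cls, config_dict):
    # Instantiate with no arguments, then restore attributes from the dict.
    obj = cls()  # <- BrokenConfig raises TypeError here
    for k, v in config_dict.items():
        setattr(obj, k, v)
    return obj

d = {"id2label": {0: "O"}, "label2id": {"O": 0}, "num_labels": 1}
try:
    rebuild(BrokenConfig, d)
except TypeError as e:
    print("broken:", e)
cfg = rebuild(FixedConfig, d)
print("fixed, num_labels =", cfg.num_labels)
```

The same reasoning applies to the model class: a `PreTrainedModel` subclass whose `__init__` takes extra required arguments (`bert_model`, `num_labels`) cannot be rebuilt from a checkpoint either, so accepting only `config` and reading those values from it is the usual pattern.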
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20674/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20673
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20673/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20673/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20673/events
|
https://github.com/huggingface/transformers/pull/20673
| 1,484,512,161
|
PR_kwDOCUB6oc5EyHXf
| 20,673
|
Bump certifi from 2020.6.20 to 2022.12.7 in /examples/research_projects/visual_bert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
[//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2020.6.20 to 2022.12.7.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/9e9e840925d7b8e76c76fdac1fab7e6e88c1c3b8"><code>9e9e840</code></a> 2022.12.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b81bdb269f1edb791bcd4ec8a9d0c053758f961a"><code>b81bdb2</code></a> 2022.09.24</li>
<li><a href="https://github.com/certifi/python-certifi/commit/939a28ffc57b1613770f572b584745c7b6d43e7d"><code>939a28f</code></a> 2022.09.14</li>
<li><a href="https://github.com/certifi/python-certifi/commit/aca828a78e73235a513dff9ebc181a47ef7dbf7b"><code>aca828a</code></a> 2022.06.15.2</li>
<li><a href="https://github.com/certifi/python-certifi/commit/de0eae12a6d5794a4c1e33052af6717707ce1fcc"><code>de0eae1</code></a> Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b8eb5e9af9143b22b7f651942b393e369ed4c52a"><code>b8eb5e9</code></a> 2022.06.15.1</li>
<li><a href="https://github.com/certifi/python-certifi/commit/47fb7ab715965684e035292d2ad3386aabdc4d25"><code>47fb7ab</code></a> Fix deprecation warning on Python 3.11 (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/199">#199</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b0b48e059995f455ac1e79b3ad373ad4ef355516"><code>b0b48e0</code></a> fixes <a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/198">#198</a> -- update link in license</li>
<li><a href="https://github.com/certifi/python-certifi/commit/9d514b4cad79357071c89d7dc4dc1b4df72bb997"><code>9d514b4</code></a> 2022.06.15</li>
<li><a href="https://github.com/certifi/python-certifi/commit/4151e8849481f396537c34812068e89b32731e52"><code>4151e88</code></a> Add py.typed to MANIFEST.in to package in sdist (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/196">#196</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2020.06.20...2022.12.07">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20673/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20673",
"html_url": "https://github.com/huggingface/transformers/pull/20673",
"diff_url": "https://github.com/huggingface/transformers/pull/20673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20673.patch",
"merged_at": 1670516143000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20672
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20672/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20672/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20672/events
|
https://github.com/huggingface/transformers/pull/20672
| 1,484,469,030
|
PR_kwDOCUB6oc5Ex9jx
| 20,672
|
Bump certifi from 2020.6.20 to 2022.12.7 in /examples/research_projects/lxmert
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
Bumps [certifi](https://github.com/certifi/python-certifi) from 2020.6.20 to 2022.12.7.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/9e9e840925d7b8e76c76fdac1fab7e6e88c1c3b8"><code>9e9e840</code></a> 2022.12.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b81bdb269f1edb791bcd4ec8a9d0c053758f961a"><code>b81bdb2</code></a> 2022.09.24</li>
<li><a href="https://github.com/certifi/python-certifi/commit/939a28ffc57b1613770f572b584745c7b6d43e7d"><code>939a28f</code></a> 2022.09.14</li>
<li><a href="https://github.com/certifi/python-certifi/commit/aca828a78e73235a513dff9ebc181a47ef7dbf7b"><code>aca828a</code></a> 2022.06.15.2</li>
<li><a href="https://github.com/certifi/python-certifi/commit/de0eae12a6d5794a4c1e33052af6717707ce1fcc"><code>de0eae1</code></a> Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b8eb5e9af9143b22b7f651942b393e369ed4c52a"><code>b8eb5e9</code></a> 2022.06.15.1</li>
<li><a href="https://github.com/certifi/python-certifi/commit/47fb7ab715965684e035292d2ad3386aabdc4d25"><code>47fb7ab</code></a> Fix deprecation warning on Python 3.11 (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/199">#199</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/b0b48e059995f455ac1e79b3ad373ad4ef355516"><code>b0b48e0</code></a> fixes <a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/198">#198</a> -- update link in license</li>
<li><a href="https://github.com/certifi/python-certifi/commit/9d514b4cad79357071c89d7dc4dc1b4df72bb997"><code>9d514b4</code></a> 2022.06.15</li>
<li><a href="https://github.com/certifi/python-certifi/commit/4151e8849481f396537c34812068e89b32731e52"><code>4151e88</code></a> Add py.typed to MANIFEST.in to package in sdist (<a href="https://github-redirect.dependabot.com/certifi/python-certifi/issues/196">#196</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2020.06.20...2022.12.07">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20672/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20672",
"html_url": "https://github.com/huggingface/transformers/pull/20672",
"diff_url": "https://github.com/huggingface/transformers/pull/20672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20672.patch",
"merged_at": 1670516094000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20671
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20671/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20671/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20671/events
|
https://github.com/huggingface/transformers/issues/20671
| 1,484,375,542
|
I_kwDOCUB6oc5YecX2
| 20,671
|
Calling `AutoModel.from_config()` method for a model requiring timm does not raise ImportError although it should
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed, I can see why and it's an easy fix. Will make a PR in a couple of hours!",
"Can you try the PR mentioned above?",
"Works well thanks for the fix!"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0.dev0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu)
- Jax version: 0.3.14
- JaxLib version: 0.3.14
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`pip uninstall timm`, and then:
```python
from transformers import AutoModel, AutoConfig
cfg = AutoConfig.from_pretrained("hf-internal-testing/tiny-random-detr")
model = AutoModel.from_config(cfg)
```
raising:
```
Traceback (most recent call last):
File "<tmp 1>", line 18, in <module>
model = AutoModel.from_config(cfg)
File "/home/fxmarty/hf_internship/transformers/src/transformers/models/auto/auto_factory.py", line 410, in from_config
return model_class._from_config(config, **kwargs)
File "/home/fxmarty/hf_internship/transformers/src/transformers/utils/import_utils.py", line 1008, in __getattribute__
return super().__getattribute__(key)
AttributeError: type object 'DetrModel' has no attribute '_from_config'
```
### Expected behavior
It should raise:
```
ImportError:
DetrModel requires the timm library but it was not found in your environment. You can install it with pip:
`pip install timm`. Please note that you may need to restart your runtime after installation.
```
as in https://github.com/huggingface/transformers/blob/main/src/transformers/utils/dummy_timm_and_vision_objects.py#L78
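
The expected `ImportError` comes from the library's dummy-object pattern: when a backend such as timm is missing, the model class is replaced by a placeholder whose attribute access raises. A minimal sketch of that pattern (illustrative, not the actual transformers source; the point is that `_from_config` must still go through the backend check instead of falling back to a plain attribute lookup):

```python
class DummyObject(type):
    """Metaclass for placeholder classes that stand in for a model
    whose required backend (here, timm) is not installed."""

    def __getattribute__(cls, key):
        # Private/dunder attributes pass through so Python internals work,
        # but _from_config must still trigger the backend check; skipping
        # it is exactly the gap reported in this issue.
        if key.startswith("_") and key != "_from_config":
            return super().__getattribute__(key)
        raise ImportError(
            f"{cls.__name__} requires the timm library but it was not found in "
            "your environment. You can install it with pip: `pip install timm`."
        )


class DetrModel(metaclass=DummyObject):
    _backends = ["timm", "vision"]
```

With this guard, `AutoModel.from_config` reaching `DetrModel._from_config` raises the informative `ImportError` rather than an `AttributeError`.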
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20671/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20670
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20670/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20670/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20670/events
|
https://github.com/huggingface/transformers/issues/20670
| 1,484,355,681
|
I_kwDOCUB6oc5YeXhh
| 20,670
|
T5 for Q&A produces truncated sentence
|
{
"login": "junyongyou",
"id": 13484072,
"node_id": "MDQ6VXNlcjEzNDg0MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13484072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junyongyou",
"html_url": "https://github.com/junyongyou",
"followers_url": "https://api.github.com/users/junyongyou/followers",
"following_url": "https://api.github.com/users/junyongyou/following{/other_user}",
"gists_url": "https://api.github.com/users/junyongyou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junyongyou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junyongyou/subscriptions",
"organizations_url": "https://api.github.com/users/junyongyou/orgs",
"repos_url": "https://api.github.com/users/junyongyou/repos",
"events_url": "https://api.github.com/users/junyongyou/events{/privacy}",
"received_events_url": "https://api.github.com/users/junyongyou/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It is so strange that the format of the code does not look correct, even though I have put it in backticks.",
"Hi there, you should use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only :-)",
"> Hi there, you should use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only :-)\r\n\r\nHi, I am so sorry about it. I have actually asked the same question in the forums, but didn't get answers. So I just want to try my luck here. I will close the issue."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions.
For example, I set max_length, max_input_length, and max_output_length all to 128.
How should I deal with those long answers? I just left them as is and let the T5Tokenizer handle them automatically. I assume the tokenizer simply truncates an answer at the 128th (or 127th) word. Is it possible to manually split an answer into parts of 128 words each, so that all these sub-answers serve as separate answers to the same question?
Another question is that I get incomplete (truncated) answers when using the fine-tuned model for inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add `</s>` at the end of texts when fine-tuning T5. I followed that but then got a warning that duplicated `</s>` tokens were found. I am assuming this is because the tokenizer truncates the answer text, so `</s>` is missing from the truncated answer and the end token is not produced in the predicted answer. However, I am not sure. Can anybody point out how to address this issue?
Any suggestions are highly appreciated.
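The manual-splitting idea above can be sketched with a small helper that chunks a long answer into windows of at most 128 words and pairs each chunk with the same question. This is only an illustration of the preprocessing step (word-level chunking is an assumption, not part of the T5 tokenizer):

```python
def split_answer(question, answer, max_len=128):
    """Yield (question, answer_chunk) pairs, each chunk at most
    max_len words long, so every chunk becomes its own training example."""
    words = answer.split()
    for start in range(0, len(words), max_len):
        chunk = " ".join(words[start:start + max_len])
        yield question, chunk
```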
`
import pytorch_lightning as pl
from torch.utils.data import DataLoader
import torch
import numpy as np
import time
from pathlib import Path
from transformers import (
Adafactor,
T5ForConditionalGeneration,
T5Tokenizer,
get_linear_schedule_with_warmup
)
from torch.utils.data import RandomSampler
from question_answering.utils import *
class T5FineTuner(pl.LightningModule):
def __init__(self, hyparams):
super(T5FineTuner, self).__init__()
self.hyparams = hyparams
self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path)
self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path)
if self.hyparams.freeze_embeds:
self.freeze_embeds()
if self.hyparams.freeze_encoder:
self.freeze_params(self.model.get_encoder())
# assert_all_frozen()
self.step_count = 0
self.output_dir = Path(self.hyparams.output_dir)
n_observations_per_split = {
'train': self.hyparams.n_train,
'validation': self.hyparams.n_val,
'test': self.hyparams.n_test
}
self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()}
self.em_score_list = []
self.subset_score_list = []
data_folder = r'C:\Datasets\MedQuAD-master'
self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder)
def freeze_params(self, model):
for param in model.parameters():
param.requires_grad = False
def freeze_embeds(self):
try:
self.freeze_params(self.model.model.shared)
for d in [self.model.model.encoder, self.model.model.decoder]:
self.freeze_params(d.embed_positions)
self.freeze_params(d.embed_tokens)
except AttributeError:
self.freeze_params(self.model.shared)
for d in [self.model.encoder, self.model.decoder]:
self.freeze_params(d.embed_tokens)
def lmap(self, f, x):
return list(map(f, x))
def is_logger(self):
return self.trainer.proc_rank <= 0
def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None):
return self.model(
input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
decoder_attention_mask=decoder_attention_mask,
labels=labels
)
def _step(self, batch):
labels = batch['target_ids']
labels[labels[:, :] == self.tokenizer.pad_token_id] = -100
outputs = self(
input_ids = batch['source_ids'],
attention_mask=batch['source_mask'],
labels=labels,
decoder_attention_mask=batch['target_mask']
)
loss = outputs[0]
return loss
def ids_to_clean_text(self, generated_ids):
gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
return self.lmap(str.strip, gen_text)
def _generative_step(self, batch):
t0 = time.time()
generated_ids = self.model.generate(
batch["source_ids"],
attention_mask=batch["source_mask"],
use_cache=True,
decoder_attention_mask=batch['target_mask'],
max_length=128,
num_beams=2,
early_stopping=True
)
preds = self.ids_to_clean_text(generated_ids)
targets = self.ids_to_clean_text(batch["target_ids"])
gen_time = (time.time() - t0) / batch["source_ids"].shape[0]
loss = self._step(batch)
base_metrics = {'val_loss': loss}
summ_len = np.mean(self.lmap(len, generated_ids))
base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets)
em_score, subset_match_score = calculate_scores(preds, targets)
self.em_score_list.append(em_score)
self.subset_score_list.append(subset_match_score)
em_score = torch.tensor(em_score, dtype=torch.float32)
subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32)
base_metrics.update(em_score=em_score, subset_match_score=subset_match_score)
# rouge_results = self.rouge_metric.compute()
# rouge_dict = self.parse_score(rouge_results)
return base_metrics
def training_step(self, batch, batch_idx):
loss = self._step(batch)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def training_epoch_end(self, outputs):
avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean()
tensorboard_logs = {'avg_train_loss': avg_train_loss}
# return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs}
def validation_step(self, batch, batch_idx):
return self._generative_step(batch)
def validation_epoch_end(self, outputs):
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
if len(self.em_score_list) <= 2:
average_em_score = sum(self.em_score_list) / len(self.em_score_list)
average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list)
else:
latest_em_score = self.em_score_list[:-2]
latest_subset_score = self.subset_score_list[:-2]
average_em_score = sum(latest_em_score) / len(latest_em_score)
average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score)
average_em_score = torch.tensor(average_em_score, dtype=torch.float32)
average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32)
tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score)
self.target_gen = []
self.prediction_gen = []
return {
'avg_val_loss': avg_loss,
'em_score': average_em_score,
'subset_match_socre': average_subset_match_score,
'log': tensorboard_logs,
'progress_bar': tensorboard_logs
}
def configure_optimizers(self):
model = self.model
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": self.hyparams.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate, scale_parameter=False,
relative_step=False)
self.opt = optimizer
return [optimizer]
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None,
on_tpu=False, using_native_amp=False, using_lbfgs=False):
optimizer.step(closure=optimizer_closure)
optimizer.zero_grad()
self.lr_scheduler.step()
def get_tqdm_dict(self):
tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
return tqdm_dict
def train_dataloader(self):
n_samples = self.n_obs['train']
train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data, num_samples=n_samples,
args=self.hyparams)
sampler = RandomSampler(train_dataset)
dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=self.hyparams.train_batch_size,
drop_last=True, num_workers=4)
# t_total = (
# (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu)))
# // self.hyparams.gradient_accumulation_steps
# * float(self.hyparams.num_train_epochs)
# )
t_total = 100000
scheduler = get_linear_schedule_with_warmup(
self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total
)
self.lr_scheduler = scheduler
return dataloader
def val_dataloader(self):
n_samples = self.n_obs['validation']
validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data, num_samples=n_samples,
args=self.hyparams)
sampler = RandomSampler(validation_dataset)
return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size, sampler=sampler, num_workers=4)
def test_dataloader(self):
n_samples = self.n_obs['test']
test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data, num_samples=n_samples, args=self.hyparams)
return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4)
def on_save_checkpoint(self, checkpoint):
save_path = self.output_dir.joinpath("best_tfmr")
self.model.config.save_step = self.step_count
self.model.save_pretrained(save_path)
self.tokenizer.save_pretrained(save_path)
import os
import argparse
import pytorch_lightning as pl
from question_answering.t5_closed_book import T5FineTuner
if __name__ == '__main__':
os.environ['REQUESTS_CA_BUNDLE'] = r'C:\ProgramData\NORCE\cer\NORCE_CA.cer'
# os.environ["PL_TORCH_DISTRIBUTED_BACKEND"] = "gloo"
# nltk.download('punkt')
args_dict = dict(
output_dir="", # path to save the checkpoints
model_name_or_path='t5-large',
tokenizer_name_or_path='t5-large',
max_input_length=128,
max_output_length=256,
freeze_encoder=False,
freeze_embeds=False,
learning_rate=1e-5,
weight_decay=0.0,
adam_epsilon=1e-8,
warmup_steps=0,
train_batch_size=4,
eval_batch_size=4,
num_train_epochs=2,
gradient_accumulation_steps=10,
n_gpu=1,
resume_from_checkpoint=None,
val_check_interval=0.5,
n_val=4000,
n_train=-1,
n_test=-1,
early_stop_callback=False,
fp_16=False, # if you want to enable 16-bit training then install apex and set this to true
opt_level='O1',
# you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties
max_grad_norm=1.0, # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default
seed=101,
)
args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100,
'train_batch_size': 8, 'eval_batch_size': 8, 'learning_rate': 1e-3})
# 'resume_from_checkpoint': 't5_trivia_qa_closedbook/checkpointepoch=53.ckpt'})
args = argparse.Namespace(**args_dict)
checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score", mode="max", save_top_k=1)
## If resuming from checkpoint, add an arg resume_from_checkpoint
train_params = dict(
accumulate_grad_batches=args.gradient_accumulation_steps,
gpus=args.n_gpu,
max_epochs=args.num_train_epochs,
# early_stop_callback=False,
precision=16 if args.fp_16 else 32,
# amp_level=args.opt_level,
# resume_from_checkpoint=args.resume_from_checkpoint,
gradient_clip_val=args.max_grad_norm,
checkpoint_callback=checkpoint_callback,
val_check_interval=args.val_check_interval,
# accelerator='dp'
# logger=wandb_logger,
# callbacks=[LoggingCallback()],
)
model = T5FineTuner(args)
trainer = pl.Trainer(**train_params)
trainer.fit(model)
`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20670/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20669
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20669/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20669/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20669/events
|
https://github.com/huggingface/transformers/issues/20669
| 1,484,311,901
|
I_kwDOCUB6oc5YeM1d
| 20,669
|
Progress Bar for large model loading
|
{
"login": "vvvm23",
"id": 44398246,
"node_id": "MDQ6VXNlcjQ0Mzk4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvvm23",
"html_url": "https://github.com/vvvm23",
"followers_url": "https://api.github.com/users/vvvm23/followers",
"following_url": "https://api.github.com/users/vvvm23/following{/other_user}",
"gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions",
"organizations_url": "https://api.github.com/users/vvvm23/orgs",
"repos_url": "https://api.github.com/users/vvvm23/repos",
"events_url": "https://api.github.com/users/vvvm23/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvvm23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Sounds like a reasonable request. This should be done by the PR linked above if you want to try it!",
"I might be using this wrong, but I've taken the following steps and don't see any changes 🤔 :\r\n```\r\nfrom transformers import BloomForCausalLM\r\nimport torch\r\n\r\nmodel = BloomForCausalLM.from_pretrained('bigscience/bloom-7b1', cache_dir='bloom-7b1-ckpt', torch_dtype=torch.float16)\r\n```\r\nafter cloning `transformers`, running checkout on `large_model_progress`, then `pip install -e .`\r\n\r\nIs there some edge case with the use of `cache_dir`?",
"The PR has not been merged into the main branch yet, so you need to checkout the branch of the PR before trying.",
"I had built the correct branch, the issue was me not being patient enough, as the progress bar did eventually appear.\r\n\r\nSeems there is some processing going on between downloading and actually loading the shards. I am not sure what is being done, but the PR works ~\r\n\r\nLGTM 🤗 "
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### Feature request
Add progress bars for large model loading from cache files.
### Motivation
Most of the time, model loading time will be dominated by download speed. However, for very large models we will often first download the checkpoints, then during runtime simply load them from cache. For models like Bloom however, it can take upwards of 100 minutes to load the model into RAM. During this time, there is no feedback to the user, even with verbosity set to debug. This can be frustrating as the only way to check progress is by checking system utilisation through `top`.
### Your contribution
Happy to help if I am pointed to the relevant file or files! I don't think the progress bar would need to be extremely accurate, just some indication that something is happening.
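
The requested behavior could be sketched as a per-shard loading loop that reports progress; `load_shard` and the shard list below are hypothetical placeholders, not the actual transformers API:

```python
import sys

def load_with_progress(shard_paths, load_shard):
    """Load checkpoint shards one by one, printing a simple textual
    progress indicator to stderr after each shard."""
    state_dict = {}
    total = len(shard_paths)
    for i, path in enumerate(shard_paths, start=1):
        state_dict.update(load_shard(path))  # placeholder for the real loader
        sys.stderr.write(f"Loading checkpoint shards: {i}/{total}\r")
    sys.stderr.write("\n")
    return state_dict
```

In practice a `tqdm` bar over the shard list gives the same feedback; the point is only that each shard boundary is a natural place to update progress.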
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20669/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20669/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20668
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20668/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20668/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20668/events
|
https://github.com/huggingface/transformers/pull/20668
| 1,484,072,475
|
PR_kwDOCUB6oc5Ewi0j
| 20,668
|
Add AltCLIP
|
{
"login": "jongjyh",
"id": 37979232,
"node_id": "MDQ6VXNlcjM3OTc5MjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jongjyh",
"html_url": "https://github.com/jongjyh",
"followers_url": "https://api.github.com/users/jongjyh/followers",
"following_url": "https://api.github.com/users/jongjyh/following{/other_user}",
"gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions",
"organizations_url": "https://api.github.com/users/jongjyh/orgs",
"repos_url": "https://api.github.com/users/jongjyh/repos",
"events_url": "https://api.github.com/users/jongjyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/jongjyh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20668). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20668/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20668",
"html_url": "https://github.com/huggingface/transformers/pull/20668",
"diff_url": "https://github.com/huggingface/transformers/pull/20668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20668.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20667
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20667/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20667/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20667/events
|
https://github.com/huggingface/transformers/pull/20667
| 1,483,873,534
|
PR_kwDOCUB6oc5Ev1sn
| 20,667
|
Albert resource
|
{
"login": "JuheonChu",
"id": 35699839,
"node_id": "MDQ6VXNlcjM1Njk5ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuheonChu",
"html_url": "https://github.com/JuheonChu",
"followers_url": "https://api.github.com/users/JuheonChu/followers",
"following_url": "https://api.github.com/users/JuheonChu/following{/other_user}",
"gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions",
"organizations_url": "https://api.github.com/users/JuheonChu/orgs",
"repos_url": "https://api.github.com/users/JuheonChu/repos",
"events_url": "https://api.github.com/users/JuheonChu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuheonChu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Closing this issue and reopening the issue at [Issue 20697](https://github.com/huggingface/transformers/pull/20697)."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20055
## Before submitting
- This PR adds resources on the ALBERT model based on the materials outlined in #20055.
## Who can review?
@stevhliu
Co-authored by: @Adia Wu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20667/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20667",
"html_url": "https://github.com/huggingface/transformers/pull/20667",
"diff_url": "https://github.com/huggingface/transformers/pull/20667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20667.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20666
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20666/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20666/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20666/events
|
https://github.com/huggingface/transformers/issues/20666
| 1,483,747,072
|
I_kwDOCUB6oc5YcC8A
| 20,666
|
Generating with Flax fails when using padding
|
{
"login": "lhao499",
"id": 23612416,
"node_id": "MDQ6VXNlcjIzNjEyNDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/23612416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhao499",
"html_url": "https://github.com/lhao499",
"followers_url": "https://api.github.com/users/lhao499/followers",
"following_url": "https://api.github.com/users/lhao499/following{/other_user}",
"gists_url": "https://api.github.com/users/lhao499/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhao499/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhao499/subscriptions",
"organizations_url": "https://api.github.com/users/lhao499/orgs",
"repos_url": "https://api.github.com/users/lhao499/repos",
"events_url": "https://api.github.com/users/lhao499/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhao499/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi and @gante ",
"Hey @lhao499!\r\n\r\n> using builtin |endoftext| as pad token works with padding, but using customized token <|pad|> doesn't work\r\n\r\nNote that GPT2 was not pre-trained with padding tokens. We can use the `<|endoftext|>` token as a substitute, but really we should specify an [attention mask](https://huggingface.co/transformers/glossary.html#attention-mask) to the model so that it doesn't attend to padded indices, therefore ignoring the value of the token. It's good to see that you've done this in your Colab for the GPT2 examples! (`model.generate(**inputs, ...)`)\r\n\r\nIn using a customised token (such as `<|pad|>`), you are providing the model with a format entirely different to that seen during pre-training. We cannot expect our model to understand this different format given that it has never seen it before. This is likely the reason for the unexpected behaviour when using `<|pad|>` as the padding token.\r\n\r\nAs for the OPT generation, it looks like this is fixed by setting the padding side: https://colab.research.google.com/drive/1pcwTRU3snLjz8wTJx_Z5t4Z7IXnxHnDI?usp=sharing",
"Hi Sanchit, when attention mask is provided, it's expected that customized token won't be attended at all, so effectively there is no differences between pretraining and inference formats? \r\n\r\nFor the OPT, it looks like you linked my Colab, where the OPT generation does not work. ",
"Hey @lhao499! Sorry for the delay in getting back to you. I checked against PyTorch GPT-2, and we see the same phenomenon here, so we can exclude it as being a Flax specific issue: https://colab.research.google.com/drive/1qK2t8YNKLnX-oednqiVRDSFkFs5DPnY4#scrollTo=eGKOy0NbzCAW\r\n\r\ncc'ing @gante here who might be able to provide some more insight!\r\n\r\nContext: when we pass an attention mask for auto-regressive generation, it's expected that any padded tokens won't be attended to. Meaning we should be able to specify any arbitrary token as our pad token? \r\n\r\nFor GPT-2, we see that generation works when the pad token is set to the default pad token. But it breaks when we set it to some arbitrary pad token.",
"Hey @lhao499 @sanchit-gandhi 👋 \r\n\r\nIn practice, adding new tokens and using the model straight away has very unpredictable results that depend on the framework used. For instance, the current version of JAX has the problem flagged above. However, if you use an older version of JAX, everything seems to work fine 🤷 If you try to do the same in TF CPU it might work, whereas on TF GPU it will crash unless you explicitly expand the vocabulary. PT also works if you expand the vocabulary, although with bad results.\r\n\r\nAdding a new token corresponds to initializing a random entry in the embedding matrix, which has unforeseeable consequences. I highly advise against it unless you fine-tune the model afterward.\r\n\r\nSee [this colab](https://colab.research.google.com/drive/1Qly3125Q2happG1dGqdOpl1Q8MVNP_Nq?usp=sharing) for examples -- if you really want to go down this path, you might get away with an older Jax version ;)\r\n\r\nP.S.: this Jax finding was a happy coincidence, my local desktop env had an older version Jax version which happened to work 👀 That's how unreliable this strategy is.",
"Thanks @sanchit-gandhi @gante for looking into the issues. \r\n\r\nHi @gante, \r\n\r\nIt is a surprising finding that adding new tokens has such unpredictable results depends framework and hardware. Thanks for trying different frameworks. \r\n\r\nIt is a good strategy to resize the embedding matrix and fine-tune the model after adding new padding tokens. It would be great if transformers library could get rid of the strange results and/or raise errors. \r\n\r\nThe attention mask still doesn't work as expected though. For instance, in OPT, even using builtin <pad> padding token only leads to repeating`<s>`, e.g., `<pad></s>My cat is cute<s><s><s><s><s><s><s><s><s><s><s><s><s><s>`, as shown in the last block of https://colab.research.google.com/drive/1pcwTRU3snLjz8wTJx_Z5t4Z7IXnxHnDI?usp=sharing",
"👍 will have a look at OPT",
"@lhao499 there was indeed a problem with the attention masking in OPT, causing all but the longest input to fail. #21150 fixes it :)"
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.2 (cpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Generation with GPT-2 and OPT doesn't work when using a padding token.
Specifically, the issues are:
1. In GPT-2, using the builtin `<|endoftext|>` token as the pad token works with padding, but using a customized token `<|pad|>` doesn't work: the generation only repeats `!`, e.g., `<|pad|> My cat is cute!!!!!!!!!!!!!!!`.
2. In GPT-2, although `pad_token_id` is optional in the `generate()` function, one has to provide `pad_token_id=tokenizer.pad_token_id`, otherwise an error is raised.
3. In OPT, even using the builtin `<pad>` token as the padding token doesn't work: the generation only repeats `<s>`, e.g., `<pad></s>My cat is cute<s><s><s><s><s><s><s><s><s><s><s><s><s><s>`.
The colab to reproduce the problems 1-3 is [here](https://colab.research.google.com/drive/1pcwTRU3snLjz8wTJx_Z5t4Z7IXnxHnDI?usp=sharing).
A related issue to problem 2 is #18884.
### Expected behavior
Expect the generation works with padding. E.g., `<pad></s>My cat is super cute and I love her so much. I love her so much. I love`.
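The expectation in point 1 — that a masked pad position cannot influence the output — comes from how attention masking works. Here is a minimal pure-Python sketch (not the transformers implementation) of a masked softmax:

```python
import math

def masked_softmax(scores, attention_mask):
    # Masked (pad) positions get an effectively -inf score, so their
    # softmax weight is ~0 regardless of what the pad token's score was.
    masked = [s if m == 1 else -1e9 for s, m in zip(scores, attention_mask)]
    mx = max(masked)
    exps = [math.exp(s - mx) for s in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Two left-padded positions; their attention weight should be ~0.
weights = masked_softmax([5.0, 5.0, 1.0, 2.0, 3.0], [0, 0, 1, 1, 1])
```

With the mask applied, the pad positions get ~0 weight whatever their score, which is why in principle the choice of pad token id should not matter once an attention mask is supplied; the discussion in this issue instead attributes the observed failure to the randomly initialized embedding row of the newly added token.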
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20666/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20665
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20665/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20665/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20665/events
|
https://github.com/huggingface/transformers/issues/20665
| 1,483,505,819
|
I_kwDOCUB6oc5YbICb
| 20,665
|
Problem with recording beam_indices in beam_search
|
{
"login": "jicoder-nwpu",
"id": 46379875,
"node_id": "MDQ6VXNlcjQ2Mzc5ODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/46379875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jicoder-nwpu",
"html_url": "https://github.com/jicoder-nwpu",
"followers_url": "https://api.github.com/users/jicoder-nwpu/followers",
"following_url": "https://api.github.com/users/jicoder-nwpu/following{/other_user}",
"gists_url": "https://api.github.com/users/jicoder-nwpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jicoder-nwpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jicoder-nwpu/subscriptions",
"organizations_url": "https://api.github.com/users/jicoder-nwpu/orgs",
"repos_url": "https://api.github.com/users/jicoder-nwpu/repos",
"events_url": "https://api.github.com/users/jicoder-nwpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jicoder-nwpu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please follow the template of the issue. No one can help you without a description of the problem and a clear reproducer."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
some beam_indices become 0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
if return_dict_in_generate and output_scores:
    beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices))))
### Expected behavior
if return_dict_in_generate and output_scores:
    beam_indices = tuple((beam_indices[i] + (beam_idx[i],) for i in range(len(beam_indices))))
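To make the two variants concrete, here is a minimal pure-Python sketch (not the actual transformers code) of the ancestry bookkeeping: each beam keeps the tuple of parent-beam indices chosen so far, and because beams are reordered at every step, beam `i` continues the history of its selected parent `beam_idx[i]`:

```python
# Sketch only: each beam carries a tuple of the parent-beam indices
# picked at every past step. At each step, beam i continues from parent
# beam beam_idx[i], so its new ancestry is that parent's history plus
# the parent's index.

def update_ancestry(beam_indices, beam_idx):
    return tuple(beam_indices[beam_idx[i]] + (beam_idx[i],)
                 for i in range(len(beam_idx)))

history = ((), (), ())                         # 3 beams, no history yet
history = update_ancestry(history, [2, 0, 0])  # parents chosen at step 1
history = update_ancestry(history, [1, 1, 2])  # parents chosen at step 2
```

Under this sketch, indexing by `i` instead of `beam_idx[i]` would pair each new parent index with the wrong (pre-reordering) history, which may be why the original code indexes by the parent.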
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20665/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20664
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20664/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20664/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20664/events
|
https://github.com/huggingface/transformers/issues/20664
| 1,483,397,759
|
I_kwDOCUB6oc5Yatp_
| 20,664
|
[TF] Save finetuned-model without huggingface-hub login
|
{
"login": "goreng2",
"id": 45035457,
"node_id": "MDQ6VXNlcjQ1MDM1NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/45035457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goreng2",
"html_url": "https://github.com/goreng2",
"followers_url": "https://api.github.com/users/goreng2/followers",
"following_url": "https://api.github.com/users/goreng2/following{/other_user}",
"gists_url": "https://api.github.com/users/goreng2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goreng2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goreng2/subscriptions",
"organizations_url": "https://api.github.com/users/goreng2/orgs",
"repos_url": "https://api.github.com/users/goreng2/repos",
"events_url": "https://api.github.com/users/goreng2/events{/privacy}",
"received_events_url": "https://api.github.com/users/goreng2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
    "I also found that the finetuned model doesn't sync with the hub when given only 1 epoch or at the last epoch",
"Hi @goreng2, you're correct that right now the callback expects a HF login. This is because that callback is designed for uploading models to the hub. If you just want to save the model locally, you can try either:\r\n\r\n1) The [ModelCheckpoint callback](https://keras.io/api/callbacks/model_checkpoint/) in Keras to save the weights every epoch if you just want to save/resume training.\r\n\r\n2) The `model.save_pretrained()` method if you want to save the model locally after training and reload it with `from_pretrained` afterwards.\r\n\r\nAre you specifically interested in saving the model like `save_pretrained()` every epoch? We could add that, but let us know if you think that'd be useful for you first, or the solutions above are enough!",
"Hi @Rocketknight1 ! Thanks for your comment.\r\n\r\nI want to use Huggingface's `pipeline` API for inference. I think `pipeline` perhaps can receive only `.h5` model\r\n\r\nWhen I tried `ModelCheckpoint callback`, It returns `ckpt` files. It can't be used in `pipeline`.\r\nFor convert `ckpt` to `.h5`, I need to write model architecture (in my case `ELECTRA`) But It's so difficult and complex to me 😥\r\nI tried to convert `ckpt` to `pth (PyTorch)` But It doesn't work... Maybe [this code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py) only works in converting TF1 to PyTorch\r\n\r\nWhen I tried `model.save('my_model.h5')`, Error msg raised. Maybe Something format is not match\r\n\r\nI don't test `model.save_pretrained()` yet, It returns `.h5`?",
"Ah, yes. The `.ckpt` files from `ModelCheckpoint` are only useful for saving/resuming training, and you won't be able to use them in pipelines.\r\n\r\nThe way TF models on HuggingFace work is that they're built on top of Keras models. `model.save()` and `ModelCheckpoint` are both part of Keras. However, if you want to save the model to load with other HuggingFace tools, you should use `save_pretrained()`. This is our method and doesn't exist in base Keras models. It saves the model as `.h5`, but also adds a `config.json` that will allow the `pipeline` API and other methods like `from_pretrained` to initialize the model correctly.\r\n\r\nTry just doing this:\r\n```\r\nmodel.save_pretrained(\"my_model\")\r\npipe = pipeline(\"text-classification\", model=\"my_model\")\r\n```\r\nThough of course, make sure to change `text-classification` to the task you want to do!",
"@Rocketknight1 \r\nHi, Thanks for your answer 😀\r\n\r\nI tested it and It worked!\r\nI got `tf_model.h5`, `config.json` and success to run `pipeline`!\r\n\r\nBut It's not perfect for inference, Because `model.save_pretrained(\"output_folder\")` returns only 2 files that I mentioned upper.\r\n\r\n`tokenizer.json`, `tokenizer_config.json` and so on are also needed for inference\r\n\r\nSo, How about make `model.save_pretrained(\"output_folder\")` return other files about tokenizer?",
"@goreng2 Sorry for the delay! Yes, you will need to save the tokenizer to the same directory with `tokenizer.save_pretrained()` in order to load the whole directory with a pipeline."
] | 1,670
| 1,674
| 1,673
|
NONE
| null |
### Feature request
[TF] Save a finetuned model locally without huggingface-hub login
### Motivation
In TF, we need to log in to save a finetuned model.
```
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_callback = PushToHubCallback(
output_dir="my_awesome_model",
tokenizer=tokenizer,
)
```
But I don't want to sync to my hub yet. First, I want to save my models locally and test them.
I checked that this works in PyTorch, but it's not the case in TensorFlow.
### Your contribution
I think we need to add an argument controlling whether to log in or not:
https://github.com/huggingface/transformers/blob/0526a075c567d7508205fe6054310f3d132b3227/src/transformers/keras_callbacks.py#L267
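A hypothetical sketch of the requested flag (the class name and signature are illustrative, not the real `PushToHubCallback` API): save locally every epoch and only push to the Hub when explicitly asked, so no login is needed for local-only use:

```python
class SaveToHubOrLocalCallback:
    """Illustrative sketch: save every epoch, and only push to the Hub
    when push_to_hub=True, so local-only use requires no login."""

    def __init__(self, save_fn, push_fn, push_to_hub=False):
        self.save_fn = save_fn      # e.g. local save_pretrained
        self.push_fn = push_fn      # e.g. Hub upload, requires login
        self.push_to_hub = push_to_hub

    def on_epoch_end(self, epoch):
        self.save_fn(epoch)         # always save locally
        if self.push_to_hub:
            self.push_fn(epoch)     # only sync when opted in

saves, pushes = [], []
cb = SaveToHubOrLocalCallback(saves.append, pushes.append, push_to_hub=False)
for epoch in range(3):
    cb.on_epoch_end(epoch)
```

With `push_to_hub=False`, every epoch is saved locally and nothing touches the Hub; flipping the flag restores the current sync behavior.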
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20664/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20663
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20663/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20663/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20663/events
|
https://github.com/huggingface/transformers/issues/20663
| 1,483,024,966
|
I_kwDOCUB6oc5YZSpG
| 20,663
|
Adding Pix2Struct to transformers
|
{
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Really cool!\r\n\r\nRelated (sorry first read Pix2Struct as Pix2Seq): I have a working implementation of Pix2Seq: https://github.com/NielsRogge/transformers/tree/add_pix2seq. It even works with the generate method to autoregressively generate bounding boxes. However I didn't add it yet as training was quite cumbersome with a lot of hacks. Another reason why I didn't add it yet is because it's slow, you generate one token at a time, whereas object detection often has a real-time requirement",
"Yeah, interested to help out if there's someone working on the integration on Pix2Struct!\r\n\r\nRe. Pix2Seq, sorry to hear it's hard to integrate. I actually think that Pix2Seq is about to become a lot more relevant for Document Processing because of those image-to-text models (like Donut and Pix2Struct): without a bounding box for the prediction (since you only get text as output), it becomes a lot harder to QA the model results since it takes a lot more time to map a string to its original position on the document than it is to check the position of a bounding box. Exciting times ahead :) ",
"I'll cc @younesbelkada and @ArthurZucker here as they have extensive experience with the T5x code base, on which Pix2Struct is based.\r\n\r\nOriginal checkpoints can be found here: https://console.cloud.google.com/storage/browser/pix2struct-data?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false",
"\r\n> Original checkpoints can be found here: https://console.cloud.google.com/storage/browser/pix2struct-data?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false\r\n\r\n\r\n@NielsRogge These are checkpoints for the ai2d task , correct? I do not see checkpoints for refexp or other tasks in the shared GS folder.\r\n\r\nI am working on reproducing the test results in the original repo and hitting a string of build errors. Working thought them one at a time. Will share results if/when I get through all. Opened a few issues in the repo and hoping to hear back from the authors as well.\r\n",
"> Yeah, interested to help out if there's someone working on the integration on Pix2Struct!\r\n> \r\n> Re. Pix2Seq, sorry to hear it's hard to integrate. I actually think that Pix2Seq is about to become a lot more relevant for Document Processing because of those image-to-text models (like Donut and Pix2Struct): without a bounding box for the prediction (since you only get text as output), it becomes a lot harder to QA the model results since it takes a lot more time to map a string to its original position on the document than it is to check the position of a bounding box. Exciting times ahead :)\r\n\r\nMy understanding is that some of the pix2struct tasks use bounding boxes. For example refexp uses the rico dataset (uibert extension), which includes bounding boxes for UI objects.\r\n\r\nOne potential way to automate QA for UI tasks is to take bounding boxes from a test set, feed to the `Widget Captioning` task and then use the captions as input to the `refexp` task. Ideally we will end up with the original bounding boxes.\r\n\r\nA{test set ground truth bounding boxes} -> Widget Captioning -> refexp -> B{predicted bounding boxes)\r\nA=B ?\r\n\r\n\r\n",
"Not sure if this is the right place for this question, but let me try. Is anyone working on fine tuning multi-modal transformers that are already in the hugging face hub on UI tasks? Based on [this paper](https://paperswithcode.com/paper/grounding-natural-language-instructions-can), it seems like LayoutML might be a good candidate to train on Rico, RicoSCA or UIBert.",
"Quick update. I am testing the idea of using Document Understanding models like Donut on UI tasks that pix2struct targets. Space, model, and datasets are now on HF Hub. Colab notebook and other links can be found in the space page below:\r\nhttps://huggingface.co/spaces/ivelin/ui-refexp\r\n",
"Hi there, I have a working implementation in https://github.com/huggingface/transformers/pull/21400 will keep you posted.",
"Amazing! Can't wait to try it out",
"> Hi there, I have a working implementation in #21400 will keep you posted.\r\n\r\nAwesome. Can't wait to compare performance to Donut.",
"The model has been merged! 🎉 \r\nYou can find all the weights here: https://huggingface.co/models?search=pix2struct \r\nand a fine-tuning notebook here: https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb\r\nLet us know if you face into any issue! ",
"Amazing work! Thank you for sharing, @younesbelkada .\r\n\r\nIn the list of pre-trained models, I did not see one for the RefExp task. Do you know if someone is working on that already?\r\n",
"Ah indeed I forgot to add them, will add them and post a message here once it's done",
"hey @younesbelkada , thank you for great work, i'm not sure if this is a proper place to ask, but i haven't find a better one.\r\nIs it possible to use pix2struct for widget captioning task, with bounding box as an additional input?",
"@Misterion777 [the forums](https://discuss.huggingface.co/) would be the right place for this question :) ",
"> @Misterion777 [the forums](https://discuss.huggingface.co/) would be the right place for this question :)\r\n\r\nyeah, I just wanted to contact directly the author of the HF implementation, but I'll post there, thanks! :)",
"> Ah indeed I forgot to add them, will add them and post a message here once it's done\r\n\r\n@younesbelkada I am not seeing the RefExp, is there a ticket where I can see the progress?",
"It is probably not being worked on, feel free to open a PR 🤗 ",
"Hi @igortoliveira ,\r\nI still didn't had time to look at it, for adding the support you just need to add the bounding box support for Pix2Struct and the conversion script should stay the same"
] | 1,670
| 1,685
| 1,679
|
CONTRIBUTOR
| null |
### Model description
Image-to-Text encoder decoder presented in: https://arxiv.org/abs/2210.03347
It is in spirit similar to the [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model that has been added to transformers not so long ago, but it has outperformed it on pretty much all benchmarks (thanks to different pre-training and slightly different encoders and decoders, more weights).
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Github repo: https://github.com/google-research/pix2struct
Pretrained checkpoints are available on the repo, along with the code to fine-tune the model. The only thing is that everything is in Jax, so might take a while to convert
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20663/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20663/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20662
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20662/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20662/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20662/events
|
https://github.com/huggingface/transformers/pull/20662
| 1,482,959,799
|
PR_kwDOCUB6oc5EsjhK
| 20,662
|
Clarify return_tensor and return_text parameters
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> LGTM thanks! @Narsil maybe a clear `ValueError` could be raised when sanitizing parameters when both `return_text` and `return_tensors` are `True`?\r\n\r\nhttps://github.com/huggingface/transformers/pull/20729"
] | 1,670
| 1,670
| 1,670
|
MEMBER
| null |
This PR fixes #20615 by clarifying that setting `return_tensors=True` will not return the decoded text, and you can't get a combination of `generated_text` and `generated_token_ids`.
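A minimal sketch of the documented behavior (illustrative only, not the pipeline source): `return_tensors` and `return_text` select mutually exclusive outputs, so the token ids and the decoded text are never returned together:

```python
# Sketch of the postprocessing branch: return_tensors takes precedence
# and yields token ids only; the decoded text is never combined with it.

def postprocess(generated_text, generated_token_ids,
                return_tensors=False, return_text=True):
    if return_tensors:
        return {"generated_token_ids": generated_token_ids}
    if return_text:
        return {"generated_text": generated_text}
    raise ValueError("Set either return_text or return_tensors.")

out = postprocess("My cat is cute", [42, 7, 99], return_tensors=True)
```

Note that the output dict contains `generated_token_ids` but no `generated_text` key, which is exactly the behavior this PR clarifies in the docs.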
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20662/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20662",
"html_url": "https://github.com/huggingface/transformers/pull/20662",
"diff_url": "https://github.com/huggingface/transformers/pull/20662.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20662.patch",
"merged_at": 1670865374000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20661
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20661/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20661/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20661/events
|
https://github.com/huggingface/transformers/pull/20661
| 1,482,677,476
|
PR_kwDOCUB6oc5ErjEp
| 20,661
|
Fix load from PT-formatted checkpoint in composite TF models
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes the slow test `TFViT2GPT2EncoderDecoderModelTest::test_real_model_save_load_from_pretrained` which was broken by the new `safetensors` integration. The main problem was that this model loads a GPT-2 as its decoder, which has a safetensors checkpoint formatted in a PyTorch-like format, and that model was loaded with wrong weight names.
Moving the variable scope code before we try to load PyTorch-like checkpoints fixes the issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20661/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20661",
"html_url": "https://github.com/huggingface/transformers/pull/20661",
"diff_url": "https://github.com/huggingface/transformers/pull/20661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20661.patch",
"merged_at": 1670509988000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20660
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20660/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20660/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20660/events
|
https://github.com/huggingface/transformers/pull/20660
| 1,482,605,552
|
PR_kwDOCUB6oc5ErSvT
| 20,660
|
Add BackboneMixin
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Oh by the way, this is definition of a mixin in Python and the base class should be called `BackboneMixin` :-)"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Add `BackboneMixin` with a method `forward_with_filtered_kwargs`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20660/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20660",
"html_url": "https://github.com/huggingface/transformers/pull/20660",
"diff_url": "https://github.com/huggingface/transformers/pull/20660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20660.patch",
"merged_at": 1670514948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20659
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20659/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20659/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20659/events
|
https://github.com/huggingface/transformers/issues/20659
| 1,482,521,052
|
I_kwDOCUB6oc5YXXnc
| 20,659
|
Replicating SQuAD results on T5
|
{
"login": "lucky-bai",
"id": 123435,
"node_id": "MDQ6VXNlcjEyMzQzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/123435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucky-bai",
"html_url": "https://github.com/lucky-bai",
"followers_url": "https://api.github.com/users/lucky-bai/followers",
"following_url": "https://api.github.com/users/lucky-bai/following{/other_user}",
"gists_url": "https://api.github.com/users/lucky-bai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucky-bai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucky-bai/subscriptions",
"organizations_url": "https://api.github.com/users/lucky-bai/orgs",
"repos_url": "https://api.github.com/users/lucky-bai/repos",
"events_url": "https://api.github.com/users/lucky-bai/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucky-bai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.\r\nNote that we did not try to replicate the result of this paper with this script :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
Hi, I'm trying to replicate the SQuAD experiment in the [T5 paper](https://arxiv.org/abs/1910.10683). I'm following the paper's recommended hyperparameters for finetuning:
* AdaFactor optimizer
* Batch size 128 (I'm doing 16 per GPU on 8xRTX 3090 GPUs)
* 2^18 steps for fine-tuning (which is around 300 epochs)
* Max sequence length 512
* Learning rate 0.001
I'm running the following:
```run_seq2seq_qa.py --model_name_or_path t5-base --dataset_name squad --context_column context --question_column question --answer_column answers --do_train --do_eval --per_device_train_batch_size 16 --optim adafactor --learning_rate 0.001 --num_train_epochs 300 --evaluation_strategy epoch --max_seq_length 512 --predict_with_generate --output_dir /tmp/t5_squad/ --overwrite_output_dir```
After 4 epochs, the validation Exact Match score is 79.054 and F1 is 86.895. Beyond that point, the model starts to overfit and the performance decreases. However, the paper reports 85.44 EM and 92.08 F1 score on T5-base (Table 14).
Has anyone been able to reproduce the official paper results or am I missing anything?
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See above
### Expected behavior
Should get around 85.44 EM and 92.08 F1 score on this task.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20659/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20658
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20658/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20658/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20658/events
|
https://github.com/huggingface/transformers/issues/20658
| 1,482,507,471
|
I_kwDOCUB6oc5YXUTP
| 20,658
|
LayoutLM Cuda Memory Error
|
{
"login": "WaterKnight1998",
"id": 41203448,
"node_id": "MDQ6VXNlcjQxMjAzNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WaterKnight1998",
"html_url": "https://github.com/WaterKnight1998",
"followers_url": "https://api.github.com/users/WaterKnight1998/followers",
"following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}",
"gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions",
"organizations_url": "https://api.github.com/users/WaterKnight1998/orgs",
"repos_url": "https://api.github.com/users/WaterKnight1998/repos",
"events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/WaterKnight1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It's advised to run the same code on CPU to get a more understandable error message",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
torch==1.13.0+cu116
transformers==4.24.0
### Who can help?
@philschmid
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to train LayoutLMv1 following this guide https://www.philschmid.de/fine-tuning-layoutlm
However, when I execute `trainer.train()` I get this error:
```python
RuntimeError Traceback (most recent call last)
<command-1206590250403017> in <module>
----> 1 trainer.train()
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1499 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1500 )
-> 1501 return inner_training_loop(
1502 args=args,
1503 resume_from_checkpoint=resume_from_checkpoint,
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1747 tr_loss_step = self.training_step(model, inputs)
1748 else:
-> 1749 tr_loss_step = self.training_step(model, inputs)
1750
1751 if (
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs)
2506
2507 with self.compute_loss_context_manager():
-> 2508 loss = self.compute_loss(model, inputs)
2509
2510 if self.args.n_gpu > 1:
/databricks/python/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
2538 else:
2539 labels = None
-> 2540 outputs = model(**inputs)
2541 # Save past state if it exists
2542 # TODO: this needs to be fixed and made cleaner later.
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, input_ids, bbox, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1190 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1191
-> 1192 outputs = self.layoutlm(
1193 input_ids=input_ids,
1194 bbox=bbox,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, input_ids, bbox, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
825 inputs_embeds=inputs_embeds,
826 )
--> 827 encoder_outputs = self.encoder(
828 embedding_output,
829 extended_attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
494 )
495 else:
--> 496 layer_outputs = layer_module(
497 hidden_states,
498 attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
379 # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
380 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
--> 381 self_attention_outputs = self.attention(
382 hidden_states,
383 attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
306 output_attentions: Optional[bool] = False,
307 ) -> Tuple[torch.Tensor]:
--> 308 self_outputs = self.self(
309 hidden_states,
310 attention_mask,
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/transformers/models/layoutlm/modeling_layoutlm.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
170 output_attentions: Optional[bool] = False,
171 ) -> Tuple[torch.Tensor]:
--> 172 mixed_query_layer = self.query(hidden_states)
173
174 # If this is instantiated as a cross-attention module, the keys
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
/databricks/python/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
112
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
115
116 def extra_repr(self) -> str:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```
I can train LayoutLMv3 without any memory issue.
### Expected behavior
I shouldn't get any memory issue as this model is smaller.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20658/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20657
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20657/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20657/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20657/events
|
https://github.com/huggingface/transformers/pull/20657
| 1,482,298,229
|
PR_kwDOCUB6oc5EqNUV
| 20,657
|
[`BiT`] Small patch fix
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a tiny issue that you can encounter if you load `BiT` in fp16.
`diffusers` uses this model under the hood for Depth Estimation inpainting and users get this error:
```
593
594 layer_dropouts = [
--> 595 x.tolist() for x in torch.linspace(0, config.drop_path_rate, sum(config.depths), dtype=torch.float32).split(config.depths)
596 ]
597
RuntimeError: "linspace_cpu" not implemented for 'Half'
```
However, on the `diffusers` side this can be fixed by installing `accelerate` and loading the pipeline with `low_cpu_mem_usage=True`. Still, it is better to fix it here to avoid any misleading issues.
cc @sgugger @patil-suraj
Otherwise to reproduce:
```
import torch
from transformers import BitModel
model = BitModel.from_pretrained("google/bit-50", torch_dtype=torch.float16)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20657/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20657",
"html_url": "https://github.com/huggingface/transformers/pull/20657",
"diff_url": "https://github.com/huggingface/transformers/pull/20657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20657.patch",
"merged_at": 1670499694000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20656
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20656/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20656/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20656/events
|
https://github.com/huggingface/transformers/pull/20656
| 1,482,242,906
|
PR_kwDOCUB6oc5EqBKM
| 20,656
|
Fix gpt2 fp16 training when tracing is enabled
|
{
"login": "JingyaHuang",
"id": 44135271,
"node_id": "MDQ6VXNlcjQ0MTM1Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingyaHuang",
"html_url": "https://github.com/JingyaHuang",
"followers_url": "https://api.github.com/users/JingyaHuang/followers",
"following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}",
"gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions",
"organizations_url": "https://api.github.com/users/JingyaHuang/orgs",
"repos_url": "https://api.github.com/users/JingyaHuang/repos",
"events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingyaHuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"A little bit more context on the issue, I previously fixed the tracing issue in #18017, but it will harm the performance due to host<->device synchronization, which has been targeted in #20061, but cause the tracing once again failed.\r\n\r\nIt seems that we can't guarantee the tracing correctness and inference performance with the same line of code while using PyTorch at the same time, that's why in the PR, I distinguish two cases to solve it:\r\n* Case 1: Tracing\r\n* Case 2: Inference with PyTorch",
"Also @michaelbenayoun I saw this: https://github.com/huggingface/transformers/pull/18017#issuecomment-1197597894, does the current modeling won't have an issue while doing mixed-precision training for torch.fx?\r\n ",
"Feel the same, If/else removed!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
With the PR #20061, the tracing will fail during mixed-precision training, as the dtypes of the inputs to a `where` node are not the same, which is invalid when reusing the ONNX model for inference.
The node:
https://github.com/huggingface/transformers/blob/3ac040bca1efbf5cfe9604a5b2a10a5392917c20/src/transformers/models/gpt2/modeling_gpt2.py#L201
Error message:
```
======================================================================
ERROR: test_ort_trainer (__main__.TestORTTrainer) (model_name='gpt2', dataset_name='sst2', inference_with_ort=False)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_onnxruntime_train.py", line 131, in test_ort_trainer
train_result = trainer.train()
File "/workspace/optimum/onnxruntime/trainer.py", line 349, in train
return inner_training_loop(
File "/workspace/optimum/onnxruntime/trainer.py", line 615, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2523, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2555, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py", line 371, in _forward
return ortmodule._torch_module.forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py", line 351, in _forward
return torch_module_ort._execution_manager(torch_module_ort.is_training()).forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py", line 273, in forward
self._fallback_manager.handle_exception(
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_fallback.py", line 162, in handle_exception
raise exception
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py", line 210, in forward
self._initialize_graph_builder()
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py", line 478, in _initialize_graph_builder
self._graph_builder.initialize(self._onnx_models.exported_model.SerializeToString(), grad_builder_config)
RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:731 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(float) and tensor(float16) in node (Where_223).
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20656/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20656",
"html_url": "https://github.com/huggingface/transformers/pull/20656",
"diff_url": "https://github.com/huggingface/transformers/pull/20656.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20656.patch",
"merged_at": 1670507760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20655
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20655/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20655/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20655/events
|
https://github.com/huggingface/transformers/pull/20655
| 1,482,032,721
|
PR_kwDOCUB6oc5EpR52
| 20,655
|
[Trainer] Corrects typing of Trainer __init__ args
|
{
"login": "julianmack",
"id": 32888280,
"node_id": "MDQ6VXNlcjMyODg4Mjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/32888280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julianmack",
"html_url": "https://github.com/julianmack",
"followers_url": "https://api.github.com/users/julianmack/followers",
"following_url": "https://api.github.com/users/julianmack/following{/other_user}",
"gists_url": "https://api.github.com/users/julianmack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julianmack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julianmack/subscriptions",
"organizations_url": "https://api.github.com/users/julianmack/orgs",
"repos_url": "https://api.github.com/users/julianmack/repos",
"events_url": "https://api.github.com/users/julianmack/events{/privacy}",
"received_events_url": "https://api.github.com/users/julianmack/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Corrects typing for the Trainer class. Updates typing to match the change made in https://github.com/huggingface/transformers/pull/19158/ and fixes a few other typing issues while I'm there.
Unless I'm missing something, these changes bring the typing to parity with both the class docstring and the implementation.
Who can review: @sgugger
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20655/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20655",
"html_url": "https://github.com/huggingface/transformers/pull/20655",
"diff_url": "https://github.com/huggingface/transformers/pull/20655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20655.patch",
"merged_at": 1670425060000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20654
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20654/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20654/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20654/events
|
https://github.com/huggingface/transformers/pull/20654
| 1,482,023,354
|
PR_kwDOCUB6oc5EpPyz
| 20,654
|
[NAT, DiNAT] Add backbone class
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the `NatBackbone` and `DinatBackbone` classes, to be used for #20577.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20654/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20654",
"html_url": "https://github.com/huggingface/transformers/pull/20654",
"diff_url": "https://github.com/huggingface/transformers/pull/20654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20654.patch",
"merged_at": 1670947619000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20653
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20653/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20653/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20653/events
|
https://github.com/huggingface/transformers/issues/20653
| 1,482,016,815
|
I_kwDOCUB6oc5YVcgv
| 20,653
|
Add Whisper large V2 model
|
{
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker ",
"Hey! The `large-v2` is already converted, we are just waiting for OpenAI's approval for the release 😉 ",
"Cool!",
"FYI : https://huggingface.co/openai/whisper-large-v2 ",
"Not really sure if we want to change the `large` for `large-v2`, not really backward compatible"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### Model description
It seems OpenAI has just released a V2 of its large Whisper model.
- The "large-v2" model is trained for more epochs with regularization and shows improved performance compared to the previous large.
- It has the same architecture as the original large model.
- When `load_model("large")` is called, the "large-v2" model will be loaded.
More here: https://github.com/openai/whisper/commit/4179ed2475cc84cba66868b516232ef1b74dacdf
I can upload it to the hub. Cc: @younesbelkada @patrickv
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20653/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20652
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20652/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20652/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20652/events
|
https://github.com/huggingface/transformers/pull/20652
| 1,481,994,638
|
PR_kwDOCUB6oc5EpJVs
| 20,652
|
[Whisper] Fix forced decoder ids
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"fyi @sgugger, the final fix we hope 🤞",
"Yes! Let me clarify! \r\n\r\nWhen training, we need to encode a sentence to a sequence of label ids. Here, we need to append the 'special' beginning of sentence tokens to the label ids. This is so that the model learns to predict the correct 'special' tokens for the generation process. For a full list of the tokens added, see this PR: https://github.com/huggingface/transformers/pull/19921\r\n\r\nOne of these tokens is the `<|startoftranscript|>` token. This is consistent with other tokenisers in the library, such as the BART tokeniser:\r\n```python\r\nfrom transformers import BartTokenizer\r\n\r\ntokenizer = BartTokenizer.from_pretrained(\"facebook/bart-base\")\r\ninput_str = \"the cat\"\r\ntokens = tokenizer(input_str).input_ids\r\nprint(tokenizer.decode(tokens))\r\n```\r\n**Print Output:**\r\n```\r\n<s>the cat</s>\r\n```\r\n\r\nNow, it doesn't matter for training whether or not we append the decoder start token id to the start of our label sequence, because we cut it in our data collator:\r\n\r\nhttps://github.com/huggingface/transformers/blob/3ac040bca1efbf5cfe9604a5b2a10a5392917c20/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L249\r\n\r\nSo, adding the decoder start token id is more for making the tokeniser user friendly and consistent with other tokenisers in the library.\r\n",
"@sanchit-gandhi Thanks. Just want to point out: For `bart`, yes, we have bos `<s>` (id `0`). But it is not the **decoder** start token (which is `</s>` for bart, with id `2`) - it is just the start of the sentence (not ready for generation). The `labels` has `bos` but not `decoder_start_token`. The labels will be shifted and prepended with `</s>` to become decoder input ids.\r\n\r\nIn Whisper, I understand we want to be user-friendly. And as you have cut it in data collator, it is fine. But IMO, this is something a bit different from our NLP models (i.e. Bart here). Hopefully I understand it correctly.\r\n\r\n"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
The Whisper tokenizer has a property `self.prefix_tokens` that returns the token ids prepended to the start of the label sequence:
```
<|startoftranscript|> <|lang_id|> <|task|> <|notimestamps|> ...
```
In the PR https://github.com/huggingface/transformers/pull/20589, the method `get_decoder_prompt_ids` was copied from the Whisper processor to the Whisper tokenizer, where it then made use of the tokenizer property `self.prefix_tokens`. The method `get_decoder_prompt_ids` is used to set the tokens that are forced at the beginning of the generation process.
However, the forced decoder ids **should not** contain the `<|startoftranscript|>` token: this is the `decoder_start_token_id` that we use as token 0 when we start generation. If we include `<|startoftranscript|>` in our forced decoder ids, we'll get a double generation of `<|startoftranscript|>`. Thus, we only want to set the following tokens in the `forced_decoder_ids`:
```
<|lang_id|> <|task|> <|notimestamps|> ...
```
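The double-generation problem can be illustrated with a toy greedy loop (this is a sketch, not the `transformers` implementation; the token ids 50258/50259/50359 stand in for `<|startoftranscript|>`, `<|lang_id|>` and `<|task|>` and are illustrative):

```python
# Illustrative sketch: why including the decoder start token in
# forced_decoder_ids duplicates it during generation.

DECODER_START_TOKEN_ID = 50258  # stands in for <|startoftranscript|>

def generate(forced_decoder_ids, num_free_steps=2):
    """Toy greedy loop: generation always begins with the decoder start token,
    then any (position, token_id) pairs in forced_decoder_ids are forced;
    -1 marks a freely sampled token."""
    sequence = [DECODER_START_TOKEN_ID]  # token 0 is always the start token
    forced = dict(forced_decoder_ids)
    for step in range(1, len(forced) + num_free_steps + 1):
        sequence.append(forced.get(step, -1))
    return sequence

# Buggy: forced ids also contain the start token -> it appears twice.
buggy = generate([(1, 50258), (2, 50259), (3, 50359)])
# Fixed: forced ids begin at <|lang_id|>, so the start token appears once.
fixed = generate([(1, 50259), (2, 50359)])
```

Running both shows the buggy variant emitting the start token at positions 0 and 1, while the fixed variant emits it only once.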
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20652/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20652",
"html_url": "https://github.com/huggingface/transformers/pull/20652",
"diff_url": "https://github.com/huggingface/transformers/pull/20652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20652.patch",
"merged_at": 1670431453000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20651
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20651/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20651/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20651/events
|
https://github.com/huggingface/transformers/pull/20651
| 1,481,986,420
|
PR_kwDOCUB6oc5EpHgO
| 20,651
|
[Trainer] add error when passing `8bit` models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Before this PR, any user could load an 8bit model and pass it to a `Trainer`, which is wrong. In fact, it is not yet possible to train an 8bit model. Therefore we should raise an error until this is supported in the future.
Related: https://github.com/huggingface/transformers/issues/20348#issuecomment-1335106257
cc @ydshieh @sgugger
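The guard this PR describes can be sketched as follows (a minimal illustration with dummy classes, not the exact `Trainer` code; the `is_loaded_in_8bit` attribute name is an assumption about how the quantized model is flagged):

```python
# Minimal sketch: refuse to construct a trainer around an 8-bit model.

class DummyTrainer:
    def __init__(self, model):
        # Quantized models are assumed to carry a flag such as
        # `is_loaded_in_8bit`; its presence means "int8 weights, not trainable".
        if getattr(model, "is_loaded_in_8bit", False):
            raise ValueError(
                "The model you want to train is loaded in 8-bit precision. "
                "Training an 8-bit model is not supported yet."
            )
        self.model = model

class EightBitModel:
    is_loaded_in_8bit = True

class PlainModel:
    pass

try:
    DummyTrainer(EightBitModel())
    raised = False
except ValueError:
    raised = True

ok = DummyTrainer(PlainModel())  # a regular model is accepted
```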
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20651/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20651/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20651",
"html_url": "https://github.com/huggingface/transformers/pull/20651",
"diff_url": "https://github.com/huggingface/transformers/pull/20651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20651.patch",
"merged_at": 1670423457000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20650
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20650/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20650/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20650/events
|
https://github.com/huggingface/transformers/issues/20650
| 1,481,972,308
|
I_kwDOCUB6oc5YVRpU
| 20,650
|
[New Model] UDOP: Unifying Vision, Text, and Layout for Universal Document Processing
|
{
"login": "WaterKnight1998",
"id": 41203448,
"node_id": "MDQ6VXNlcjQxMjAzNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WaterKnight1998",
"html_url": "https://github.com/WaterKnight1998",
"followers_url": "https://api.github.com/users/WaterKnight1998/followers",
"following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}",
"gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions",
"organizations_url": "https://api.github.com/users/WaterKnight1998/orgs",
"repos_url": "https://api.github.com/users/WaterKnight1998/repos",
"events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/WaterKnight1998/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"@NielsRogge as you implemented Donut, you might be interested :)",
"Let's hope they open-source :)",
"@NielsRogge they added the code here https://github.com/microsoft/i-Code/tree/main/i-Code-Doc",
"Hi @NielsRogge, can I help in this implementation?",
"@NielsRogge here you have the weights: https://huggingface.co/ZinengTang/Udop/tree/main",
"@WaterKnight1998 Is the model accessible now?",
"> @WaterKnight1998 Is the model accessible now?\r\n\r\nNo, the PR from @raghavanone was closed. @NielsRogge is working on opening a PR with a refactor of UDop code as it was not very good.\r\n\r\nI saw he has a branch for this: https://github.com/NielsRogge/transformers/tree/add_udop"
] | 1,670
| 1,678
| null |
NONE
| null |
### Model description
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
UDOP Paper: https://arxiv.org/abs/2212.02623
UDOP Repo: https://github.com/microsoft/UDOP
UDOP Model Weights: https://huggingface.co/ZinengTang/Udop/tree/main
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20650/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20650/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20649
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20649/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20649/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20649/events
|
https://github.com/huggingface/transformers/pull/20649
| 1,481,887,386
|
PR_kwDOCUB6oc5EoxE4
| 20,649
|
[`ViTHybrid`] + [`BiT`] cleaner `__init__`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, this can be addressed in a future PR"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
A function in `BiT` is not needed; removing it makes the codebase less error-prone.
Previously, to get the feature map size from the backbone model, `ViTHybrid` assumed that the backbone had a `_get_feature_map` method, which is not the case for all backbones.
Related #20645
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20649/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20649",
"html_url": "https://github.com/huggingface/transformers/pull/20649",
"diff_url": "https://github.com/huggingface/transformers/pull/20649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20649.patch",
"merged_at": 1670423738000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20648
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20648/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20648/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20648/events
|
https://github.com/huggingface/transformers/pull/20648
| 1,481,824,401
|
PR_kwDOCUB6oc5Eoi3I
| 20,648
|
Add UperNet
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review, I'm waiting for the authors to respond regarding the creation of an organization on the hub."
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the classic [UperNet](https://arxiv.org/abs/1807.10221) framework to Transformers.
Many papers that introduce a new vision backbone, such as BEiT, ConvNeXt and Swin, benchmark their model on downstream tasks such as semantic segmentation and object detection. All of these papers use the UperNet framework (introduced in 2018) when evaluating their backbone on semantic segmentation.
Hence, this PR implements this framework, making use of the new [AutoBackbone API](#20229) to make the following possible:
```
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
In the code above, we're instantiating the UperNet framework with Swin Transformer as backbone. The code looks equivalent for another backbone, like ConvNeXt:
```
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
To do:
- [ ] looking into supporting `from_pretrained` of backbones => will be done in a follow-up PR
- [x] make sure UperNetImageProcessor does exact same preprocessing
- [x] make UperNetImageProcessor also take `segmentation_maps` as optional input
- [x] add image processor tests
- [x] convert all checkpoints + update organization
- [x] fix integration tests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20648/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20648",
"html_url": "https://github.com/huggingface/transformers/pull/20648",
"diff_url": "https://github.com/huggingface/transformers/pull/20648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20648.patch",
"merged_at": 1673858354000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20647
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20647/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20647/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20647/events
|
https://github.com/huggingface/transformers/pull/20647
| 1,481,689,331
|
PR_kwDOCUB6oc5EoEQV
| 20,647
|
Add batch of resources
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for your review! It's unclear to me why the \"build PR documentation\" check is failing, thought it had to do with the pipeline tags, but it's still failing. Any insight would be greatly appreciated",
"You will have to isolate which file triggers the issue and then which line inside that file by trial and error I'm afraid. That's one of the reason smaller PRs are easier to deal with :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a batch of resources, primarily for all image classifiers.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20647/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20647",
"html_url": "https://github.com/huggingface/transformers/pull/20647",
"diff_url": "https://github.com/huggingface/transformers/pull/20647.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20647.patch",
"merged_at": 1673972336000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20646
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20646/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20646/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20646/events
|
https://github.com/huggingface/transformers/pull/20646
| 1,481,628,488
|
PR_kwDOCUB6oc5En2lh
| 20,646
|
[pipeline] fix Whisper test
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Test already fixed in https://github.com/huggingface/transformers/pull/20588"
] | 1,670
| 1,687
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes Whisper pipeline test.
Previously, we suppressed the hyphen and apostrophe tokens from Whisper generation, meaning they were always assigned zero probability and could never be predicted. With the Hub PR https://huggingface.co/openai/whisper-large/discussions/12, these tokens were removed from the set of suppressed tokens, so they can now (correctly) be predicted with non-zero probability.
We get the correct contraction now in the Italian prediction: "allo universo" -> "all'universo"
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20646/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20646",
"html_url": "https://github.com/huggingface/transformers/pull/20646",
"diff_url": "https://github.com/huggingface/transformers/pull/20646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20646.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20645
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20645/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20645/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20645/events
|
https://github.com/huggingface/transformers/pull/20645
| 1,481,534,958
|
PR_kwDOCUB6oc5EnhmF
| 20,645
|
Add `dpt-hybrid` support
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a bunch! Fortunately it was already on the config file :D https://huggingface.co/Intel/dpt-hybrid-midas/blob/main/config.json#L277 but will open a PR to remove the `embedding_type` as it is not needed anymore",
"The config file has been modified, merging!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds `DPT-hybrid` support in `transformers`
Currently only DPT is supported. This PR leverages `AutoBackbone` from @NielsRogge to replace the embedding layer of `DPT` in order to support `DPT-hybrid`.
Fixes #20435
Model weights: https://huggingface.co/Intel/dpt-hybrid-midas
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20645/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20645",
"html_url": "https://github.com/huggingface/transformers/pull/20645",
"diff_url": "https://github.com/huggingface/transformers/pull/20645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20645.patch",
"merged_at": 1670428915000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20644
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20644/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20644/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20644/events
|
https://github.com/huggingface/transformers/issues/20644
| 1,481,450,965
|
I_kwDOCUB6oc5YTSXV
| 20,644
|
ONNX encoder decoder exchange invoke issue
|
{
"login": "umanniyaz",
"id": 33204214,
"node_id": "MDQ6VXNlcjMzMjA0MjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/33204214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umanniyaz",
"html_url": "https://github.com/umanniyaz",
"followers_url": "https://api.github.com/users/umanniyaz/followers",
"following_url": "https://api.github.com/users/umanniyaz/following{/other_user}",
"gists_url": "https://api.github.com/users/umanniyaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/umanniyaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/umanniyaz/subscriptions",
"organizations_url": "https://api.github.com/users/umanniyaz/orgs",
"repos_url": "https://api.github.com/users/umanniyaz/repos",
"events_url": "https://api.github.com/users/umanniyaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/umanniyaz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@NielsRogge @mht-sharma \r\n\r\nNew issue raised here: (https://github.com/huggingface/transformers/issues/20644)",
"Added a draft PR in optimum for easing inference. https://github.com/huggingface/optimum/pull/588",
"What about this : \r\n**Adds ORTModelForVision2Seq for inference (In progress...)**\r\n\r\nLike I said I TrOCR model in encoder_onnx and decoder_onnx , how can I invoke these two models together on ONNX runtime for fast inference.\r\n\r\n@mht-sharma \r\n",
"Hi @umanniyaz could you try the following code for inference for testing.\r\n\r\nhttps://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39",
"> What about this : **Adds ORTModelForVision2Seq for inference (In progress...)**\r\n> \r\n> Like I said I TrOCR model in encoder_onnx and decoder_onnx , how can I invoke these two models together on ONNX runtime for fast inference.\r\n> \r\n> @mht-sharma\r\n\r\nI am waiting for a few PRs to merge before this and would work on the inference. Should be available by next week.",
"> Hi @umanniyaz could you try the following code for inference for testing.\r\n> \r\n> https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39\r\n\r\n@mht-sharma Inference using these helper classes is still bad, I don't see any decrease in latency, plus performance in terms of text recognition decreases",
"Hi @umanniyaz could you share which device are you using for inference. If you are using GPU, you need to add the iobinding to observe the speedup. \r\n\r\nFor CPU inference, it may differ on the kind of CPU you are using. Currently the ORT inference uses the torch for the generation, hence, there can be a resource crunch between the torch and ORT which may lead to a slowdown. You may need to set the appropriate `intra-op-threads` and `torch threads` to observe the speedup. https://github.com/microsoft/onnxruntime/issues/13808",
"> Hi @umanniyaz could you share which device are you using for inference. If you are using GPU, you need to add the iobinding to observe the speedup.\r\n> \r\n> For CPU inference, it may differ on the kind of CPU you are using. Currently the ORT inference uses the torch for the generation, hence, there can be a resource crunch between the torch and ORT which may lead to a slowdown. You may need to set the appropriate `intra-op-threads` and `torch threads` to observe the speedup. [microsoft/onnxruntime#13808](https://github.com/microsoft/onnxruntime/issues/13808)\r\n\r\n@mht-sharma I am using Intel i9 10th generation using this class for a Django REST API - CPU,not using GPU ,my GPU configuration stands Nvidia Quadro RTX-4000 Max Q design.\r\n\r\nCan you provide with CPU and with GPU implementation,i just need to see where it speeds up.\r\n\r\nIs onnx runtime for task image-to-text in Optimum pipelines for coming anytime soon",
"@mht-sharma @NielsRogge I just tried the above things in you inference_testing code for GPU with iobinding and on CPU by adding intra_op_threads for parrallel execution and noticed change in inference , but the accuracy of TR-OCR on changing to respective Encoder_model.onnx and Decoder_model.onnx suffers,it gives bad results than Original,like whitespaces between text are ignored and CER in text recognition increases in case of using above ONNX models\r\n\r\n\r\n",
"Hi @umanniyaz , \r\n\r\n`Is ONNX runtime for task image-to-text in Optimum pipelines for coming anytime soon` - Things got little delayed due to the NYE. I would work on the ONNXRuntime pipeline in optimum in the coming week.\r\n\r\nThe decrease in accuracy may not be because of adding `iobinding` or `intra_op_threads`. Let me know if it is otherwise. The drop in accuracy is on both CPU and GPU (CUDAExecutionProvider)?\r\n\r\nCould you share which `atol` you have used for the ONNX export.",
"@mht-sharma There is a consistent decrease in accuracy irrespective of CPU, GPU intra op threads or iobinding, For onnx export I have utilised your latest PR on Vision EncoderDecoder Model conversion as mentioned,you can send the TrOCR conversion again,further i used this:\r\n\r\npython -m transformers.onnx --model=microsoft/trocr-base-printed --feature=vision2seq-lm models_trocr_base --atol 1e-3\r\n\r\n\r\nNote: Need to use Tr-OCR-Base Printed ",
"> Hi @umanniyaz ,\r\n> \r\n> `Is ONNX runtime for task image-to-text in Optimum pipelines for coming anytime soon` - Things got little delayed due to the NYE. I would work on the ONNXRuntime pipeline in optimum in the coming week.\r\n> \r\n> The decrease in accuracy may not be because of adding `iobinding` or `intra_op_threads`. Let me know if it is otherwise. The drop in accuracy is on both CPU and GPU (CUDAExecutionProvider)?\r\n> \r\n> Could you share which `atol` you have used for the ONNX export.\r\n\r\nHi @mht-sharma --atol 1e-3 gives inaccurate results then actual model,using 1e-4 onwards not feasible can you please tell value for atol for converting models",
"> Hi @umanniyaz ,\r\n> \r\n> `Is ONNX runtime for task image-to-text in Optimum pipelines for coming anytime soon` - Things got little delayed due to the NYE. I would work on the ONNXRuntime pipeline in optimum in the coming week.\r\n> \r\n> The decrease in accuracy may not be because of adding `iobinding` or `intra_op_threads`. Let me know if it is otherwise. The drop in accuracy is on both CPU and GPU (CUDAExecutionProvider)?\r\n> \r\n> Could you share which `atol` you have used for the ONNX export.\r\n\r\nAny updates on this? @mht-sharma ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> @mht-sharma @NielsRogge I just tried the above things in you inference_testing code for GPU with iobinding and on CPU by adding intra_op_threads for parrallel execution and noticed change in inference , but the accuracy of TR-OCR on changing to respective Encoder_model.onnx and Decoder_model.onnx suffers,it gives bad results than Original,like whitespaces between text are ignored and CER in text recognition increases in case of using above ONNX models\r\n> \r\n> \r\n\r\nMr @umanniyaz Can you share me your code inference trocr onnx with iobinding, thanks a lot for your help !!! "
] | 1,670
| 1,694
| 1,677
|
NONE
| null |
### System Info
TR-OCR Model
Encoder - BeiT -encoder.onnx
Decoder - Roberta large- decoder.onnx
System config:
Intel i9 11 gen
Nvidia Quadro RTX 4000 Max Q design - 16GB
Dependencies version:
onnx == 1.12.0
onnx-runtime == 1.13.1
torch == 1.13.0
transformers == 4.24.0
torchvision ==0.14.0
Issue:
Unable to start ONNX inference sessions with the TrOCR ONNX conversions (encoder.onnx & decoder.onnx) through **ORTModelForVision2Seq**; model.generate() raises this error:
```
model.generate(pixel_values.to('cpu'))
```
Traceback (most recent call last):
```
File "<string>", line 1, in <module> File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\transformers\generation_utils.py", line 1339, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\transformers\generation_utils.py", line 583, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\torch\nn\modules\module.py", line 1188, in _call_impl if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks File "C:\Users\110769\Anaconda3\envs\ocr2\lib\site-packages\torch\nn\modules\module.py", line 1265, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'ORTEncoder' object has no attribute '_backward_hooks'
```
Whole Code Snippet :
```
class ORTEncoder(nn.Module):
    """
    Encoder model for ONNX Runtime inference.
    Arguments:
        session (`onnxruntime.InferenceSession`):
            The ONNX Runtime inference session associated to the encoder.
    """

    def __init__(
        self, session: onnxrt.InferenceSession, device: torch.device, main_input_name: str = "input_ids"
    ):
        self.session = session
        self._device = device
        self.main_input_name = main_input_name
        self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.session.get_inputs())}
        self.output_names = {output_key.name: idx for idx, output_key in enumerate(self.session.get_outputs())}


class ORTDecoder(nn.Module):
    """
    Decoder model for ONNX Runtime inference.
    Arguments:
        session (`onnxruntime.InferenceSession`):
            The ONNX Runtime inference session associated to the decoder.
    """

    def __init__(
        self, session: onnxrt.InferenceSession, device: torch.device, main_input_name: str = "input_ids"
    ):
        self.session = session
        self._device = device
        self.main_input_name = main_input_name
        self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.session.get_inputs())}
        self.output_names = {output_key.name: idx for idx, output_key in enumerate(self.session.get_outputs())}


class ORTModelForVision2Seq(VisionEncoderDecoderModel):
    def __init__(self, *args, **kwargs):
        config = AutoConfig.from_pretrained('microsoft/trocr-base-printed')
        super().__init__(config)
        self._device = "cpu"
        self.encoder = ORTEncoder(onnxrt.InferenceSession(encoder_path, providers=["CPUExecutionProvider"]), device='cpu')
        self.decoder = ORTDecoder(onnxrt.InferenceSession(decoder_path, providers=["CPUExecutionProvider"]), device='cpu')

    def forward(
        self,
        pixel_values: Optional[torch.FloatTensor] = None,
        decoder_input_ids: Optional[torch.LongTensor] = None,
        encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
        **kwargs,
    ) -> Seq2SeqLMOutput:
        # Encode if needed: first prediction pass
        if encoder_outputs is None:
            encoder_outputs = self.encoder(pixel_values=pixel_values)
        # Decode
        decoder_attention_mask = decoder_input_ids.new_ones(decoder_input_ids.shape)
        decoder_outputs = self.decoder(
            input_ids=decoder_input_ids,
            attention_mask=decoder_attention_mask,
            encoder_hidden_states=encoder_outputs.last_hidden_state,
        )
        return Seq2SeqLMOutput(
            logits=decoder_outputs.logits,
        )

    def prepare_inputs_for_generation(self, input_ids, attention_mask=None, encoder_outputs=None, **kwargs):
        return {
            "decoder_input_ids": input_ids,
            "decoder_attention_mask": input_ids,
            "encoder_outputs": encoder_outputs,
        }


model = ORTModelForVision2Seq()
start = time.time()
img = Image.open(r'PATH').convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')
pixel_values = processor(images=img, return_tensors="pt").pixel_values
model.config.decoder_start_token_id = 2
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
generated_ids = model.generate(pixel_values.to(device))
end = time.time()
```
How can I run the encoder/decoder with a wrapped ORT model instead of invoking two concurrent sessions for the encoder and decoder in a loop?
@mht-sharma @NielsRogge
encoder_path -> takes the encoder ONNX model
decoder_path -> takes the decoder ONNX model
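For what it's worth, the control flow being asked about — run the encoder session once, then call the decoder session in a loop until EOS — can be sketched like this. The sessions are replaced by stub functions and every shape and token id here is made up for illustration; the real code would call `session.run()` on the two `InferenceSession` objects instead.

```python
import numpy as np

EOS, START, MAX_LEN = 2, 0, 8

def fake_encoder_session(pixel_values):
    # Stands in for encoder_session.run(); returns encoder hidden states.
    return np.zeros((1, 4, 16), dtype=np.float32)

def fake_decoder_session(input_ids, encoder_hidden_states):
    # Stands in for decoder_session.run(); returns logits over a toy vocab of 5.
    step = input_ids.shape[1]
    logits = np.zeros((1, step, 5), dtype=np.float32)
    # This fake "model" emits token 3 for the first two steps, then EOS.
    logits[0, -1, 3 if step < 3 else EOS] = 1.0
    return logits

hidden = fake_encoder_session(np.zeros((1, 3, 384, 384), np.float32))  # encoder runs once
ids = [[START]]
while len(ids[0]) < MAX_LEN:
    logits = fake_decoder_session(np.array(ids), hidden)  # decoder runs every step
    next_token = int(logits[0, -1].argmax())
    ids[0].append(next_token)
    if next_token == EOS:
        break
print(ids[0])  # [0, 3, 3, 2]
```

The important point is that only the decoder session runs inside the loop; the encoder hidden states are computed once and reused at every step.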
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Run the above code snippet; model.generate() raises the error.
### Expected behavior
The ONNX inference session should be invoked through ORTModelForVision2Seq when given an encoder and a decoder. Also, model.generate() is currently not a valid function for generating IDs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20644/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20643
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20643/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20643/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20643/events
|
https://github.com/huggingface/transformers/issues/20643
| 1,481,396,578
|
I_kwDOCUB6oc5YTFFi
| 20,643
|
Use encoder_last_hidden_states instead of tokens as input to do beam-search on text generation (BART cases)
|
{
"login": "etsurin",
"id": 59410307,
"node_id": "MDQ6VXNlcjU5NDEwMzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/59410307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/etsurin",
"html_url": "https://github.com/etsurin",
"followers_url": "https://api.github.com/users/etsurin/followers",
"following_url": "https://api.github.com/users/etsurin/following{/other_user}",
"gists_url": "https://api.github.com/users/etsurin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/etsurin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etsurin/subscriptions",
"organizations_url": "https://api.github.com/users/etsurin/orgs",
"repos_url": "https://api.github.com/users/etsurin/repos",
"events_url": "https://api.github.com/users/etsurin/events{/privacy}",
"received_events_url": "https://api.github.com/users/etsurin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
```
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
    """
    Shift input ids one token to the right.
    """
    shifted_input_ids = input_ids.new_zeros(input_ids.shape)
    shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
    shifted_input_ids[:, 0] = decoder_start_token_id
    if pad_token_id is None:
        raise ValueError("self.model.config.pad_token_id has to be defined.")
    # replace possible -100 values in labels by `pad_token_id`
    shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
    return shifted_input_ids


model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
model.config.is_encoder_decoder = False
model.eval()
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
text = 'the team had decided to replace the rubber with plastic due to the budget limit .</s>when evaluating the material of the remote control , marketing admitted that sponginess was what most users desired , which was the feel given by rubber .</s>project manager agreed .</s>however, project manager pointed out that a plastic remote control was no worse than other remote controls in the market , so it would not be a step-back at least .</s>okey. That\'s great. '
inputs = tokenizer(text, return_tensors="pt")["input_ids"]
decoder_input_ids = shift_tokens_right(inputs, model.model.config.pad_token_id, model.model.config.decoder_start_token_id)
encoder_outputs = model.model.encoder(inputs)
print(encoder_outputs)
decoder_outputs = model.model.decoder(input_ids=decoder_input_ids, encoder_hidden_states=encoder_outputs[0]).last_hidden_state
sepa_logits = model.lm_head(decoder_outputs) + model.final_logits_bias
logits = model(inputs).logits  # You can check that logits == sepa_logits
```
I want to do some experiments on text summarization tasks by separating the BART model into its parts and modifying its encoder outputs. I notice that the pipeline of the standard generate() function is text tokens -> (encoder) -> encoder outputs -> (decoder + beam search) -> output tokens. Instead of the tokens, I want to take the encoder last hidden states, whose size is [batch_size, sequence_length, 1024], as the input and generate text using only the BART decoder with beam search. However, I don't know how to modify the generate() function to implement this.
The code above verifies that the separation works correctly. I want to take encoder_outputs[0] as input (right now it is the direct output of the input tokens, but later I want to modify it, which is why I can't use the tokens-to-tokens generate function), then use the decoder part to generate output tokens via beam search.
I believe this is possible in theory, but the generate() function is quite complicated and I need some hints on how to modify it. Thanks!
@patrickvonplaten
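One possible approach — sketched here with a tiny randomly-initialized BART so it runs without downloads; whether your installed `transformers` version supports passing `encoder_outputs` directly to `generate()` should be double-checked — is to run the encoder yourself, modify its last hidden states, wrap them in a `BaseModelOutput`, and hand them to `generate()`. When `encoder_outputs` is supplied, `generate()` skips its own encoder pass and runs beam search on the given states.

```python
import torch
from transformers import BartConfig, BartForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

# A tiny randomly-initialized BART so the sketch runs without downloads;
# the real experiment would load facebook/bart-large instead.
config = BartConfig(
    vocab_size=64, d_model=16, encoder_layers=1, decoder_layers=1,
    encoder_attention_heads=2, decoder_attention_heads=2,
    encoder_ffn_dim=32, decoder_ffn_dim=32, max_position_embeddings=64,
)
model = BartForConditionalGeneration(config).eval()

input_ids = torch.randint(4, 64, (1, 10))
with torch.no_grad():
    hidden = model.model.encoder(input_ids).last_hidden_state  # run the encoder yourself
hidden = hidden + 0.1  # <- any modification of the encoder states goes here
enc = BaseModelOutput(last_hidden_state=hidden)

# generate() decodes with beam search directly from the modified states.
out = model.generate(encoder_outputs=enc, num_beams=4, max_length=8)
print(out.shape)
```

With the real checkpoint, the same pattern would apply after `encoder_outputs = model.model.encoder(inputs)` in the snippet above.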
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20643/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20642
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20642/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20642/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20642/events
|
https://github.com/huggingface/transformers/pull/20642
| 1,481,253,075
|
PR_kwDOCUB6oc5EmijO
| 20,642
|
pin TF 2.11 in docker files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Same as #20635 but for dockerfiles for GH actions.
(I already built the images and re-launch the daily CI)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20642/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20642",
"html_url": "https://github.com/huggingface/transformers/pull/20642",
"diff_url": "https://github.com/huggingface/transformers/pull/20642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20642.patch",
"merged_at": 1670424409000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20641
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20641/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20641/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20641/events
|
https://github.com/huggingface/transformers/pull/20641
| 1,481,250,773
|
PR_kwDOCUB6oc5EmiBz
| 20,641
|
Speed up git-lfs detection on error
|
{
"login": "xloem",
"id": 279585,
"node_id": "MDQ6VXNlcjI3OTU4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/279585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xloem",
"html_url": "https://github.com/xloem",
"followers_url": "https://api.github.com/users/xloem/followers",
"following_url": "https://api.github.com/users/xloem/following{/other_user}",
"gists_url": "https://api.github.com/users/xloem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xloem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xloem/subscriptions",
"organizations_url": "https://api.github.com/users/xloem/orgs",
"repos_url": "https://api.github.com/users/xloem/repos",
"events_url": "https://api.github.com/users/xloem/events{/privacy}",
"received_events_url": "https://api.github.com/users/xloem/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
Prevent read and discard of entire checkpoint file.
# What does this PR do?
Changes an error handler that checks only 7 bytes so that it reads just those 7 bytes rather than an entire checkpoint file.
Fixes # (issue)
Issue not opened. I encountered a memory allocation crash here when exploring disk offloading.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20641/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20641",
"html_url": "https://github.com/huggingface/transformers/pull/20641",
"diff_url": "https://github.com/huggingface/transformers/pull/20641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20641.patch",
"merged_at": 1670424662000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20640
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20640/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20640/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20640/events
|
https://github.com/huggingface/transformers/pull/20640
| 1,481,149,178
|
PR_kwDOCUB6oc5EmLA_
| 20,640
|
Convert the data type of embeddings and masks to bfloat16 for torch amp
|
{
"login": "CaoE",
"id": 23565213,
"node_id": "MDQ6VXNlcjIzNTY1MjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23565213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaoE",
"html_url": "https://github.com/CaoE",
"followers_url": "https://api.github.com/users/CaoE/followers",
"following_url": "https://api.github.com/users/CaoE/following{/other_user}",
"gists_url": "https://api.github.com/users/CaoE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaoE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaoE/subscriptions",
"organizations_url": "https://api.github.com/users/CaoE/orgs",
"repos_url": "https://api.github.com/users/CaoE/repos",
"events_url": "https://api.github.com/users/CaoE/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaoE/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR!\r\nWe're not really interested in adding optimizations like this in each model file, as if we were to do this for all possible hardwares and dtypes, the code would be unreadable. Since each model is defined in its own file, it's easy for a user to customize the code for their specific need (like here for bfloat16).",
"@sgugger Thank you for your comments! Yes, adding optimizations like this in each model file is not general. Users can customize the code for their need based on existing models. I posted the PR for testing and discussing, and also want to see if there is any way in huggingface to avoid such additional data type conversions since for some tasks like masked-language-modeling+bert-base-cased there may **be 30% performance drop**.",
"@sgugger May I know if there is any way in huggingface to avoid such additional data type conversions ? Thanks for your any advice !"
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### Motivation
Add an attribute `use_torch_bfloat16_embeddings` to `PretrainedConfig` to indicate whether the bfloat16 data type should be used for embeddings and masks, and convert the data type of embeddings and masks to bfloat16 accordingly.
This will reduce the number of data type conversions between float and bfloat16 when running models with `torch.cpu.amp.autocast(dtype=torch.bfloat16)` and improve performance with little accuracy regression. Models contain many residual modules, which trigger data type promotion in the binary operations implemented by TensorIterator in PyTorch.
For example: `out = tensor1 + tensor2`
If `tensor1` is float and `tensor2` is bfloat16, PyTorch converts `tensor2` to float and produces a float output. When running models under amp for bfloat16, this conversion results in additional `to` operations, which reduce performance.
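A minimal sketch of the promotion behavior described above (plain PyTorch tensors, no model involved):

```python
import torch

a = torch.ones(4, dtype=torch.float32)
b = torch.ones(4, dtype=torch.bfloat16)

# Binary ops promote mixed dtypes: the bfloat16 operand is upcast to
# float32 before the add, which costs an extra `to` conversion.
out = a + b
print(out.dtype)  # torch.float32
```

Keeping both operands in bfloat16 avoids the hidden upcast, which is what the proposed config attribute aims for.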
### Testing
- Number of `to` operations
Model | wo/ bf16 embedding and masks| w/ bf16 embedding and masks
-- | -- | --
albert | 22 | 11
bert | 49 | 10
bart | 65 | 38
gpt2 | 56 | 29
distilbert | 40 | 19
roberta | 54 | 15
- Accuracy testing
Model | fp32 | amp bf16 | amp bf16 w/ bf16 embedding
-- | -- | -- | --
masked-language-modeling+bert-base-cased | 0.4819 | 0.4818 | 0.4819
masked-language-modeling+distilbert-base-cased | 0.3143 | 0.3158 | 0.3152
multiple-choice+distilbert-base-cased | 0.246 | 0.2461 | 0.2454
multiple-choice+google-electra-base-discriminator | 0.1193 | 0.1194 | 0.1201
text-classification+google-electra-base-generator | 0.6901 | 0.6838 | 0.6838
token-classification+google-electra-base-generator | 0.0414 | 0.0411 | 0.041
token-classification+gpt2 | 0.0379 | 0.0379 | 0.0379
albert | 0.453431373 | 0.428921569 | 0.446078431
distilbert | 0.681372549 | 0.681372549 | 0.681372549
roberta | 0.683823529 | 0.683823529 | 0.683823529
xlm-roberta | 0.637254902 | 0.637254902 | 0.639705882
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20640/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20640",
"html_url": "https://github.com/huggingface/transformers/pull/20640",
"diff_url": "https://github.com/huggingface/transformers/pull/20640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20640.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20639
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20639/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20639/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20639/events
|
https://github.com/huggingface/transformers/pull/20639
| 1,480,740,627
|
PR_kwDOCUB6oc5EktJ-
| 20,639
|
Added type hints to modeling_tf_encoder_decoder.py
|
{
"login": "Batese2001",
"id": 69521504,
"node_id": "MDQ6VXNlcjY5NTIxNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/69521504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Batese2001",
"html_url": "https://github.com/Batese2001",
"followers_url": "https://api.github.com/users/Batese2001/followers",
"following_url": "https://api.github.com/users/Batese2001/following{/other_user}",
"gists_url": "https://api.github.com/users/Batese2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Batese2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Batese2001/subscriptions",
"organizations_url": "https://api.github.com/users/Batese2001/orgs",
"repos_url": "https://api.github.com/users/Batese2001/repos",
"events_url": "https://api.github.com/users/Batese2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Batese2001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This pull request adds type hints for modeling_tf_encoder_decoder.py as outlined in Issue #16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20639/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20639",
"html_url": "https://github.com/huggingface/transformers/pull/20639",
"diff_url": "https://github.com/huggingface/transformers/pull/20639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20639.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20638
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20638/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20638/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20638/events
|
https://github.com/huggingface/transformers/issues/20638
| 1,480,712,680
|
I_kwDOCUB6oc5YQeHo
| 20,638
|
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
|
{
"login": "vitthal-bhandari",
"id": 51982356,
"node_id": "MDQ6VXNlcjUxOTgyMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/51982356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitthal-bhandari",
"html_url": "https://github.com/vitthal-bhandari",
"followers_url": "https://api.github.com/users/vitthal-bhandari/followers",
"following_url": "https://api.github.com/users/vitthal-bhandari/following{/other_user}",
"gists_url": "https://api.github.com/users/vitthal-bhandari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitthal-bhandari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitthal-bhandari/subscriptions",
"organizations_url": "https://api.github.com/users/vitthal-bhandari/orgs",
"repos_url": "https://api.github.com/users/vitthal-bhandari/repos",
"events_url": "https://api.github.com/users/vitthal-bhandari/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitthal-bhandari/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code. We also have a [step by step guide](https://huggingface.co/course/chapter8/4?fw=pt) to help debug issues with the `Trainer`.\r\n\r\nIn this instance you did not convert your labels from strings to integers, so the data collator cannot build a batch. Also you shouldn't share your huggingface token in a notebook like this, I recommend you invalidate it :-)",
"> Please use the [forums](https://discuss.huggingface.co/) to help debug your code. We also have a [step by step guide](https://huggingface.co/course/chapter8/4?fw=pt) to help debug issues with the `Trainer`.\r\n> \r\n> In this instance you did not convert your labels from strings to integers, so the data collator cannot build a batch. Also you shouldn't share your huggingface token in a notebook like this, I recommend you invalidate it :-)\r\n\r\nThank you for the suggestions @sgugger \r\nI do have a quick question - shouldn't the below snippet take care of converting labels to ids and back?\r\n```\r\n id2label=id2label, \r\n label2id=label2id, \r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same issue without labels. It works for the first 24 iterations but then suddenly it stops padding. I have two images `[image1 PIL, image2 PIL]` and two sentences `[sentence1, sentence2]`.\r\n`inputs = preprocess([image1 PIL, image2 PIL], [sentence1, sentence2], return_tensors=\"pt\", padding=True, truncation=True).to(device)`\r\n\r\nThe first iterations produce the correct output:\r\n`[[101, 1037, 6302, 1997, 1037, 3287, 5093, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1037, 6302, 1997, 1037, 2931, 5093, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]\r\n`\r\n\r\nThen suddenly it does not:\r\n`[[101, 1037, 6302, 1997, 1037, 13755, 2492, 1012, 102], [101, 1037, 6302, 1997, 1037, 3103, 15909, 2492, 1012, 102]]\r\n`\r\n\r\nBoth have padding='True'. The error is:\r\n\r\n`Traceback (most recent call last):\r\n File \"anaconda3/envs/SRI/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 718, in convert_to_tensors\r\n tensor = as_tensor(value)\r\nValueError: expected sequence of length 9 at dim 1 (got 10)`\r\n\r\nI cannot quite figure out why it suddently stops padding to the same lenghth. I have even tried setting the max length and the same thing happens. I have tested the text to make sure there are no changes there and the images. "
] | 1,670
| 1,675
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (Tesla T4)
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger maybe you could help?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Information
I am using the implementation of text classification given in the official [documentation](https://huggingface.co/docs/transformers/tasks/sequence_classification) from huggingface and the one given by @lewtun in his book.
I retrained an instance of sentence-transformers using contrastive loss on an unsupervised data dump and now want to finetune the above model on a labeled, binary dataset.
[This](https://github.com/huggingface/transformers/issues/15505) issue is similar, and I followed the fix, but to no avail.
# To reproduce
1. Run [this notebook](https://colab.research.google.com/drive/1VMl5l1O4lrgSMiGTh4yKIWEY2XGUgSIm?usp=sharing)
2. Trainer.train() should produce the following error:
```
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis)
716 if not is_tensor(value):
--> 717 tensor = as_tensor(value)
718
ValueError: too many dimensions 'str'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
9 frames
[<ipython-input-75-ce45916ac715>](https://localhost:8080/#) in <module>
7 )
8
----> 9 trainer.train()
[/usr/local/lib/python3.8/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1525 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1526 )
-> 1527 return inner_training_loop(
1528 args=args,
1529 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.8/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1747
1748 step = -1
-> 1749 for step, inputs in enumerate(epoch_iterator):
1750
1751 # Skip past any already trained steps if resuming training
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in __next__(self)
679 # TODO(https://github.com/pytorch/pytorch/issues/76750)
680 self._reset() # type: ignore[call-arg]
--> 681 data = self._next_data()
682 self._num_yielded += 1
683 if self._dataset_kind == _DatasetKind.Iterable and \
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self)
719 def _next_data(self):
720 index = self._next_index() # may raise StopIteration
--> 721 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
722 if self._pin_memory:
723 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index)
50 else:
51 data = self.dataset[possibly_batched_index]
---> 52 return self.collate_fn(data)
[/usr/local/lib/python3.8/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in __call__(self, features)
247
248 def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
--> 249 batch = self.tokenizer.pad(
250 features,
251 padding=self.padding,
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
3015 batch_outputs[key].append(value)
3016
-> 3017 return BatchEncoding(batch_outputs, tensor_type=return_tensors)
3018
3019 def create_token_type_ids_from_sequences(
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in __init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences)
208 self._n_sequences = n_sequences
209
--> 210 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
211
212 @property
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis)
731 "Please see if a fast version of this tokenizer is available to have this feature available."
732 )
--> 733 raise ValueError(
734 "Unable to create tensor, you should probably activate truncation and/or padding with"
735 " 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your"
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
### Expected behavior
The model should train without failure.
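For reference, a minimal sketch (with hypothetical label names) of mapping string labels to integer ids before training — the `labels` column the data collator batches must contain ints, not strings:

```python
# Hypothetical string labels as they might appear in the raw dataset.
raw_labels = ["positive", "negative", "positive"]

label2id = {"negative": 0, "positive": 1}
id2label = {i: name for name, i in label2id.items()}

# Convert the column to integer ids; this is the step the error points at.
labels = [label2id[name] for name in raw_labels]
print(labels)  # [1, 0, 1]
```

Note that passing `id2label`/`label2id` to the model config only labels the output logits for display; it does not convert the dataset column itself.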
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20638/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20637
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20637/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20637/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20637/events
|
https://github.com/huggingface/transformers/pull/20637
| 1,480,705,354
|
PR_kwDOCUB6oc5EklDH
| 20,637
|
added model resources for xlm-roberta
|
{
"login": "hazrulakmal",
"id": 24774385,
"node_id": "MDQ6VXNlcjI0Nzc0Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/24774385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hazrulakmal",
"html_url": "https://github.com/hazrulakmal",
"followers_url": "https://api.github.com/users/hazrulakmal/followers",
"following_url": "https://api.github.com/users/hazrulakmal/following{/other_user}",
"gists_url": "https://api.github.com/users/hazrulakmal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hazrulakmal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hazrulakmal/subscriptions",
"organizations_url": "https://api.github.com/users/hazrulakmal/orgs",
"repos_url": "https://api.github.com/users/hazrulakmal/repos",
"events_url": "https://api.github.com/users/hazrulakmal/events{/privacy}",
"received_events_url": "https://api.github.com/users/hazrulakmal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes [20055](https://github.com/huggingface/transformers/issues/20055)
- I created a link to task guides for causal language modeling and text classification. I think they are useful and applicable, but not directly related to the xlm-roberta model class per se.
- For causal language modeling, should I put it under the "text-generation" pipeline tag or create a subheader like multiple choice?
- I've checked notebooks from the community, but so far none of them cover xlm-roberta. Hopefully there'll be one soon!
- I've also found a few blog posts related to roberta but not xlm-roberta. Should we include them? They are technically the same architecture, just that one is multilingual and the other is not.
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR improves the docs of xlm-roberta by adding common and most used resources
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
@stevhliu please check the work and let me know if I need to make any changes. Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20637/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20637",
"html_url": "https://github.com/huggingface/transformers/pull/20637",
"diff_url": "https://github.com/huggingface/transformers/pull/20637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20637.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20636
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20636/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20636/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20636/events
|
https://github.com/huggingface/transformers/issues/20636
| 1,480,691,310
|
I_kwDOCUB6oc5YQY5u
| 20,636
|
CLIP not releasing GPU memory after each inference batch
|
{
"login": "corbt",
"id": 176426,
"node_id": "MDQ6VXNlcjE3NjQyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/176426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/corbt",
"html_url": "https://github.com/corbt",
"followers_url": "https://api.github.com/users/corbt/followers",
"following_url": "https://api.github.com/users/corbt/following{/other_user}",
"gists_url": "https://api.github.com/users/corbt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/corbt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/corbt/subscriptions",
"organizations_url": "https://api.github.com/users/corbt/orgs",
"repos_url": "https://api.github.com/users/corbt/repos",
"events_url": "https://api.github.com/users/corbt/events{/privacy}",
"received_events_url": "https://api.github.com/users/corbt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Maybe it's missing a `torch.no_grad`? Not sure though, cc @amyeroberts and @ArthurZucker if you have time to dive into this a bit more :-)",
"@sgugger thanks for the tip; I think that's the source of the issue! When I wrap my code in a `with torch.no_grad():` context it starts releasing GPU memory correctly. Not going to close the issue just yet since I'm not sure whether this *should* be the caller's responsibility or not when calling `get_image_features`, but at any rate the problem is solved for me. 🙂",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import glob

import torch
from datasets import Dataset, Image
from transformers import CLIPTokenizerFast, CLIPProcessor, CLIPModel

model_id = 'openai/clip-vit-base-patch32'
device = 'cuda'

tokenizer = CLIPTokenizerFast.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)
model = CLIPModel.from_pretrained(model_id).to(device)

images = glob.glob('/data/index/abo/images/small/*/*.jpg')
dataset = Dataset.from_dict({'image': images}).cast_column('image', Image())

for i in range(0, len(dataset), 500):
    print(i)
    batch = processor(
        text=None,
        images=dataset[i:i+500]['image'],
        return_tensors='pt'
    )['pixel_values'].to(device)
    model.get_image_features(batch)
```
### Expected behavior
Each time I call `model.get_image_features(batch)` about 20GB of GPU memory is consumed. However, the GPU memory is never cleared, so I quickly run into a `CUDA out of memory` error. This memory also isn't cleared if I manually call `torch.cuda.empty_cache()`.
It's possible I'm missing a step, but to me it looks like there may be a bug in the model code causing it not to free GPU memory after it finishes inference on a batch?
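For reference, a minimal sketch (tiny stand-in module, not CLIP) of wrapping inference in `torch.no_grad()`, which — as suggested in the comments — prevents autograd from retaining the computation graph and its activations across forward passes:

```python
import torch

model = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)

# Inside no_grad, no graph is recorded, so intermediate activations
# can be freed as soon as the forward pass finishes.
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False
```

The same pattern applies to `model.get_image_features(batch)` when the model is used for inference only.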
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20636/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20635
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20635/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20635/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20635/events
|
https://github.com/huggingface/transformers/pull/20635
| 1,480,403,630
|
PR_kwDOCUB6oc5EjgJv
| 20,635
|
Pin TensorFlow to the next release
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging without approval to fix main since all tests are passing."
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Pin TensorFlow to the next release, which should fix the current errors on the CI when trying to install `tensorflow-text`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20635/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20635",
"html_url": "https://github.com/huggingface/transformers/pull/20635",
"diff_url": "https://github.com/huggingface/transformers/pull/20635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20635.patch",
"merged_at": 1670369339000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20634
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20634/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20634/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20634/events
|
https://github.com/huggingface/transformers/pull/20634
| 1,480,193,639
|
PR_kwDOCUB6oc5EiwKt
| 20,634
|
Migrate torchdynamo to torch.compile
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This PR migrates the current integration with PyTorch 2.0 to use the entry point it introduced: `torch.compile`. As a consequence, the `torchdynamo` argument is deprecated in favor of `torch_compile_backend` and `torch_compile_mode`. Setting either will trigger a model compilation.
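A minimal sketch of the underlying entry point on a toy module (requires PyTorch >= 2.0; the `"eager"` debug backend is used here only so the sketch runs without a compiler toolchain — the new `TrainingArguments` fields map onto the `backend`/`mode` parameters of this call):

```python
import torch

model = torch.nn.Linear(4, 2)

# torch.compile wraps the module; the first call triggers tracing and
# compilation with the chosen backend.
compiled = torch.compile(model, backend="eager")

x = torch.randn(3, 4)
out = compiled(x)
print(out.shape)  # torch.Size([3, 2])
```

In practice the default `"inductor"` backend is what the Trainer integration targets.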
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20634/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20634",
"html_url": "https://github.com/huggingface/transformers/pull/20634",
"diff_url": "https://github.com/huggingface/transformers/pull/20634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20634.patch",
"merged_at": 1670516333000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20633
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20633/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20633/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20633/events
|
https://github.com/huggingface/transformers/pull/20633
| 1,480,123,602
|
PR_kwDOCUB6oc5EigK-
| 20,633
|
Fix link to speech encoder decoder model in speech recognition readme
|
{
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20633). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,691
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
The current README documentation points to `https://huggingface.co/docs/transformers/main/en/model_doc/speechencoderdecoder#speech-encoder-decoder-models`, which returns a 404 Not Found. The actual link appears to be `https://huggingface.co/docs/transformers/main/en/model_doc/speech-encoder-decoder#speech-encoder-decoder-models`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20633/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20633",
"html_url": "https://github.com/huggingface/transformers/pull/20633",
"diff_url": "https://github.com/huggingface/transformers/pull/20633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20633.patch",
"merged_at": 1670359601000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20632
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20632/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20632/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20632/events
|
https://github.com/huggingface/transformers/pull/20632
| 1,480,057,402
|
PR_kwDOCUB6oc5EiQ5a
| 20,632
|
fix natten installation
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
I made a mistake in #20546 and it ended up with
```bash
# For `dinat` model
RUN python3 -m pip install --no-cache-dir natten
RUN python3 -m pip install --no-cache-dir natten -f https://shi-labs.com/natten/wheels/$CUDA/
```
so the CUDA version was not installed (due to `Requirement already satisfied`)
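A sketch of the corrected ordering: installing in a single step through the wheel index, so pip never sees an already-satisfied CPU-only `natten` first (the `$CUDA` build argument is carried over from the snippet above):

```dockerfile
# For `dinat` model: install natten directly from the CUDA wheel index,
# so pip cannot short-circuit with "Requirement already satisfied".
RUN python3 -m pip install --no-cache-dir natten -f https://shi-labs.com/natten/wheels/$CUDA/
```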
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20632/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20632",
"html_url": "https://github.com/huggingface/transformers/pull/20632",
"diff_url": "https://github.com/huggingface/transformers/pull/20632.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20632.patch",
"merged_at": 1670361787000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20631
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20631/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20631/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20631/events
|
https://github.com/huggingface/transformers/pull/20631
| 1,480,015,061
|
PR_kwDOCUB6oc5EiHP4
| 20,631
|
Add missing is_decoder parameter
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
MEMBER
| null |
This PR fixes #20452 by adding the missing `is_decoder` parameter to the `BertConfig` docstring and other model docs with the same issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20631/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20631/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20631",
"html_url": "https://github.com/huggingface/transformers/pull/20631",
"diff_url": "https://github.com/huggingface/transformers/pull/20631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20631.patch",
"merged_at": 1670357939000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20630
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20630/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20630/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20630/events
|
https://github.com/huggingface/transformers/pull/20630
| 1,479,980,027
|
PR_kwDOCUB6oc5Eh_Rm
| 20,630
|
Fixed num_channels!=3 normalization training
|
{
"login": "layjain",
"id": 43300660,
"node_id": "MDQ6VXNlcjQzMzAwNjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/43300660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/layjain",
"html_url": "https://github.com/layjain",
"followers_url": "https://api.github.com/users/layjain/followers",
"following_url": "https://api.github.com/users/layjain/following{/other_user}",
"gists_url": "https://api.github.com/users/layjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/layjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/layjain/subscriptions",
"organizations_url": "https://api.github.com/users/layjain/orgs",
"repos_url": "https://api.github.com/users/layjain/repos",
"events_url": "https://api.github.com/users/layjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/layjain/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not exactly, the issue with your CircleCI permissions, the tests won't run.\r\n\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"I have been now stuck on this for a while. I refreshed the permissions and re-ran the CircleCI checks, and I get the error:\r\n\"Resource class docker for xlarge is not available for your project, or is not a valid resource class. This message will often appear if the pricing plan for this project does not support docker use.\"\r\n\r\n\r\n",
"You might need to push an empty commit to re-trigger the tests after refreshing your permissions.",
"Hi @layjain let me know whether you could pick this up :)",
"FYI I pushed an empty commit to trigger CI",
"Hi @NielsRogge , I have fixed the CircleCI permissions, can this be merged.",
"@layjain The CI is currently running under your profile and not the Hugging Face profile, and as such our tests are mostly not run (there should be 22 checks here). If you rebase your PR on main you will see a new check failing (we added a fix to detect this recently)."
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
Fixes #20580 and #19913
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20630/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20630/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20630",
"html_url": "https://github.com/huggingface/transformers/pull/20630",
"diff_url": "https://github.com/huggingface/transformers/pull/20630.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20630.patch",
"merged_at": 1673978780000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20628
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20628/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20628/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20628/events
|
https://github.com/huggingface/transformers/issues/20628
| 1,479,954,815
|
I_kwDOCUB6oc5YNlF_
| 20,628
|
`past_time_features` attribute for TimeSeriesTransformer is not optional
|
{
"login": "simonMoisselin",
"id": 20187820,
"node_id": "MDQ6VXNlcjIwMTg3ODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/20187820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonMoisselin",
"html_url": "https://github.com/simonMoisselin",
"followers_url": "https://api.github.com/users/simonMoisselin/followers",
"following_url": "https://api.github.com/users/simonMoisselin/following{/other_user}",
"gists_url": "https://api.github.com/users/simonMoisselin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonMoisselin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonMoisselin/subscriptions",
"organizations_url": "https://api.github.com/users/simonMoisselin/orgs",
"repos_url": "https://api.github.com/users/simonMoisselin/repos",
"events_url": "https://api.github.com/users/simonMoisselin/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonMoisselin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @kashif and @NielsRogge ",
"Thank you for the issue! My feeling was that, since the transformer is a permutation equivariant layer, time features should be mandatory. For the case when you do not have date times, you can add positional encoding of a size of your choosing.\r\n\r\nWhat are your thoughts about this @simonMoisselin ?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Is it possible to reopen this. \r\n\r\nI'm personally in favor of having the interface support both datasets that have {past,future}_time_features and those that do not contain them.\r\n\r\nHowever, it's not call. But would it be possible to update the documentation, if it's not changed? In its current state it is misleading. https://huggingface.co/docs/transformers/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel.forward.past_observed_mask\r\n",
"thanks, @nathanhack just to confirm, the discussion above was about the `past_time_features` being required and the like you have is for the observation mask... can you kindly clarify?",
"Correct. The link I gave was a mistake. past_observation_mask is right below past_time_feature. When I copied the link my page was correctly displaying past_time_features. Which is clearly wrong. Thank you for catching it and I'm sorry it was confusing. The correct link should have been:\r\n\r\nhttps://huggingface.co/docs/transformers/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel.forward.past_time_features",
"Not that this is the right place but, but lags_sequence also says it's optional but it can't be None and can't be an empty list as it will cause an exception on the following line: https://github.com/huggingface/transformers/blob/75a208ef66c0176fc12a4c98922728ced5befbf9/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L1445\r\nself.config.context_length + max(self.config.lags_sequence)",
"@kashif we should probably make lags sequence optional as these are just additional features",
"so `lags_sequence` is set to optional since if you do not specify it, it will default to a pre-specified array, namely [1, 2, 3, 4, 5, 6, 7]. I believe you can just ignore this option, and everything should work... It serves to offset the input so that we train to predict the next time step, as well as the \"output dim size\" of a \"token embedding\" so that the input vectors have some dimension to them (especially in the univariate setting) and also allows us to trade-off sequence length with feature size. If you do not want lags you can set the `lags_sequence=[1]` for example.",
"> Correct. The link I gave was a mistake. past_observation_mask is right below past_time_feature. When I copied the link my page was correctly displaying past_time_features. Which is clearly wrong. Thank you for catching it and I'm sorry it was confusing. The correct link should have been:\r\n> \r\n> https://huggingface.co/docs/transformers/model_doc/time_series_transformer#transformers.TimeSeriesTransformerModel.forward.past_time_features\r\n\r\nFixed in PR #21020 "
] | 1,670
| 1,676
| 1,674
|
NONE
| null |
### System Info
Hello,
I am trying to use `TimeSeriesTransformer` with `past_time_features=None`, but I don't see anything in the code handling the case where this parameter is not provided, for example inside the method `create_network_inputs`:
https://github.com/huggingface/transformers/blob/v4.25.1/src/transformers/models/time_series_transformer/modeling_time_series_transformer.py#L1566
```
# time feature
time_feat = (
torch.cat(
(
past_time_features[:, self._past_length - self.config.context_length :, ...],
future_time_features,
),
dim=1,
)
if future_values is not None
else past_time_features[:, self._past_length - self.config.context_length :, ...]
)
```
We should either update the documentation to make it mandatory, or update `create_network_inputs` to construct the inputs differently when `past_time_features` is not present.
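For datasets without datetime-derived features, one workaround is to synthesize sinusoidal positional encodings and pass those as the time features. The sketch below is a minimal NumPy illustration of that idea — the function name and shapes are assumptions for illustration, not part of the `transformers` API:

```python
import numpy as np


def positional_time_features(seq_len: int, num_features: int) -> np.ndarray:
    """Sinusoidal positional encodings of shape (seq_len, num_features).

    A stand-in for `past_time_features` when the dataset has no datetimes,
    so the permutation-equivariant transformer still receives position
    information for each time step.
    """
    positions = np.arange(seq_len)[:, None]      # (seq_len, 1)
    dims = np.arange(num_features)[None, :]      # (1, num_features)
    # Standard transformer angle rates: pairs of (sin, cos) at geometric frequencies.
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / num_features)
    angles = positions * angle_rates             # (seq_len, num_features)
    feats = np.where(dims % 2 == 0, np.sin(angles), np.cos(angles))
    return feats.astype(np.float32)
```

A batch dimension can then be added with `feats[None, ...]` and the array passed wherever `past_time_features` is expected.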
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
number_qty = 3
number_features = 100
prediction_length = 10
context_length = 10
nrows = 15
static_real_features = np.zeros((nrows, 100))
future_values = ...
past_values = ...
configuration = TimeSeriesTransformerConfig(input_size=number_qty,
num_static_real_features=number_features,
prediction_length=prediction_length,
context_length=context_length,
# past_time_features
)
model = TimeSeriesTransformerModel(configuration)
# model
model.forward(past_values,
static_real_features=static_real_features,
future_values=future_values,
past_time_features=None,
past_observed_mask=None,
static_categorical_features=None
)
```
### Expected behavior
We expect the `forward` method to work without `past_time_features` defined.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20628/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20627
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20627/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20627/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20627/events
|
https://github.com/huggingface/transformers/issues/20627
| 1,479,925,922
|
I_kwDOCUB6oc5YNeCi
| 20,627
|
When Pillow is not installed, importing from transformers.image_transforms raises an unclear NameError
|
{
"login": "convoliution",
"id": 7754936,
"node_id": "MDQ6VXNlcjc3NTQ5MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7754936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/convoliution",
"html_url": "https://github.com/convoliution",
"followers_url": "https://api.github.com/users/convoliution/followers",
"following_url": "https://api.github.com/users/convoliution/following{/other_user}",
"gists_url": "https://api.github.com/users/convoliution/gists{/gist_id}",
"starred_url": "https://api.github.com/users/convoliution/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/convoliution/subscriptions",
"organizations_url": "https://api.github.com/users/convoliution/orgs",
"repos_url": "https://api.github.com/users/convoliution/repos",
"events_url": "https://api.github.com/users/convoliution/events{/privacy}",
"received_events_url": "https://api.github.com/users/convoliution/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @amyeroberts ",
"Thanks for raising @convoliution ! I agree both the imports and error message could be improved and your suggestion. \r\n\r\nIt's highlighted another thing that needs to be addressed: adding center_crop to the `transformers` [module init](https://github.com/huggingface/transformers/blob/4f78bcb2871e0c51bec55edb87aadcaedce58069/src/transformers/__init__.py#L745). (I realised that if we import `rescale` using `from transformers.image_transforms import rescale` we get the same `ChannelDimension` error). \r\n\r\nI'll open up PRs to add to the init and to address the imports issue. ",
"^--- the second PR will need to be merged in to be fully resolved. ",
"Closing as the issue is now resolved: all image transforms can be safely imported and raise a clear error if Pillow is not installed in the environment if required. \r\n\r\n@convoliution Thanks again for raising. One change to note is that some transforms that were previously importable directly from `transformers` can now only be imported through the `image_transforms` module e.g.: \r\n`from transformers.image_transforms import rescale` c.f. #20704 \r\n"
] | 1,670
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: macOS-11.6.8-x86_64-i386-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Create and activate clean Python environment
```sh
python3 -m venv venv
source venv/bin/activate
```
2. Install `transformers` and its direct dependencies
```sh
pip install transformers
```
3. Attempt to import `transformers.image_transforms` or [one of its publicly-documented members](https://huggingface.co/docs/transformers/internal/image_processing_utils#transformers.image_transforms.center_crop)
```sh
python -c 'from transformers.image_transforms import center_crop'
```
4. Encounter a `NameError`
```
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/miliu/venv/lib/python3.10/site-packages/transformers/image_transforms.py", line 51, in <module>
def to_channel_dimension_format(image: np.ndarray, channel_dim: Union[ChannelDimension, str]) -> np.ndarray:
NameError: name 'ChannelDimension' is not defined
```
### Expected behavior
Rather than a `NameError` on import (caused by [`to_channel_dimension_format()`'s signature type annotation containing `ChannelDimension`](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/image_transforms.py#L51), which is [conditionally imported](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/image_transforms.py#L29) only [when `PIL` is available](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/utils/import_utils.py#L566-L567)), I would've expected something like `transformers.rescale`'s user experience, where it helpfully recommends installing `Pillow` when one attempts to use it:
```python
>>> from transformers import rescale
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
>>> rescale()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/miliu/venv/lib/python3.10/site-packages/transformers/utils/dummy_vision_objects.py", line 14, in rescale
requires_backends(rescale, ["vision"])
File "/Users/miliu/venv/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 997, in requires_backends
raise ImportError("".join(failed))
ImportError:
rescale requires the PIL library but it was not found in your environment. You can install it with pip:
`pip install pillow`. Please note that you may need to restart your runtime after installation.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20627/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20627/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20626
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20626/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20626/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20626/events
|
https://github.com/huggingface/transformers/pull/20626
| 1,479,922,090
|
PR_kwDOCUB6oc5Ehx-N
| 20,626
|
add in layer tf clip text tokenizer
|
{
"login": "piEsposito",
"id": 47679710,
"node_id": "MDQ6VXNlcjQ3Njc5NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piEsposito",
"html_url": "https://github.com/piEsposito",
"followers_url": "https://api.github.com/users/piEsposito/followers",
"following_url": "https://api.github.com/users/piEsposito/following{/other_user}",
"gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions",
"organizations_url": "https://api.github.com/users/piEsposito/orgs",
"repos_url": "https://api.github.com/users/piEsposito/repos",
"events_url": "https://api.github.com/users/piEsposito/events{/privacy}",
"received_events_url": "https://api.github.com/users/piEsposito/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Just need to figure out where to append `eos_token` and `bos_token` within the tokenizers.",
"cc @Rocketknight1 so it's on your radar when the PR is ready :-) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20626). All of your documentation changes will be reflected on that endpoint.",
"Clip `</w>` formatting is too cursed, I'm thinking of jump ship and do the tokenizer from Roberta instead haha.",
"I was gonna jump ship, but then this absolute beast @pedrogengo came in and found a magic way to make it work. Making him a coauthor of the PR bc of that.\r\n\r\nWe are not supporting batches (yet).",
"We now have batch tokenization implemented and working :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"We are still working on it!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Did this ever get resolved?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,707
| 1,707
|
CONTRIBUTOR
| null |
# What does this PR do?
- Adds an in-layer `TFCLIPTokenizer` to enable serialization and serving the model with TF Serving
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Addresses first step of #19992
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -> https://github.com/huggingface/transformers/issues/19992
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20626/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20626",
"html_url": "https://github.com/huggingface/transformers/pull/20626",
"diff_url": "https://github.com/huggingface/transformers/pull/20626.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20626.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20625
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20625/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20625/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20625/events
|
https://github.com/huggingface/transformers/pull/20625
| 1,479,730,176
|
PR_kwDOCUB6oc5EhF3G
| 20,625
|
Fix donut image processor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks @amyeroberts . LGTM, but I don't see any change related to\r\n> \r\n> ```\r\n> Resolve bug where size wasn't passed to do_align_axis\r\n> ```\r\n> \r\n> Do I miss anything?\r\n\r\nNope - I've pushed it now :) ",
  "@sgugger @ydshieh This also uncovered another sneaky bug when resizing: \r\n\r\n* When resizing, the image is converted to `PIL.Image.Image` from numpy. The channel dimension format of the input image is also inferred before resizing. \r\n* When the image is converted back to numpy the image is always in `\"ChannelDimension.LAST\"` format\r\n* A final `to_channel_dimension_format` call is made to make sure the output resized image is in the same channel dimension format as the input.\r\n* In `to_channel_dimension_format` the input image (resized in this case) channel dimension format is inferred and compared to the requested format. \r\n* If the `height` dimension is of size 3 or 1, then the format is incorrectly inferred as `ChannelDimension.FIRST`\r\n* This resulted in images in the incorrect format being returned after resizing\r\n\r\nFor practical purposes, this doesn't cause an issue as it's very unlikely an image has a height dimension of 3. However, it results in flaky tests and is a bug. \r\n\r\nI've added an optional `input_channel_dimension` argument to `to_channel_dimension_format` which resolves this, and additional tests for our `resize` functionality which previously failed and now pass with this update. "
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This PR addresses failing integration tests for the Donut image processor which involves four main changes:
* Resolve bug where `size` wasn't passed to `do_align_axis`
* Fix a bug in the `get_resize_output_image_size` function which didn't take `max_size` into account ([inherited from previous resize without fixing](https://github.com/huggingface/transformers/blob/7586a1a367f5974e099e1be2fa8a751aa766179f/src/transformers/image_utils.py#L451))
* Update logic for getting output size in `thumbnail` method - ensuring the image dimensions are never increased.
* Update test values to reflect changes in resizing logic for thumbnail creation - see notes below.
### Changing resizing logic for `thumbnail` method
The DonutFeatureExtractor used the [Pillow thumbnail functionality](https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/src/transformers/models/donut/feature_extraction_donut.py#LL109C18-L109C18) to resize images, which was [replaced with reusing `resize`](https://github.com/huggingface/transformers/blob/bf9a5882a7125a6050aaad0f52257f07df062d6a/src/transformers/models/donut/image_processing_donut.py#L226) in the image_transforms library. This was done primarily because `image.thumbnail` modifies in place and uses [Pillow's resize](https://github.com/python-pillow/Pillow/blob/1e28c8cffd8492af6bf5df2045e7ffe08b124033/src/PIL/Image.py#LL2538C13-L2538C13) with some additional logic for calculating the output size. Unlike `resize`, which will resize an image to the requested `(height, width)`, `thumbnail` will produce an image that is no larger than the original image or the requested size, i.e. it will scale down an image while preserving the aspect ratio, c.f. [Pillow docs](https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.thumbnail).
This is a similar behaviour to torchvision when resizing:
* the shortest image edge is resized to `size` (int for torchvision, `min(requested_height, requested_width)` for Pillow)
* the other edge is resized to preserve the aspect ratio
* if the longest edge > `max_size`, the longest edge is resized to `max_size` and the shortest edge resized to preserve the aspect ratio.
The calculation of the other dimension to preserve the aspect ratio is slightly different between the libraries. In torchvision the length of the edge is found [using `int` to round](https://github.com/pytorch/vision/blob/511924c1ced4ce0461197e5caa64ce5b9e558aab/torchvision/transforms/functional.py#L383), whereas Pillow [rounds to the value which produces an aspect ratio closest to the original image](https://github.com/python-pillow/Pillow/blob/1e28c8cffd8492af6bf5df2045e7ffe08b124033/src/PIL/Image.py#L2505). The torchvision resizing logic is replicated in our image transforms library [here](https://github.com/huggingface/transformers/blob/ae1cffaf3cd42d0ab1d7529e3b3118725bca0bcf/src/transformers/image_transforms.py#L155).
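The shortest-edge resizing logic described above can be sketched as follows (a minimal illustration of the torchvision-style computation, not the library's actual `get_resize_output_image_size` implementation; the function name and signature here are made up for the example):

```python
def shortest_edge_resize(height, width, size, max_size=None):
    # Resize so the shortest edge equals `size`, scaling the other edge
    # to preserve the aspect ratio with torchvision-style int() rounding.
    short, long = min(height, width), max(height, width)
    new_short, new_long = size, int(size * long / short)
    # If the long edge would exceed max_size, cap it and rescale the short edge.
    if max_size is not None and new_long > max_size:
        new_short, new_long = int(max_size * short / long), max_size
    return (new_long, new_short) if height >= width else (new_short, new_long)

print(shortest_edge_resize(480, 640, 300))                # (300, 400)
print(shortest_edge_resize(480, 640, 300, max_size=350))  # (262, 350)
```

Pillow's `thumbnail` differs only in the rounding step (it picks the value whose aspect ratio is closest to the original), which is what produces the off-by-one output sizes discussed below.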
In the test [`tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::DonutModelIntegrationTest::test_inference_docvqa`](https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py#L816), the input image to the `thumbnail` method has dimension `(3713, 1920)`. The requested size is `(2560, 1920)`. `image.thumbnail` will resize to `(2560, 1373)` and our resizing logic (matching torchvision) will resize to `(2560, 1374)`.
Since the torchvision resizing logic is more consistent with the rest of the library, Donut is the only model in the library that used the Pillow thumbnail functionality, and Donut is more experimental than other models, I considered this an acceptable change.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20625/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20625",
"html_url": "https://github.com/huggingface/transformers/pull/20625",
"diff_url": "https://github.com/huggingface/transformers/pull/20625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20625.patch",
"merged_at": 1670526641000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20624
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20624/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20624/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20624/events
|
https://github.com/huggingface/transformers/issues/20624
| 1,479,716,982
|
I_kwDOCUB6oc5YMrB2
| 20,624
|
Whisper doesn't compute positional embeddings properly when given batches of prompt tokens
|
{
"login": "andyehrenberg",
"id": 32784181,
"node_id": "MDQ6VXNlcjMyNzg0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyehrenberg",
"html_url": "https://github.com/andyehrenberg",
"followers_url": "https://api.github.com/users/andyehrenberg/followers",
"following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions",
"organizations_url": "https://api.github.com/users/andyehrenberg/orgs",
"repos_url": "https://api.github.com/users/andyehrenberg/repos",
"events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyehrenberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker ",
  "Thanks for opening this good issue 🤗 I'll have a proper look, I think your insight is pretty good. ",
"I have similar issue while using whisper with `padding=True` and I got this error:\r\n\r\n```\r\nRuntimeError: The size of tensor a (359) must match the size of tensor b (1500) at non-singleton dimension 1\r\n```\r\n\r\nHowever, there isn't any issue if I use `padding=max_length`.",
"The padding in whisper should always be set to `max_length` , and you should not really modify it. We should probably prevent people from using just `True`. ",
"@hannan72's issue is separate to what I'm describing. But yes, padding should always be `max_length` - the issue I'm describing arises as a result of pad tokens being added to shorter sequences in batches (and won't raise any errors - it's just that Whisper's handling of multiple sequence lengths under the hood is flawed and would be fixed by computing `position_ids` based off `attention_mask`).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any update on this issue?",
"@samuelazran See https://github.com/huggingface/transformers/pull/21455 - feel free to run with that and give any fixes. My Flax PR also shows how to handle this.",
"> @samuelazran See #21455 - feel free to run with that and give any fixes. My Flax PR also shows how to handle this.\r\n\r\nThank you! I will test it.\r\nCould you provide a code example of using prompts for training / inference?\r\nI have an implementation but not sure yet:\r\nhttps://discuss.huggingface.co/t/adding-prompt-context-to-whisper-with-huggingface-transformers/31070",
"just like @samuelazran, would really like to see the example and the #21455 in, being able to use 🤗 transformers directly is very helpful compared to using the external (original) library.",
"Prompting is an ongoing PR here: https://github.com/huggingface/transformers/pull/22496\r\n\r\nRegarding #21455 - I think this should be handled by the aforementioned PR"
] | 1,670
| 1,681
| 1,673
|
CONTRIBUTOR
| null |
### System Info
v4.25.1 on M1 Mac with python 3.8
### Who can help?
@sanchit-gandhi @patrickvonplaten @anton-l
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When we want to run Whisper generation for a batch of samples with different prompt lengths (prefix tokens given to the decoder), positional embeddings for the decoder are improperly computed. It assumes all sequences have the same `past_key_values_length`, but this is not true in general.
Scenario:
`decoder_input_ids = [50361, 45431, 2584, 28682, 13, 50258, 50257, 50257]`
(`"<|startofprev|>Something completely irrelevant.<|startoftranscript|><|pad|><|pad|>"`)
`model.generate(input_features, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask)` will not give the correct output because at the beginning of decoding, the pad tokens won't be taken into account, so the positional embeddings will be off.
### Expected behavior
Instead of tracking `past_key_values_length`, it should use the attention mask to compute position ids. The current implementation is modeled on encoder-decoder architectures that would never do decoder prompting; it should take more inspiration from decoder-only models to handle prompting. This is done for the Flax implementation in #20479
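Deriving position ids from the attention mask, as suggested above, can be sketched like this (a minimal illustration of the technique, not Whisper's actual code; the function name is hypothetical):

```python
import torch

def position_ids_from_attention_mask(attention_mask):
    # Each real token gets its cumulative position among real tokens;
    # pad positions do not advance the counter and get a placeholder of 0.
    position_ids = attention_mask.long().cumsum(-1) - 1
    return position_ids.masked_fill(attention_mask == 0, 0)

mask = torch.tensor([[1, 1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1, 1]])
print(position_ids_from_attention_mask(mask))
# tensor([[0, 1, 2, 3, 0, 0],
#         [0, 1, 2, 3, 4, 5]])
```

With this, sequences of different prompt lengths in one batch each get correct positions regardless of how many pad tokens they carry.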
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20624/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20623
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20623/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20623/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20623/events
|
https://github.com/huggingface/transformers/pull/20623
| 1,479,454,157
|
PR_kwDOCUB6oc5EgGon
| 20,623
|
Update summarization `run_pipeline_test`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Update summarization `run_pipeline_test`.
A few more models can handle longer sequences, and won't raise the expected exception in this test
```python
with self.assertRaises(Exception):
outputs = summarizer("This " * 1000)
```
So we need to ignore those model config classes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20623/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20623",
"html_url": "https://github.com/huggingface/transformers/pull/20623",
"diff_url": "https://github.com/huggingface/transformers/pull/20623.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20623.patch",
"merged_at": 1670424372000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20622
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20622/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20622/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20622/events
|
https://github.com/huggingface/transformers/issues/20622
| 1,479,160,471
|
I_kwDOCUB6oc5YKjKX
| 20,622
|
Improved logo display in dark mode
|
{
"login": "shaonianche",
"id": 16186646,
"node_id": "MDQ6VXNlcjE2MTg2NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/16186646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaonianche",
"html_url": "https://github.com/shaonianche",
"followers_url": "https://api.github.com/users/shaonianche/followers",
"following_url": "https://api.github.com/users/shaonianche/following{/other_user}",
"gists_url": "https://api.github.com/users/shaonianche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaonianche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaonianche/subscriptions",
"organizations_url": "https://api.github.com/users/shaonianche/orgs",
"repos_url": "https://api.github.com/users/shaonianche/repos",
"events_url": "https://api.github.com/users/shaonianche/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaonianche/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### Feature request

Use [GitHub's](https://github.blog/changelog/2021-11-24-specify-theme-context-for-images-in-markdown/) theme-context feature to improve how the logo is displayed
### Motivation
...
### Your contribution


|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20622/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20621
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20621/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20621/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20621/events
|
https://github.com/huggingface/transformers/pull/20621
| 1,479,128,441
|
PR_kwDOCUB6oc5Ee88e
| 20,621
|
fix past_key_values in GPTNeoXForCausalLM.prepare_inputs_for_generation
|
{
"login": "ValeKnappich",
"id": 39188710,
"node_id": "MDQ6VXNlcjM5MTg4NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/39188710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ValeKnappich",
"html_url": "https://github.com/ValeKnappich",
"followers_url": "https://api.github.com/users/ValeKnappich/followers",
"following_url": "https://api.github.com/users/ValeKnappich/following{/other_user}",
"gists_url": "https://api.github.com/users/ValeKnappich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ValeKnappich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValeKnappich/subscriptions",
"organizations_url": "https://api.github.com/users/ValeKnappich/orgs",
"repos_url": "https://api.github.com/users/ValeKnappich/repos",
"events_url": "https://api.github.com/users/ValeKnappich/events{/privacy}",
"received_events_url": "https://api.github.com/users/ValeKnappich/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"After doing some more testing, I noticed another issue that might or might not be a bug. Currently, it's not possible to use anything else than `1` for `num_return_sequences`. Here is a MWE:\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPTNeoXForCausalLM, AutoTokenizer\r\n\r\n# Load model\r\ns = \"NinedayWang/PolyCoder-160M\"\r\nmodel = GPTNeoXForCausalLM.from_pretrained(s)\r\ntokenizer = AutoTokenizer.from_pretrained(s, pad_token=\"<|PAD|>\")\r\n\r\n# Create random prompt\r\nN_TOKENS = 100\r\nBATCH_SIZE=1\r\nNUM_RETURN_SEQUENCES=8\r\npkv = torch.rand(\r\n (\r\n BATCH_SIZE, # batch size \r\n N_TOKENS, # number of tokens\r\n 2 * model.config.num_hidden_layers, \r\n model.config.num_attention_heads, \r\n model.config.hidden_size // model.config.num_attention_heads\r\n )\r\n).permute([2, 0, 3, 1, 4]).split(2)\r\n\r\n# Tokenize\r\nenc = tokenizer(\"Hello world\", return_tensors=\"pt\")\r\nenc[\"attention_mask\"] = torch.cat((torch.ones((1, N_TOKENS)), enc[\"attention_mask\"]), dim=1)\r\n\r\n# Generate\r\nprint(\r\n tokenizer.decode(\r\n model.generate( \r\n **enc,\r\n past_key_values=pkv,\r\n max_new_tokens=100,\r\n pad_token_id=tokenizer.pad_token_id,\r\n do_sample=True,\r\n num_return_sequences=NUM_RETURN_SEQUENCES\r\n )[0],\r\n skip_special_tokens=True\r\n )\r\n)\r\n```\r\n\r\nLeads to \r\n```\r\nTraceback (most recent call last):\r\n File \"stuff/test.py\", line 32, in <module>\r\n num_return_sequences=2\r\n File \"/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/generation/utils.py\", line 1581, in generate\r\n **model_kwargs,\r\n File \"/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/generation/utils.py\", line 2538, in sample\r\n output_hidden_states=output_hidden_states,\r\n File 
\"/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 663, in forward\r\n return_dict=return_dict,\r\n File \"/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 552, in forward\r\n output_attentions=output_attentions,\r\n File \"/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 325, in forward\r\n output_attentions=output_attentions,\r\n File \"/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/pfs/data5/home/st/st_us-052400/st_st175337/thesis/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 148, in forward\r\n key = torch.cat((past_key, key), dim=-2)\r\nRuntimeError: Sizes of tensors must match except in dimension 2. Expected size 1 but got size 2 for tensor number 1 in the list.\r\n```\r\n\r\nIs that expected behavior? I can fix it by creating multiple prompts (see below) per input, but it seems unintuitive, and I don't see anything about it in the docs. 
Perhaps the docs should simply mention that.\r\n\r\n```\r\npkv = torch.rand(\r\n (\r\n BATCH_SIZE * NUM_RETURN_SEQUENCES, # <--- expand the batch size \r\n N_TOKENS, # number of tokens\r\n 2 * model.config.num_hidden_layers, \r\n model.config.num_attention_heads, \r\n model.config.hidden_size // model.config.num_attention_heads\r\n )\r\n).permute([2, 0, 3, 1, 4]).split(2)\r\n```\r\n\r\n",
"Hey @ValeKnappich 👋 \r\n\r\nThank you for the addition, I really think we should do this for all models for a better interface. In fact, the argument should be `past_key_values` and not `past`, [as mentioned in the original issue](https://github.com/huggingface/transformers/issues/20347#issuecomment-1346255761), but that's a deeper change. This PR is a quick fix for the problem, so I approve it.\r\n\r\nAs for `num_return_sequences`, let's open a new issue for it to avoid mixing too many things here :D",
"Hi, has this issue been resolved? I tried running the code snippet above:\r\n```\r\nimport torch\r\nfrom transformers import GPTNeoXForCausalLM, AutoTokenizer\r\n\r\n# Load model\r\ns = \"NinedayWang/PolyCoder-160M\"\r\nmodel = GPTNeoXForCausalLM.from_pretrained(s)\r\ntokenizer = AutoTokenizer.from_pretrained(s, pad_token=\"<|PAD|>\")\r\n\r\n# Create random prompt\r\nN_TOKENS = 100\r\nBATCH_SIZE=1\r\nNUM_RETURN_SEQUENCES=8\r\npkv = torch.rand(\r\n (\r\n BATCH_SIZE, # batch size \r\n N_TOKENS, # number of tokens\r\n 2 * model.config.num_hidden_layers, \r\n model.config.num_attention_heads, \r\n model.config.hidden_size // model.config.num_attention_heads\r\n )\r\n).permute([2, 0, 3, 1, 4]).split(2)\r\n\r\n# Tokenize\r\nenc = tokenizer(\"Hello world\", return_tensors=\"pt\")\r\nenc[\"attention_mask\"] = torch.cat((torch.ones((1, N_TOKENS)), enc[\"attention_mask\"]), dim=1)\r\n\r\n# Generate\r\nprint(\r\n tokenizer.decode(\r\n model.generate( \r\n **enc,\r\n past_key_values=pkv,\r\n max_new_tokens=100,\r\n pad_token_id=tokenizer.pad_token_id,\r\n do_sample=True,\r\n num_return_sequences=NUM_RETURN_SEQUENCES\r\n )[0],\r\n skip_special_tokens=True\r\n )\r\n)\r\n```\r\n\r\nand it returned with \r\n```\r\nRuntimeError: The size of tensor a (101) must match the size of tensor b (102) at non-singleton dimension 3\r\n```\r\n\r\nIs this a different error?",
"@ardywibowo the script I paste below works. But keep in mind that it is probably not doing what you expect: when `past_key_values` is passed, only the latest input token is considered (the all other previous tokens are supposed to be encoded in `past_key_valies`) -- in other words, \"Hello\" in \"Hello world\" is ignored when generating the next token, despite being present in the output text.\r\n\r\nTo understand why, you would have to dive into [this blog post](https://jalammar.github.io/illustrated-gpt2/) and into our `generate` code :)\r\n\r\n____________________________\r\n```py\r\nimport torch\r\nfrom transformers import GPTNeoXForCausalLM, AutoTokenizer\r\n\r\n# Load model\r\ns = \"NinedayWang/PolyCoder-160M\"\r\nmodel = GPTNeoXForCausalLM.from_pretrained(s)\r\ntokenizer = AutoTokenizer.from_pretrained(s, pad_token=\"<|PAD|>\")\r\n\r\n# Create random prompt\r\nN_TOKENS = 100\r\nBATCH_SIZE=1\r\npkv = torch.rand(\r\n (\r\n BATCH_SIZE, # batch size\r\n N_TOKENS, # number of tokens\r\n 2 * model.config.num_hidden_layers,\r\n model.config.num_attention_heads,\r\n model.config.hidden_size // model.config.num_attention_heads\r\n )\r\n).permute([2, 0, 3, 1, 4]).split(2)\r\n\r\n# Tokenize\r\nenc = tokenizer(\"Hello world\", return_tensors=\"pt\")\r\nenc[\"attention_mask\"] = torch.ones((1, N_TOKENS+1))\r\n\r\n# Generate\r\nprint(\r\n tokenizer.decode(\r\n model.generate(\r\n **enc,\r\n past_key_values=pkv,\r\n max_new_tokens=100,\r\n pad_token_id=tokenizer.pad_token_id,\r\n do_sample=True,\r\n )[0],\r\n skip_special_tokens=True\r\n )\r\n)\r\n```"
] | 1,670
| 1,686
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
@gante @sgugger
Fixes `past_key_values` in `GPTNeoXForCausalLM.prepare_inputs_for_generation`. Passing `past_key_values` to `model.generate` had no effect whatsoever, since the argument was swallowed. Described in Issue #20347 (note that the validation bug was fixed in PR #20353, but the argument was still not passed along to the forward method)
The attached commit fixes the issue on my end, i.e. I now get different results when passing `past_key_values` to `generate`, as opposed to before.
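The pattern of the fix can be sketched roughly as follows (a simplified illustration of forwarding `past_key_values` through `prepare_inputs_for_generation`, not the exact diff — the "dummy" cache and trimmed signature are for demonstration only):

```python
import torch

def prepare_inputs_for_generation(input_ids, past_key_values=None,
                                  attention_mask=None, **kwargs):
    # When a cache is supplied, only the last token needs to be fed to the
    # model; crucially, past_key_values must be returned so forward() sees it.
    if past_key_values is not None:
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "past_key_values": past_key_values,  # previously dropped on the floor
    }

inputs = prepare_inputs_for_generation(torch.tensor([[1, 2, 3]]),
                                       past_key_values="dummy-cache")
print(inputs["input_ids"])  # tensor([[3]])
```

Before the fix, the returned dict omitted the user-supplied cache, so `generate` silently ignored it.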
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20621/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20621",
"html_url": "https://github.com/huggingface/transformers/pull/20621",
"diff_url": "https://github.com/huggingface/transformers/pull/20621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20621.patch",
"merged_at": 1671623164000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20620
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20620/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20620/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20620/events
|
https://github.com/huggingface/transformers/pull/20620
| 1,479,005,995
|
PR_kwDOCUB6oc5EehCY
| 20,620
|
Whisper Timestamp processor and prediction
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looking forward to this PR",
"Example output, HF vs openai : \r\nModel : `openai/whisper-medium`.\r\n```python \r\n{'chunks': [{'text': \" Je m'appelle Claude.\", 'timestamp': (0.0, 2.0)},\r\n {'text': ' Je te coupe, Plow.', 'timestamp': (2.0, 4.0)},\r\n {'text': \" Let's just try it again.\", 'timestamp': (8.0, 10.0)},\r\n {'text': \" Je m'appelle Claude.\", 'timestamp': (10.0, 12.0)},\r\n {'text': ' Je te plie, Mlew.', 'timestamp': (12.0, 14.0)},\r\n {'text': \" Huh. It's not quite what I'm saying.\",'timestamp': (16.0, 20.0)},\r\n {'text': ' Really?', 'timestamp': (20.0, 22.0)},\r\n {'text': ' Sounds exactly the same to me.','timestamp': (22.0, 24.0)},\r\n {'text': ' It does? Really?', 'timestamp': (24.0, 26.0)},\r\n {'text': ' Yeah.', 'timestamp': (26.0, 28.0)},\r\n {'text': \" Let's try it again. Really listen.\",'timestamp': (29.0, 30.88)},\r\n {'text': ' Got it.', 'timestamp': (30.88, 32.48)},\r\n {'text': \" Je m'appelle Claude.\",'timestamp': (32.480, 35.24)},\r\n {'text': ' Je te flou-flee.', 'timestamp': (35.24, 37.28)},\r\n {'text': ' Oh, mon Dieu.', 'timestamp': (39.28, 40.28)},\r\n {'text': ' Oh, de fouf.', 'timestamp': (40.28, 41.88)},\r\n {'text': \" Je m'appelle Claude.\",'timestamp': (43.48, 46.6)},\r\n {'text': ' Je te call blue.', 'timestamp': (46.6, 48.24)},\r\n {'text': ' No!', 'timestamp': (48.24, 50.44)},\r\n {'text': ' Okay, maybe if we just break it down.','timestamp': (50.44, 53.28)},\r\n {'text': \" Okay, let's just try it one syllable at a time.\",'timestamp': (53.28, 56.08)},\r\n {'text': ' Okay, so repeat after me.', 'timestamp': (56.08, 58.08)},\r\n {'text': ' Pardon me.', 'timestamp': (58.0, 59.0)},\r\n {'text': ' Je...', 'timestamp': (59.0, 60.0)},\r\n {'text': ' Je...', 'timestamp': (60.0, 61.0)},\r\n {'text': ' Ma...', 'timestamp': (61.0, 62.0)},\r\n {'text': ' Ma...', 'timestamp': (62.0, 63.0)},\r\n {'text': ' Pelle.', 'timestamp': (63.0, 64.0)},\r\n {'text': ' Pelle.', 'timestamp': (64.0, 65.0)},\r\n {'text': ' Great!', 'timestamp': (65.0, 66.0)},\r\n 
{'text': ' Okay, faster.', 'timestamp': (66.0, 67.0)},\r\n {'text': ' Je...', 'timestamp': (67.0, 68.0)},\r\n {'text': ' Je...', 'timestamp': (68.0, 69.0)},\r\n {'text': ' Ma...', 'timestamp': (69.0, 70.0)},\r\n {'text': ' Pelle.', 'timestamp': (70.0, 71.0)},\r\n {'text': ' Pelle.', 'timestamp': (71.0, 72.0)},\r\n {'text': \" Je m'appelle.\", 'timestamp': (72.0, 73.0)},\r\n {'text': ' Mais pour pour?', 'timestamp': (73.0, 74.0)},\r\n {'text': \" It's too hard.\", 'timestamp': (74.0, 75.0)},\r\n {'text': \" I can't teach you.\", 'timestamp': (75.0, 76.0)},\r\n {'text': ' What are you doing?', 'timestamp': (76.0, 77.0)},\r\n {'text': ' I have to go before I put your head through a wall.','timestamp': (77.0, 78.0)},\r\n {'text': \" Don't go!\", 'timestamp': (78.0, 79.0)},\r\n {'text': \" Don't go!\", 'timestamp': (79.0, 80.0)},\r\n {'text': ' I need you!', 'timestamp': (80.0, 81.0)},\r\n {'text': ' My addition is tomorrow!', 'timestamp': (81.0, 82.0)},\r\n {'text': ' Cha-blu-bla!', 'timestamp': (82.0, 83.0)},\r\n {'text': ' Mille-la-pille!', 'timestamp': (83.0, 84.0)},\r\n {'text': ' Oum-bla!', 'timestamp': (84.0, 85.0)},\r\n {'text': ' Hola!', 'timestamp': (82.56, 83.4)}],\r\n 'text': \" Je m'appelle Claude. Je te coupe, Plow. Let's just try it again. Je \"\r\n \"m'appelle Claude. Je te plie, Mlew. Huh. It's not quite what I'm \"\r\n 'saying. Really? Sounds exactly the same to me. It does? Really? '\r\n \"Yeah. Let's try it again. Really listen. Got it. Je m'appelle \"\r\n \"Claude. Je te flou-flee. Oh, mon Dieu. Oh, de fouf. Je m'appelle \"\r\n 'Claude. Je te call blue. No! Okay, maybe if we just break it down. '\r\n \"Okay, let's just try it one syllable at a time. Okay, so repeat \"\r\n 'after me. Pardon me. Je... Je... Ma... Ma... Pelle. Pelle. Great! '\r\n \"Okay, faster. Je... Je... Ma... Pelle. Pelle. Je m'appelle. Mais \"\r\n \"pour pour? It's too hard. I can't teach you. What are you doing? I \"\r\n \"have to go before I put your head through a wall. 
Don't go! Don't \"\r\n 'go! I need you! My addition is tomorrow! Cha-blu-bla! '\r\n 'Mille-la-pille! Oum-bla! Hola! Boo.'}\r\n```\r\n```\r\n[(\" Je m'appelle Claude.\", 0.0, 2.0),\r\n (' Je te coupe, Plow.', 2.0, 4.0),\r\n (\" Let's just try it again.\", 8.0, 10.0),\r\n (\" Je m'appelle Claude.\", 10.0, 12.0),\r\n (' Je te plie, Mlew.', 12.0, 14.0),\r\n (\" Huh. It's not quite what I'm saying.\", 16.0, 20.0),\r\n (' Really?', 20.0, 22.0),\r\n (' Sounds exactly the same to me.', 22.0, 24.0),\r\n (' It does? Really?', 24.0, 26.0),\r\n (' Yeah.', 26.0, 28.0),\r\n (\" Let's try it again. Really listen.\", 28.0, 30.0),\r\n (' Got it.', 30.0, 32.0),\r\n (\" Je m'appelle Claude.\", 32.0, 34.0),\r\n (' Je te plie, Mlew.', 34.0, 36.0),\r\n (' Oh, mon Dieu.', 38.0, 40.0),\r\n (' Oh, de fouf.', 40.0, 42.0),\r\n (\" Je m'appelle Claude.\", 42.0, 44.0),\r\n (' Je te coupe, Mlew.', 44.0, 46.0),\r\n (' No!', 46.0, 48.0),\r\n (' Okay.', 48.0, 50.0),\r\n (' Maybe if we just break it down.', 50.0, 52.0),\r\n (\" Okay, let's just try it one syllable at a time.\", 52.0, 54.0),\r\n (' Okay, so repeat after me.', 54.0, 56.0),\r\n (\" Je m'appelle.\", 56.0, 60.0),\r\n (' Great. Okay, faster.', 60.0, 62.0),\r\n (\" Je m'appelle.\", 62.0, 64.0),\r\n (\" Je m'appelle.\", 64.0, 66.0),\r\n (' Mais pour pour?', 66.0, 68.0),\r\n (\" It's too hard. I can't teach you.\", 70.0, 72.0),\r\n (' What are you doing?', 72.0, 74.0),\r\n (' I have to go before I put your head through a wall.', 74.0, 76.0),\r\n (\" Don't go. I need you.\", 76.0, 78.0),\r\n (' My audition is tomorrow.', 78.0, 80.0),\r\n (' Cha-blah-blah.', 80.0, 82.0),\r\n (' Mela-pi.', 82.0, 84.0),\r\n (' Hola!', 84.0, 86.0),\r\n (' Boo!', 86.0, 114.0)]\r\n```\r\nNote that the differences in the text are related to the logit processors that they updated. But overall it is very similar, but 3x faster 😉 ",
"Looking close!!!\r\n🙏🏾🤞🏾 this review gets pushed through!! ",
"> Looking close!!! 🙏🏾🤞🏾 this review gets pushed through!!\r\n\r\nI'm too thirsty lol. Love yall and appreciate all the work being done! ",
"> > Looking close!!! 🙏🏾🤞🏾 this review gets pushed through!!\r\n> \r\n> I'm too thirsty lol. Love yall and appreciate all the work being done!\r\n\r\n@Narsil 🫠👀",
"🙌🏾",
"Really appreciate the constant updates to get this finalized! Thanks! ",
"This should just need a little code cleaning / documenting and will be good for a final review! ",
"Wow! That was a lot of refactoring. Ready for a final review @Narsil ",
"@TheExGenesis the current implementation should not really be different with the `Trie` approach. Problem with the `Trie` was that it does not really keep track of the longest, but rather the first longest common sequence. It also assumed that the entire sequence had to be present (an extra loop would have had to be added). \r\n\r\nWe can still discuss cases with a term appearing twice, in the current implementation the last occurence would be chosen for merge. Do you have a specific example in mind? ",
"@ArthurZucker I'm choosing the first occurrence rather than the last one and it's working well. Otherwise, if an expression is the actual prefix, and occurs later in the sequence, the beginning of the sequence gets eaten. Also, I think you should be discounting stride_right as well when incrementing chunk time. I apologize for not making specific code recommendations right now, I'm a little short on time and working in my own messy environment.",
"I think it is pretty random and would need a heuristic for small sequences. If you merge on a single term, you should probably be using just a little bit more chunks length. Have you tried both versions? 😉 ",
"> I think it is pretty random and would need a heuristic for small sequences. If you merge on a single term, you should probably be using just a little bit more chunks length. Have you tried both versions? 😉\r\n\r\nI'm sorry I don't understand, are you responding to the first or second point? If to the second point, you really want the timestamps to be accurate otherwise they won't match up with the audio.",
"No I was talking about the first point, did you try taking the last occurence as well? Just wondering if you have some kind of experimental benchmark on this. \r\n\r\nThe `stride_right` is not used, based on [this](https://huggingface.co/blog/asr-chunking), the stride right is part of the speech that is disregarded. It is not very intuitive, but basically the stride right does not influence the beginning time of the next sequence.",
"> No I was talking about the first point, did you try taking the last occurence as well? Just wondering if you have some kind of experimental benchmark on this.\r\n> \r\n> The `stride_right` is not used, based on [this](https://huggingface.co/blog/asr-chunking), the stride right is part of the speech that is disregarded. It is not very intuitive, but basically the stride right does not influence the beginning time of the next sequence.\r\n\r\nYeah I tried taking the last occurrence, it ate like 10s of audio, and then I changed it.\r\n\r\nre stride_right - [chunk_iter does use stride_left and stride_right ](https://github.com/huggingface/transformers/blob/15573920bccf879d621e99e21367e216017adf7d/src/transformers/pipelines/automatic_speech_recognition.py#L59), and I verified this empirically, the timestamps are only correct when I take both left and right strides into account. ",
"Oh okay thanks. Would be awesome if you have a sample audio on which I could work on 😉 \r\nI think you are making a good point, `chunk_iter` is indeed stepping w.r.t the stride right! My bad. ",
"> Oh okay thanks. Would be awesome if you have a sample audio on which I could work on 😉 I think you are making a good point, `chunk_iter` is indeed stepping w.r.t the stride right! My bad.\r\n\r\nAll good :) I've been using the first few minutes of [this podcast](https://www.iheart.com/podcast/256-global-voices-podcast-31091854/episode/bangladeshs-new-years-celebration-of-diversity-63758318/) "
] | 1,670
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
This will add support for correct `timestamp` prediction in the generation, and should update the ASR pipeline to use these when generating on longer audio file.
The rough idea is that when timestamps are generated, the model is more *aware* of the timing and generates `<|endoftext|>` tokens to fill in the silence. So the token-to-time mapping is approximately a linear regression, and provides valuable information for matching the beginning and end of chunks of a longer audio file.
By using both the fact that **timestamp** tokens always come in pairs when separating two sentences, and the approximate **token-to-time** mapping (see [here](https://github.com/openai/whisper/blob/main/whisper/transcribe.py#L134)), we should improve performance and also have timestamp prediction.
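As a rough illustration of the token-to-time idea described in this PR body, here is a minimal sketch. It is not the PR's actual implementation; the 0.02 s precision and the `timestamp_begin` offset are assumptions about Whisper's timestamp vocabulary, and the helper names are hypothetical.

```python
# Hypothetical sketch of mapping Whisper timestamp tokens back to seconds.
# Assumption (not taken from the PR): timestamp tokens form a contiguous id
# range starting at `timestamp_begin`, with each step worth 0.02 seconds.
TIME_PRECISION = 0.02  # assumed seconds per timestamp token


def token_to_time(token_id: int, timestamp_begin: int) -> float:
    """Convert a timestamp token id to a time offset in seconds."""
    if token_id < timestamp_begin:
        raise ValueError("not a timestamp token")
    return (token_id - timestamp_begin) * TIME_PRECISION


def pair_to_span(start_tok: int, end_tok: int, timestamp_begin: int,
                 chunk_offset: float = 0.0) -> tuple:
    """Timestamp tokens come in pairs around a segment; adding the chunk's
    global offset gives absolute (start, end) times for that segment."""
    return (chunk_offset + token_to_time(start_tok, timestamp_begin),
            chunk_offset + token_to_time(end_tok, timestamp_begin))
```

With a chunked pipeline, `chunk_offset` would advance per chunk (accounting for the stride, as discussed in the comments below), so per-segment spans line up with the full audio.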
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20620/reactions",
"total_count": 7,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 1,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20620/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20620",
"html_url": "https://github.com/huggingface/transformers/pull/20620",
"diff_url": "https://github.com/huggingface/transformers/pull/20620.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20620.patch",
"merged_at": 1673967009000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20619
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20619/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20619/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20619/events
|
https://github.com/huggingface/transformers/issues/20619
| 1,478,935,254
|
I_kwDOCUB6oc5YJsLW
| 20,619
|
Stripping last some words from output of model.generate() method
|
{
"login": "ancil009",
"id": 114574269,
"node_id": "U_kgDOBtRDvQ",
"avatar_url": "https://avatars.githubusercontent.com/u/114574269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ancil009",
"html_url": "https://github.com/ancil009",
"followers_url": "https://api.github.com/users/ancil009/followers",
"following_url": "https://api.github.com/users/ancil009/following{/other_user}",
"gists_url": "https://api.github.com/users/ancil009/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ancil009/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ancil009/subscriptions",
"organizations_url": "https://api.github.com/users/ancil009/orgs",
"repos_url": "https://api.github.com/users/ancil009/repos",
"events_url": "https://api.github.com/users/ancil009/events{/privacy}",
"received_events_url": "https://api.github.com/users/ancil009/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hi @ancil009 👋 \r\n\r\nI've spent some time debugging, and here's what's happening:\r\nThe output is shorter because the model outputs `eos_token_id` at that point. In other words, it thinks it is done. In fact, if I modify the code to ignore `eos_token_id`, the output is as follows.\r\n\r\n```\r\nManager Hi sir I am Srikanth so all ready the team is discussed to develop a new feature of XYZ so that's why we discussed those new features about that project so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn to those features and to implement and as of you know my team members are very fast to learn about new things and in work purpose also my team members are very fast and so please give additional information to complete Manager Hi sir I am Srikanth. I am ready to discuss the team is discussed to develop a new feature of XYZ. The team is discussed. So, I am ready to discuss the new features about that project, so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn about the new features and in that case in the Manager Hi sir. Hi sir I am Sri Lankan. Hi sir. I am Sri Lankan. Hi sir. I am Sri Lankan. So, I am Sri Lankan. So, I am Sri Lankan. So, I am Sri Lankan. So, I am very fast and so please give me the team members are very fast and Manager Hi sir. Hi sir, Hi sir, Hi sir, Hi sir, Hi sir, Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. Hi sir, I am Sri Lankan. So, I am very fast and so please give me additional information about the new features of XYZ. Manager Hi sir. [...]\r\n```\r\n\r\nIn other words, the model starts repeating itself, which isn't helpful. From this, we can rule out code-related problems.\r\n\r\nDespite accepting infinite sequences, T5 has a relatively small attention window. 
Depending on the dataset the model was fine-tuned with, it might also be biased towards short sequences. Can you try splitting your input into multiple (smaller) sequences, to see if it helps?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
### System Info
Hi All,
I am using the pretrained "gec-t5_small" model for grammar error correction, but the output from the model.generate() method is truncated. Could anyone suggest a solution?
**Code**
```
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small", torch_dtype="auto")
tokenizer = T5Tokenizer.from_pretrained('t5-small',model_max_length=1024, torch_dtype="auto")
sentence = "600 character length sentence"
sentence = sentence.strip()
tokenized_sentence = tokenizer('gec: ' + sentence , max_length=1024, truncation=True, return_tensors='pt',add_special_tokens=True)
model_output = model.generate(
    input_ids=tokenized_sentence.input_ids,
    attention_mask=tokenized_sentence.attention_mask,
    max_new_tokens=1024,  # max_length also tried
    use_cache=True,
    num_beams=3,  # num_beams=5 also tried
    early_stopping=True,
    do_sample=False,
)
corrected_sentence = tokenizer.decode(
model_output[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
```
**Output**
sentence : "Manager Hi sir I am Srikanth so all ready the team is discussed to develop a new feature of XYZ so that's why we discussed a that new features about that project so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn to that features and to implement and as off you know my team members are very fast to learn about new things and in work purpose also my team members are very fast and so please give additional to complete about new features of our project and sorry for the delay but you give me additional time my team members are to give our more best about to our new features."
corrected_sentence : "Manager Hi sir I am Srikanth so all ready the team is discussed to develop a new feature of XYZ so that's why we discussed those new features about that project so in that case in my team to learn about the new projects topics to based on our discussion the project so it's take's time to learn to those features and to implement and as of you know my team members are very fast to learn about new things and in work purpose also my team members are very fast and so please give additional information to complete"
Please suggest a solution. Is there any way to increase the length of the output?
Note:- platform Azure databricks
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps:
1) Use the above mentioned code .
2) with sentence as input .
3) check output is deletion some word from end of the string.
### Expected behavior
The correction should return the full input sentence with correct grammar.
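Following the maintainer's suggestion in the comments to split the input into multiple smaller sequences, here is a hypothetical helper (not part of transformers; the regex-based sentence splitting and word budget are assumptions) that packs sentences into word-bounded chunks:

```python
import re


def split_into_chunks(text: str, max_words: int = 60) -> list:
    """Greedily pack sentences into chunks of at most `max_words` words,
    so each chunk stays well inside T5's effective attention window."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        words = sentence.split()
        # Flush the current chunk before it would exceed the word budget.
        if current and count + len(words) > max_words:
            chunks.append(' '.join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if current:
        chunks.append(' '.join(current))
    return chunks
```

Each chunk could then be prefixed with `'gec: '`, run through `model.generate` individually, and the decoded outputs joined, so no single call has to produce the whole 600-character correction at once.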
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20619/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20618
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20618/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20618/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20618/events
|
https://github.com/huggingface/transformers/issues/20618
| 1,478,905,442
|
I_kwDOCUB6oc5YJk5i
| 20,618
|
Incremental Training on model of my domain which I have fine tuned using run_mlm
|
{
"login": "SakshamSoni-code",
"id": 63503820,
"node_id": "MDQ6VXNlcjYzNTAzODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/63503820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SakshamSoni-code",
"html_url": "https://github.com/SakshamSoni-code",
"followers_url": "https://api.github.com/users/SakshamSoni-code/followers",
"following_url": "https://api.github.com/users/SakshamSoni-code/following{/other_user}",
"gists_url": "https://api.github.com/users/SakshamSoni-code/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SakshamSoni-code/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SakshamSoni-code/subscriptions",
"organizations_url": "https://api.github.com/users/SakshamSoni-code/orgs",
"repos_url": "https://api.github.com/users/SakshamSoni-code/repos",
"events_url": "https://api.github.com/users/SakshamSoni-code/events{/privacy}",
"received_events_url": "https://api.github.com/users/SakshamSoni-code/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### Model description
I have used run_mlm to fine-tune a model on my own domain. Now I want to pass an incremental flag in run_mlm: if that flag is true, then instead of fine-tuning from the base model I want to continue training from the already-trained model of my own domain that we got before. What changes do we need to make in run_mlm?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20618/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20617
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20617/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20617/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20617/events
|
https://github.com/huggingface/transformers/issues/20617
| 1,478,891,024
|
I_kwDOCUB6oc5YJhYQ
| 20,617
|
pre
|
{
"login": "SakshamSoni-invizAI",
"id": 114740372,
"node_id": "U_kgDOBtbMlA",
"avatar_url": "https://avatars.githubusercontent.com/u/114740372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SakshamSoni-invizAI",
"html_url": "https://github.com/SakshamSoni-invizAI",
"followers_url": "https://api.github.com/users/SakshamSoni-invizAI/followers",
"following_url": "https://api.github.com/users/SakshamSoni-invizAI/following{/other_user}",
"gists_url": "https://api.github.com/users/SakshamSoni-invizAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SakshamSoni-invizAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SakshamSoni-invizAI/subscriptions",
"organizations_url": "https://api.github.com/users/SakshamSoni-invizAI/orgs",
"repos_url": "https://api.github.com/users/SakshamSoni-invizAI/repos",
"events_url": "https://api.github.com/users/SakshamSoni-invizAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/SakshamSoni-invizAI/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"close"
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### Model descript
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20617/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20616
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20616/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20616/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20616/events
|
https://github.com/huggingface/transformers/pull/20616
| 1,478,861,856
|
PR_kwDOCUB6oc5EeBBZ
| 20,616
|
Cpmant test
|
{
"login": "pioliverse",
"id": 119836898,
"node_id": "U_kgDOBySQ4g",
"avatar_url": "https://avatars.githubusercontent.com/u/119836898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pioliverse",
"html_url": "https://github.com/pioliverse",
"followers_url": "https://api.github.com/users/pioliverse/followers",
"following_url": "https://api.github.com/users/pioliverse/following{/other_user}",
"gists_url": "https://api.github.com/users/pioliverse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pioliverse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pioliverse/subscriptions",
"organizations_url": "https://api.github.com/users/pioliverse/orgs",
"repos_url": "https://api.github.com/users/pioliverse/repos",
"events_url": "https://api.github.com/users/pioliverse/events{/privacy}",
"received_events_url": "https://api.github.com/users/pioliverse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20616/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20616",
"html_url": "https://github.com/huggingface/transformers/pull/20616",
"diff_url": "https://github.com/huggingface/transformers/pull/20616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20616.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20615
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20615/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20615/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20615/events
|
https://github.com/huggingface/transformers/issues/20615
| 1,478,817,868
|
I_kwDOCUB6oc5YJPhM
| 20,615
|
return_tensors and return_text in TextGenerationPipeline don't work or partially work
|
{
"login": "PanQiWei",
"id": 46810637,
"node_id": "MDQ6VXNlcjQ2ODEwNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/46810637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PanQiWei",
"html_url": "https://github.com/PanQiWei",
"followers_url": "https://api.github.com/users/PanQiWei/followers",
"following_url": "https://api.github.com/users/PanQiWei/following{/other_user}",
"gists_url": "https://api.github.com/users/PanQiWei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PanQiWei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PanQiWei/subscriptions",
"organizations_url": "https://api.github.com/users/PanQiWei/orgs",
"repos_url": "https://api.github.com/users/PanQiWei/repos",
"events_url": "https://api.github.com/users/PanQiWei/events{/privacy}",
"received_events_url": "https://api.github.com/users/PanQiWei/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is perfectly normal as any value being set will choose its value in order.\r\n\r\nboolean were a bad choice since some combinations don't mean anything.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L132",
"@Narsil, thanks for responding! \r\n\r\nWell then I think there may have some misguided on the [documentation](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextGenerationPipeline.__call__), where demonstrates `return_text`, `return_full_text` and `return_tensors` are boolean and default to True or False, also there is no pamareter called `return_type` in `__call__` but undert the hood it's the real one that decide what will be returned. And the document also not clearly demonstrates the relationship of `return_text` and `return_tensors`.\r\n\r\nAnd I remember back to the earlier versions (4.1x and earlier I think) we can decide what will be returned (only `generated_text`, only `generated_token_ids` or both of them) by using the combinations of the three parameters.",
"I may be wrong, but I think `return_type` is an internal parameter, but you can still decide what to return with the other three parameters.\r\n\r\nAs far as I can tell, you can't return a combination of `generated_text` and `generated_token_ids`. You can only return one or the other, which I guess is why some of those combinations don't do anything. Would it help if there was a note in the docs about this?",
"> I may be wrong, but I think `return_type` is an internal parameter, but you can still decide what to return with the other three parameters.\r\n> \r\n> As far as I can tell, you can't return a combination of `generated_text` and `generated_token_ids`. You can only return one or the other, which I guess is why some of those combinations don't do anything. Would it help if there was a note in the docs about this?\r\n\r\n@stevhliu thanks for the replying! Now I'm clear with the functionality and relationship between `return_text` and `return_tensors`, and I think it would be clear to more people if the documentation also target this out. 😄 ",
"Yes the docs could use some polish here, maybe even soft deprecate `return_text` & co in favor of `return_type`.\r\nSoft deprecate meaning we don't ever have to actually remove them, just don't make them as prominent since they are indeed confusing."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
- transformers version: 4.24.0
- python version: 3.8.11
### Who can help?
Library:
- Text generation: @patrickvonplaten, @Narsil, @gante
- Pipelines: @Narsil
Documentation: @sgugger, @stevhliu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. initialize TextGenerationPipeline, assume we call it `pipeline` below
2. run the following code snippets:
```python
results = pipeline(text_input, return_text=True, return_full_text=False, return_tensors=False)[0]
```
```python
results = pipeline(text_input, return_text=True, return_full_text=False, return_tensors=True)[0]
```
```python
results = pipeline(text_input, return_text=False, return_full_text=False, return_tensors=True)[0]
```
```python
results = pipeline(text_input, return_text=False, return_full_text=False, return_tensors=False)[0]
```
3. All four code snippets return the same dict with a single key, `generated_text`
### Expected behavior
1. when `return_text=True` and `return_tensors=False`, return a dict containing only the key `generated_text`
2. when `return_text=False` and `return_tensors=True`, return a dict containing only the key `generated_token_ids`
3. when `return_text=True` and `return_tensors=True`, return a dict containing both `generated_text` and `generated_token_ids`
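As the maintainer comment earlier in this thread explains, these booleans are not combined: whichever one is set resolves to a single return type in a fixed priority order. A minimal pure-Python sketch of that selection logic follows — the function name and the exact ordering are illustrative assumptions, not the pipeline's actual code:

```python
# Illustrative sketch, not the real pipeline implementation: each flag,
# if set, resolves to exactly one return type, so the booleans never
# combine into a dict holding both text and token ids.
def resolve_return_type(return_full_text=None, return_tensors=None, return_text=None):
    if return_full_text is not None:
        return "full_text" if return_full_text else "new_text"
    if return_tensors:
        return "tensors"
    return "text"
```

Under this (assumed) ordering, all four calls in the reproduction above resolve to a text-style return type, which matches the single `generated_text` key that was observed.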
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20615/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20614
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20614/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20614/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20614/events
|
https://github.com/huggingface/transformers/issues/20614
| 1,478,685,575
|
I_kwDOCUB6oc5YIvOH
| 20,614
|
Can we add an argument `min_new_tokens` to the `generate` function?
|
{
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"the `min_length` already does what you want the `min_new_tokens` does under the hood, so personally I don't understand why you like to add a new `min_new_tokens` and change what `min_length` original mean. But add `min_new_tokens` as an alias of `min_length` may be a good idea (but not necessary).",
"For my understanding, in the current implementation, `min_length` set the length limit of `len(promt) + len(generated tokens)`\r\n\r\nSee the implementation of [`MinLengthLogitsProcessor`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/logits_process.py#L119):\r\n```python\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:\r\n cur_len = input_ids.shape[-1]\r\n if cur_len < self.min_length:\r\n scores[:, self.eos_token_id] = -float(\"inf\")\r\n return scores\r\n```\r\n\r\n`input_ids` in the previous code block refers to `prompt + generated tokens`. For example, see the implemented of some decoding method for how logits processors are called. (See [`beam_search()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2818) or [`greedy_decoding()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2298))\r\n\r\nIt will be more convenient if we set an argument `min_new_tokens` to **only** limit the length of `generated tokens`, not `prompt + generated tokens`.\r\n\r\n---\r\nPlease correct me if I have missed something\r\n",
"cc @gante ",
"> For my understanding, in the current implementation, `min_length` set the length limit of `len(promt) + len(generated tokens)`\r\n> \r\n> See the implementation of [`MinLengthLogitsProcessor`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/logits_process.py#L119):\r\n> \r\n> ```python\r\n> def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:\r\n> cur_len = input_ids.shape[-1]\r\n> if cur_len < self.min_length:\r\n> scores[:, self.eos_token_id] = -float(\"inf\")\r\n> return scores\r\n> ```\r\n> \r\n> `input_ids` in the previous code block refers to `prompt + generated tokens`. For example, see the implemented of some decoding method for how logits processors are called. (See [`beam_search()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2818) or [`greedy_decoding()`](https://github.com/huggingface/transformers/blob/28f3d431d4b8b74a458a5583297d5101483edb74/src/transformers/generation/utils.py#L2298))\r\n> \r\n> It will be more convenient if we set an argument `min_new_tokens` to **only** limit the length of `generated tokens`, not `prompt + generated tokens`.\r\n> \r\n> Please correct me if I have missed something\r\n\r\n@silverriver yes you are right, its my mistake 😂\r\n\r\nBut I still think `min_new_tokens` and `min_length` should mean the same thing and also to `max_new_tokens` and `max_length` (though they are actually different now), because most people who use `model.generate` would think `min_length` means to 'at least generate min_length tokens' and `max_length` means to 'generate tokens no more than max_length'",
"@PanQiWei I agree, but I think it is impossible to change the current implementation of `max_length` and `min_length` for the conern of back compatibility.",
"Hey @silverriver @PanQiWei 👋 \r\n\r\nHaving `min_new_tokens` would certainly be a welcome change, for the same reason as `max_new_tokens`. It is clear what it does, regardless of the type of model, where `min_tokens`/`max_tokens` are not. In the long run, we'd like to deprecate `min_tokens`/`max_tokens` in favor of `min_new_tokens`/`max_new_tokens`.\r\n\r\nI'll have a look at your PRs :)",
"I have closed my original PR and made a new one (#21044 ) to avoid messing with other commits when I tried to rebase my change."
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### Feature request
Can we add a new parameter `min_new_tokens` to the `generate` function to limit the length of newly generated tokens? The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of `newly generated tokens`.
### Motivation
We already have `max_new_tokens` to limit the maximum length of the generated tokens, i.e., `max_length = max_new_tokens + len(prompt)`.
Why not add `min_new_tokens` to limit the minimum length of the generated tokens (i.e., `min_length = min_new_tokens + len(prompt)`)?
I know this is just another piece of syntactic sugar, but it would be much more convenient to have this parameter.
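To make the request concrete, here is a framework-agnostic sketch of the kind of logits-processor logic that could implement it. Plain Python lists stand in for one row of logits, and the function name is made up for illustration; it simply blocks EOS until enough *new* tokens exist, measured from the prompt length rather than from zero:

```python
def mask_eos_until_min_new_tokens(scores, cur_len, prompt_len, min_new_tokens, eos_token_id):
    """Block the EOS token until at least `min_new_tokens` tokens have been
    generated beyond the prompt. `scores` is one row of logits as a list."""
    if cur_len - prompt_len < min_new_tokens:
        scores = list(scores)                  # copy so the caller's row is untouched
        scores[eos_token_id] = float("-inf")   # EOS cannot be sampled yet
    return scores
```

The existing `MinLengthLogitsProcessor` applies the same masking but compares the total length (prompt + generated) against `min_length`, which is exactly the behavior this feature request wants to avoid.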
### Your contribution
I can submit a PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20614/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20613
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20613/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20613/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20613/events
|
https://github.com/huggingface/transformers/pull/20613
| 1,478,653,641
|
PR_kwDOCUB6oc5EdSwf
| 20,613
|
Ci-jukebox
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Just skips the 5b test, as there is not enough RAM on the CI instance.
Keeping the test is still important for local testing, IMO.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20613/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20613",
"html_url": "https://github.com/huggingface/transformers/pull/20613",
"diff_url": "https://github.com/huggingface/transformers/pull/20613.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20613.patch",
"merged_at": 1670339643000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20612
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20612/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20612/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20612/events
|
https://github.com/huggingface/transformers/pull/20612
| 1,478,614,310
|
PR_kwDOCUB6oc5EdJ9Z
| 20,612
|
Add DPT hybrid
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20612). All of your documentation changes will be reflected on that endpoint.",
"You should have opened your PR to go on the branch adding BiT and VitHybrid as the PR is not easy to review as it is.",
"Yes sorry :/ Let me open a PR on the other branch",
"Here is a much cleaner version of the PR: https://github.com/NielsRogge/transformers/pull/51 ;) ",
"Closing in favor of https://github.com/huggingface/transformers/pull/20645"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds DPT Hybrid support to `transformers`.
Do not merge until #20550 is merged.
cc @NielsRogge @sgugger @patrickvonplaten @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20612/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20612/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20612",
"html_url": "https://github.com/huggingface/transformers/pull/20612",
"diff_url": "https://github.com/huggingface/transformers/pull/20612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20612.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20611
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20611/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20611/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20611/events
|
https://github.com/huggingface/transformers/issues/20611
| 1,478,590,149
|
I_kwDOCUB6oc5YIX7F
| 20,611
|
ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation'
|
{
"login": "Furknn",
"id": 25117440,
"node_id": "MDQ6VXNlcjI1MTE3NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25117440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Furknn",
"html_url": "https://github.com/Furknn",
"followers_url": "https://api.github.com/users/Furknn/followers",
"following_url": "https://api.github.com/users/Furknn/following{/other_user}",
"gists_url": "https://api.github.com/users/Furknn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Furknn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Furknn/subscriptions",
"organizations_url": "https://api.github.com/users/Furknn/orgs",
"repos_url": "https://api.github.com/users/Furknn/repos",
"events_url": "https://api.github.com/users/Furknn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Furknn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante if you have any idea.",
"Hi @Furknn 👋 \r\n\r\nI have tried to reproduce this in my local machine (current `main` branch) in a local notebook (with `transformers==4.25.1`), using the following script:\r\n\r\n```python\r\nimport tensorflow,torch\r\nfrom transformers import AutoTokenizer, AutoModel\r\nmodel = AutoModel.from_pretrained(\"gpt2\", from_tf=True)\r\n```\r\n\r\nIn both cases, no exception was thrown. Can I ask you to reinstall `transformers` and, if the issue persists, to share a script I can call on my end where I can reproduce the issue? :)",
"I have reinstalled transformers version 4.25.1 and tried. It works correctly now.\r\n\r\nThanks"
] | 1,670
| 1,672
| 1,672
|
NONE
| null |
### System Info
# Info
- `transformers` version: 4.25.1
- Platform: Linux-6.0.8-1-MANJARO-x86_64-with-glibc2.36
- Python version: 3.10.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
# Problem
`ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation'`
I am getting this error while loading a pretrained TensorFlow model as shown below.
```python
import tensorflow,torch
from transformers import AutoTokenizer, AutoModel
model = AutoModel.from_pretrained("model-name", from_tf=True)
```
Model is located in a local folder
```
model-name\
config.json
tf-model.h5
```
# Stacktrace
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[20], line 5
2 from transformers import AutoTokenizer, AutoModel
4 # load the model
----> 5 model = AutoModel.from_pretrained("model-name", from_tf=True)
File ~/project/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 elif type(config) in cls._model_mapping.keys():
462 model_class = _get_model_class(config, cls._model_mapping)
--> 463 return model_class.from_pretrained(
464 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
465 )
466 raise ValueError(
467 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
468 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
469 )
File ~/project/venv/lib/python3.10/site-packages/transformers/modeling_utils.py:2344, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2341 try:
2342 from .modeling_tf_pytorch_utils import load_tf2_checkpoint_in_pytorch_model
-> 2344 model, loading_info = load_tf2_checkpoint_in_pytorch_model(
2345 model, resolved_archive_file, allow_missing_keys=True, output_loading_info=True
2346 )
2347 except ImportError:
2348 logger.error(
2349 "Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed."
2350 " Please see https://pytorch.org/ and https://www.tensorflow.org/install/ for installation"
2351 " instructions."
2352 )
File ~/project/venv/lib/python3.10/site-packages/transformers/modeling_tf_pytorch_utils.py:359, in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys, output_loading_info)
355 raise
357 import transformers
--> 359 from .modeling_tf_utils import load_tf_weights
361 logger.info(f"Loading TensorFlow weights from {tf_checkpoint_path}")
363 # Instantiate and load the associated TF 2.0 model
File ~/project/venv/lib/python3.10/site-packages/transformers/modeling_tf_utils.py:42
40 from .configuration_utils import PretrainedConfig
41 from .dynamic_module_utils import custom_object_save
---> 42 from .generation import TFGenerationMixin
43 from .tf_utils import shape_list
44 from .utils import (
45 DUMMY_INPUTS,
46 SAFE_WEIGHTS_INDEX_NAME,
(...)
63 working_or_temp_dir,
64 )
ImportError: cannot import name 'TFGenerationMixin' from 'transformers.generation'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Can be reproduced by loading a TensorFlow model in local storage using
```
AutoModel.from_pretrained()
```
### Expected behavior
I have trained a TFBertForMaskedLM model with a custom dataset on Google Colab. I saved the weights of this model by calling
```python
model.save_pretrained()
```
Now I want to load it on my local machine and use it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20611/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20610
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20610/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20610/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20610/events
|
https://github.com/huggingface/transformers/issues/20610
| 1,478,585,946
|
I_kwDOCUB6oc5YIW5a
| 20,610
|
MBART pretrained model is unable to produce output in the target language
|
{
"login": "haqsaiful",
"id": 79313734,
"node_id": "MDQ6VXNlcjc5MzEzNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/79313734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haqsaiful",
"html_url": "https://github.com/haqsaiful",
"followers_url": "https://api.github.com/users/haqsaiful/followers",
"following_url": "https://api.github.com/users/haqsaiful/following{/other_user}",
"gists_url": "https://api.github.com/users/haqsaiful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haqsaiful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haqsaiful/subscriptions",
"organizations_url": "https://api.github.com/users/haqsaiful/orgs",
"repos_url": "https://api.github.com/users/haqsaiful/repos",
"events_url": "https://api.github.com/users/haqsaiful/events{/privacy}",
"received_events_url": "https://api.github.com/users/haqsaiful/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"You also need to set the `tokenizer.tgt_lang` I believe.\r\nAlso cc @ArthurZucker ",
"I think you are just using the wrong checkpoint. \r\nUsing the `\"facebook/mbart-large-50-many-to-many-mmt\"` I obtain the following : \r\n```યુનાઇટેડ સ્ટેટ્સ ઓફ અમેરિકાના પ્રાંતિકારી کہتے हैं कि सीरिया में कोई सैन्य समाधान नहीं है```\r\nwhich, according to Google is Gujarati!. ",
"@ArthurZucker \"facebook/mbart-large-50-many-to-many-mmt\" is fine tuned checkpoint. I am trying with a pretrained checkpoint which is \"facebook/mbart-large-50\". \r\n\r\nThe pretrained checkpoint should also be able to give output in the target language if we force the BOS token to the target language. The output may be little bit distorted but that's fine. Here, its giving the output same as the source language. ",
"> The pretrained checkpoint should also be able to give output in the target language if we force the BOS token to the target language \r\n\r\nI think this depends on the language since it is a `pretrained checkpoint` as mentioned on the model card : \r\n> `mbart-large-50` is pre-trained model and primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.\r\n\r\nSince it works totally fine with the fine-tuned checkpoint, this is not a bug.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, is there some mismatch between the tokenizer of `facebook/mbart-large-50` and `shift_tokens_right` of `MBartForConditionalGeneration`? Since the tokenizer of `facebook/mbart-large-en-ro` would give **X [eos, src_lang_code]** while `facebook/mbart-large-50`'s tokenizer would give **[src_lang_code] X [eos]**, but they both use the same `shift_tokens_right` method which I believe is only suitable for input like this **X [eos, src_lang_code]** :\r\n```python\r\n\r\ndef shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int):\r\n \"\"\"\r\n Shift input ids one token to the right, and wrap the last non pad token (the <LID> token) Note that MBart does not\r\n have a single `decoder_start_token_id` in contrast to other Bart-like models.\r\n \"\"\"\r\n prev_output_tokens = input_ids.clone()\r\n\r\n if pad_token_id is None:\r\n raise ValueError(\"self.model.config.pad_token_id has to be defined.\")\r\n # replace possible -100 values in labels by `pad_token_id`\r\n prev_output_tokens.masked_fill_(prev_output_tokens == -100, pad_token_id)\r\n\r\n index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)\r\n decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()\r\n prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].clone()\r\n prev_output_tokens[:, 0] = decoder_start_tokens\r\n\r\n return prev_output_tokens\r\n```",
"Indeed. But as mentioned in the documentation : \r\n> The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix for both source and target text i.e the text format is [lang_code] X [eos], where lang_code is source language id for source text and target language id for target text, with X being the source or target text respectively.\r\nWhile \r\n> For MBart [...] the source text format is X [eos, src_lang_code] where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used.\r\n\r\nWhich is why they don't have the same tokenization scheme.\r\nI checked that when generating, the `forced_decoder_id` properly works, and I think this issue can be closed as there are no guarantee that a certain pair of language will produce intelligible result as the checkpoints are pretrained. \r\n\r\n\r\n",
"Hi, thanks for the comments!\r\nIt is true that using MBart-50 to do generation with proper `forced_decoder_id` works. But it doesn't work on supervised learning scenarios. When there is no `decoder_input_ids` for training, Mbart-50 would automatically create`decoder_input_ids` from `labels` which follows the tokenization scheme of Mbart rather than Mbart-50. And I think this should be fixed.\r\n<img width=\"770\" alt=\"MBart and MBart-50 2023-01-30 17-49-21\" src=\"https://user-images.githubusercontent.com/38466901/215444119-90199c9d-baa2-421d-86be-0d0e4e585e2c.png\">\r\n\r\n",
"I am not sure I understand. When the `decoder_input_ids` are created from the `labels`, they are a shifted version. \r\nLet's use the example: \r\n- src_text : `'en_XX UN Chief Says There Is No Military Solution in Syria</s>'`\r\n- labels : `'ro_RO Şeful ONU declară că nu există o soluţie militară în Siria</s>'`\r\n- [shifted labels](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mbart/modeling_mbart.py#L1348-L1349) : `'</s>ro_RO Şeful ONU declară că nu există o soluţie militară în Siria'` (= decoder_inputs_ids)\r\nThis means that the `shifted_labels` will follow the correct pattern (which you enforce when generating). \r\n\r\n\r\n",
"Sorry, my bad. You are right. I mistakenly thought the generation schema of MBart-50 is the same as MBart, whose `decoder_start_token_id` is the `lang_id`.",
"The MBart50 AI model is not translating the entire document; it is cutting it in half. How can we fix this?",
"Hey! Could you open a new issue with a reproducer for this? 😉 "
] | 1,670
| 1,691
| 1,675
|
NONE
| null |
Hi,
I am using mbart-large-50 for a generation task. The source language is Hindi and the target language is Gujarati. However, I always get the output in Hindi. Even though this is a pretrained model, I would expect at least a few tokens in the target language, since I am forcing the BOS token to the target language.
Sharing the code that I am using for this task.
```python
# translate Hindi to Gujarati
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")
tokenizer.src_lang = "hi_IN"

article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20610/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20609
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20609/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20609/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20609/events
|
https://github.com/huggingface/transformers/issues/20609
| 1,478,517,009
|
I_kwDOCUB6oc5YIGER
| 20,609
|
Data to text representation considers only first 2 triplets
|
{
"login": "jyotibhat1",
"id": 110028655,
"node_id": "U_kgDOBo7nbw",
"avatar_url": "https://avatars.githubusercontent.com/u/110028655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jyotibhat1",
"html_url": "https://github.com/jyotibhat1",
"followers_url": "https://api.github.com/users/jyotibhat1/followers",
"following_url": "https://api.github.com/users/jyotibhat1/following{/other_user}",
"gists_url": "https://api.github.com/users/jyotibhat1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jyotibhat1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jyotibhat1/subscriptions",
"organizations_url": "https://api.github.com/users/jyotibhat1/orgs",
"repos_url": "https://api.github.com/users/jyotibhat1/repos",
"events_url": "https://api.github.com/users/jyotibhat1/events{/privacy}",
"received_events_url": "https://api.github.com/users/jyotibhat1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep the issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
Hello,
I trained t5-base on the WebNLG 2020 dataset, which provides data in the form of multiple triplets. When a query is made to the model in the same format, it describes only the first 2 triplets and ignores the rest. Is this a config issue?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20609/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20608
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20608/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20608/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20608/events
|
https://github.com/huggingface/transformers/issues/20608
| 1,478,319,336
|
I_kwDOCUB6oc5YHVzo
| 20,608
|
Is it possible to add simple custom pytorch-crf layer on top of TokenClassification model. It will make the model more robust.
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi,\r\n\r\nPlease use the forum for these kind of questions. We'd like to keep Github issues for bugs and feature requests.\r\n\r\nThanks!",
"> Hi,\r\n> \r\n> Please use the forum for these kind of questions. We'd like to keep Github issues for bugs and feature requests.\r\n> \r\n> Thanks!\r\n\r\nThis is kind of feature request only. @NielsRogge ",
"Models are fully defined in each modeling file in an independent fashion so you can easily copy/paste them and then customize them to your need :-)"
] | 1,670
| 1,670
| null |
NONE
| null |
### Model description
Is it possible to add a simple custom `pytorch-crf` layer on top of a `TokenClassification model`? It would make the model more robust.
There should be a simple `Notebook tutorial` that teaches us to add our own `custom layer` on top of `Hugging face models` for
- Classification
- Token Classification (BIO)

By taking an example from `dslim/bert-base-NER` and then adding `from torchcrf import CRF` on top of it.
I am planning to do this, but I don't know how to get this feature coded. Any leads or a Notebook example would be helpful.
```python
import torch
import torch.nn as nn
from torchcrf import CRF

log_soft = nn.functional.log_softmax

model_checkpoint = "dslim/bert-base-NER"
tokenizer = BertTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)
config = BertConfig.from_pretrained(model_checkpoint, output_hidden_states=True)
bert_model = BertForTokenClassification.from_pretrained(
    model_checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)


class BERT_CRF(nn.Module):
    def __init__(self, bert_model, num_labels):
        super(BERT_CRF, self).__init__()
        self.bert = bert_model
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(4 * 768, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        # concatenate the last four hidden states
        sequence_output = torch.cat(
            (outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4]), -1
        )
        sequence_output = self.dropout(sequence_output)
        emission = self.classifier(sequence_output)  # [32, 256, 17]
        if labels is not None:
            labels = labels.reshape(attention_mask.size()[0], attention_mask.size()[1])
            loss = -self.crf(log_soft(emission, 2), labels,
                             mask=attention_mask.type(torch.uint8), reduction='mean')
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return [loss, prediction]
        else:
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return prediction
```
```
args = TrainingArguments(
"spanbert_crf_ner-pos2",
# evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
num_train_epochs=1,
weight_decay=0.01,
per_device_train_batch_size=8,
# per_device_eval_batch_size=32
fp16=True
# bf16=True #Ampere GPU
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_data,
# eval_dataset=train_data,
# data_collator=data_collator,
# compute_metrics=compute_metrics,
tokenizer=tokenizer)
```
I get an error on the line `sequence_output = torch.cat((outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4]), -1)`,
since `outputs = self.bert(input_ids, attention_mask=attention_mask)` returns the logits for token classification. How can we get the hidden states, so that I can concatenate the last 4 hidden states via `outputs[1][-1]`?
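As a side note for anyone landing here: a minimal sketch of how the hidden states can be obtained. It uses a randomly initialized toy `BertConfig` (the sizes below are made up purely for illustration, not taken from `dslim/bert-base-NER`); the key point is that with `output_hidden_states=True` the model output exposes a `hidden_states` tuple whose last four entries can be concatenated:

```python
import torch
from transformers import BertConfig, BertForTokenClassification

# hypothetical tiny config, randomly initialized, just to illustrate the output shape
config = BertConfig(hidden_size=32, num_hidden_layers=4, num_attention_heads=4,
                    intermediate_size=64, vocab_size=100, num_labels=5,
                    output_hidden_states=True)
model = BertForTokenClassification(config)

input_ids = torch.randint(0, 100, (2, 8))  # batch of 2, sequence length 8
with torch.no_grad():
    outputs = model(input_ids=input_ids)

# hidden_states is a tuple of num_hidden_layers + 1 tensors (embeddings + each layer)
assert len(outputs.hidden_states) == 5
# concatenate the last four hidden states along the feature dimension
sequence_output = torch.cat(outputs.hidden_states[-4:], dim=-1)  # [2, 8, 4 * hidden_size]
```

The same `hidden_states` attribute is available on the real checkpoint once `output_hidden_states=True` is set in its config or forward call.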
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20608/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20608/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20607
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20607/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20607/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20607/events
|
https://github.com/huggingface/transformers/pull/20607
| 1,478,082,625
|
PR_kwDOCUB6oc5EbTRd
| 20,607
|
Documentation fixes
|
{
"login": "samuelzxu",
"id": 14795989,
"node_id": "MDQ6VXNlcjE0Nzk1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/14795989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelzxu",
"html_url": "https://github.com/samuelzxu",
"followers_url": "https://api.github.com/users/samuelzxu/followers",
"following_url": "https://api.github.com/users/samuelzxu/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelzxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelzxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelzxu/subscriptions",
"organizations_url": "https://api.github.com/users/samuelzxu/orgs",
"repos_url": "https://api.github.com/users/samuelzxu/repos",
"events_url": "https://api.github.com/users/samuelzxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelzxu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR just fixes some typos in the documentation.
Please note: Apart from the typos in the *paragraphs*, the other changes were because of significantly differing results I got from running the examples. For instance, "aweful" didn't result in a high negative score, but "awful" did.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20607/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20607",
"html_url": "https://github.com/huggingface/transformers/pull/20607",
"diff_url": "https://github.com/huggingface/transformers/pull/20607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20607.patch",
"merged_at": 1670329967000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20606
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20606/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20606/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20606/events
|
https://github.com/huggingface/transformers/pull/20606
| 1,478,018,329
|
PR_kwDOCUB6oc5EbFN3
| 20,606
|
Adding anchor links to Hindi README
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
1. Adding anchor links to Hindi README
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20606/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20606",
"html_url": "https://github.com/huggingface/transformers/pull/20606",
"diff_url": "https://github.com/huggingface/transformers/pull/20606.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20606.patch",
"merged_at": 1670330185000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20605
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20605/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20605/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20605/events
|
https://github.com/huggingface/transformers/pull/20605
| 1,477,778,528
|
PR_kwDOCUB6oc5EaOvJ
| 20,605
|
Clip floating point constants to bf16 range to avoid inf conversion
|
{
"login": "sangeethabal",
"id": 83724701,
"node_id": "MDQ6VXNlcjgzNzI0NzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83724701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sangeethabal",
"html_url": "https://github.com/sangeethabal",
"followers_url": "https://api.github.com/users/sangeethabal/followers",
"following_url": "https://api.github.com/users/sangeethabal/following{/other_user}",
"gists_url": "https://api.github.com/users/sangeethabal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sangeethabal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangeethabal/subscriptions",
"organizations_url": "https://api.github.com/users/sangeethabal/orgs",
"repos_url": "https://api.github.com/users/sangeethabal/repos",
"events_url": "https://api.github.com/users/sangeethabal/events{/privacy}",
"received_events_url": "https://api.github.com/users/sangeethabal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
When running the HuggingFace BERT (any size) fine-tuning tutorial with transformers version >= 4.21.0 and XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, I see NaNs in the loss after the first step.
# What does this PR do?
This PR addresses the issue where the model code passes a value that is out of range for XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, so the conversion would cast it to -inf.
The NaNs likely come from the transformers library change: https://github.com/huggingface/transformers/pull/17306 . This PR replaced many lines which used to be -float(inf) (or other small constants) with torch.finfo().min. For torch.float32 the min value is -3.4028234663852886e+38 which is smaller than the bfloat16 minimum of -3.3895313892515355e+38. So the problem is that torch.finfo(torch.float32).min = -3.4028234663852886e+38 gets converted to -inf. When the original encoder_extended_attention_mask is 1, then encoder_extended_attention_mask becomes (1.0 - 1.0 ) * -inf which becomes NaN (via IEEE rule Inf * 0.0 = NaN).
This PR ensures torch.finfo(torch.bfloat16).min = -3.3895313892515355e+38 and not -inf. Then the results would not have Nans.
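For reference, the overflow described above can be reproduced in plain PyTorch (no XLA needed), since it is the float32-to-bfloat16 cast itself that produces the infinity:

```python
import torch

fp32_min = torch.tensor(torch.finfo(torch.float32).min)  # -3.4028e+38
as_bf16 = fp32_min.to(torch.bfloat16)

# float32 min lies outside the bfloat16 finite range, so the cast overflows to -inf
assert torch.isinf(as_bf16)

# (1.0 - mask) * -inf with mask == 1.0 gives 0.0 * -inf == NaN (IEEE rule)
mask = torch.tensor(1.0, dtype=torch.bfloat16)
attn_bias = (1.0 - mask) * as_bf16
assert torch.isnan(attn_bias)

# clamping to the bfloat16 minimum first, as this PR does, keeps the value finite
clipped = torch.clamp(fp32_min, min=torch.finfo(torch.bfloat16).min).to(torch.bfloat16)
assert torch.isfinite(clipped)
```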
The following lines check for the XLA_USE_BF16 or XLA_DOWNCAST_BF16 environment variable and set the dtype accordingly (note that environment variables are strings, hence the comparison against `"1"`):
```python
if is_torch_tpu_available():
    if os.environ.get("XLA_USE_BF16") == "1":
        return torch.bfloat16
    if os.environ.get("XLA_DOWNCAST_BF16") == "1":
        if t.dtype == torch.float:
            return torch.bfloat16
        if t.dtype == torch.double:
            return torch.float32
```
Referencing related issues: https://github.com/aws-neuron/aws-neuron-sdk/issues/593 and https://github.com/pytorch/xla/issues/4152
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20605/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20605",
"html_url": "https://github.com/huggingface/transformers/pull/20605",
"diff_url": "https://github.com/huggingface/transformers/pull/20605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20605.patch",
"merged_at": 1670365527000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20604
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20604/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20604/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20604/events
|
https://github.com/huggingface/transformers/pull/20604
| 1,477,652,857
|
PR_kwDOCUB6oc5EZyIg
| 20,604
|
Fix test for file not found
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging to have tests passing on main, but I will address any comment in followup PRs :-)"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
The test for file not found in the TensorFlow auto model tests is failing on main as the message does not match exactly (see [here](https://app.circleci.com/pipelines/github/huggingface/transformers/53015/workflows/6ea05b10-a541-46db-bcce-b93dc654610e/jobs/636205)). This PR fixes that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20604/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20604",
"html_url": "https://github.com/huggingface/transformers/pull/20604",
"diff_url": "https://github.com/huggingface/transformers/pull/20604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20604.patch",
"merged_at": 1670283236000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20603
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20603/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20603/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20603/events
|
https://github.com/huggingface/transformers/pull/20603
| 1,477,439,362
|
PR_kwDOCUB6oc5EZBxe
| 20,603
|
Update the list of contributors to reflect current organization
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This PR updates the list of who to tag on PRs/Issues. With the growing number of models, I chose to split them through modality.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20603/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20603",
"html_url": "https://github.com/huggingface/transformers/pull/20603",
"diff_url": "https://github.com/huggingface/transformers/pull/20603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20603.patch",
"merged_at": 1670511943000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20602
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20602/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20602/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20602/events
|
https://github.com/huggingface/transformers/pull/20602
| 1,477,322,503
|
PR_kwDOCUB6oc5EYnXt
| 20,602
|
Fix dtype of weights in from_pretrained when device_map is set
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There is no more safetensors at this stage, (`is_safetensors` means the checkpoint comes from safetensors, but the state dict is a dictionary name to parameter in this case as well)."
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
As reported in #20390, the dtype of the weights after `from_pretrained` is used for a checkpoint is inconsistent between `device_map=None` or `device_map` set:
- `device_map=None` (which uses `nn.Module.load_state_dict`) will keep the dtype of the model the same, even if the checkpoint is in a different dtype (so loading a float16 checkpoint in a float32 model gives a float32 model)
- `device_map` set (which manually sets the parameters) will change the dtype of the model to the dtype of the checkpoint (so loading a float16 checkpoint in a float32 model gives a float16 model).
This PR addresses this.
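For context, the first behaviour is plain PyTorch semantics: `nn.Module.load_state_dict` copies checkpoint tensors into the existing parameters, casting them to the module's dtype, whereas assigning parameters directly (roughly what the `device_map` path effectively did) keeps the checkpoint dtype. A minimal sketch:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # parameters are float32 by default
fp16_ckpt = {k: v.half() for k, v in model.state_dict().items()}  # float16 "checkpoint"

# load_state_dict copies into the existing float32 parameters, casting the values
model.load_state_dict(fp16_ckpt)
assert model.weight.dtype == torch.float32  # module dtype preserved

# assigning the parameter directly instead keeps the checkpoint dtype
model.weight = nn.Parameter(fp16_ckpt["weight"])
assert model.weight.dtype == torch.float16
```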
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20602/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20602/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20602",
"html_url": "https://github.com/huggingface/transformers/pull/20602",
"diff_url": "https://github.com/huggingface/transformers/pull/20602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20602.patch",
"merged_at": 1670346977000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20601
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20601/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20601/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20601/events
|
https://github.com/huggingface/transformers/pull/20601
| 1,477,205,260
|
PR_kwDOCUB6oc5EYMuk
| 20,601
|
updating T5 and BART models to support Prefix Tuning
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, just for reference could you provide a link to an issue or something explaining what `prefix tuning` is?"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
1. updating T5 and BART models to support Prefix Tuning. Currently, passing `past_key_value` fails. This PR fixes it. Doesn't impact any current functionality.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20601/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20601",
"html_url": "https://github.com/huggingface/transformers/pull/20601",
"diff_url": "https://github.com/huggingface/transformers/pull/20601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20601.patch",
"merged_at": 1670331280000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20600
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20600/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20600/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20600/events
|
https://github.com/huggingface/transformers/pull/20600
| 1,477,159,225
|
PR_kwDOCUB6oc5EYCYA
| 20,600
|
Add-whisper-conversion
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"See the new checkpoints : https://huggingface.co/openai/whisper-large-v2 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20600). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Add the Whisper conversion script, which was deleted during the sprint. See this [commit](https://github.com/huggingface/transformers/pull/19166/commits/f92b9a8181f9a84114becd31a5a4210723cdf1ad).
This will help for the Whisper Event!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20600/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20600",
"html_url": "https://github.com/huggingface/transformers/pull/20600",
"diff_url": "https://github.com/huggingface/transformers/pull/20600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20600.patch",
"merged_at": 1670266977000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20599
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20599/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20599/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20599/events
|
https://github.com/huggingface/transformers/pull/20599
| 1,477,105,007
|
PR_kwDOCUB6oc5EX2Rp
| 20,599
|
[Whisper] Fix decoder ids methods
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,687
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
The previous PR https://github.com/huggingface/transformers/pull/20589 incorrectly returned a list of forced decoder ids:
```python
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
print(processor.get_decoder_prompt_ids(task="transcribe"))
```
**Print Output:**
```
[50257, 50358, 50362]
```
The correct format is a list of pairs, where the first element of each pair specifies the position of the forced token and the second the token id:
```python
print(processor.get_decoder_prompt_ids(task="transcribe"))
```
**Print Output:**
```
[(1, 50257), (2, 50358), (3, 50362)]
```
(at position 1 we force token 50257, at 2 we force 50358, at 3 we force 50362)
The PR also implements a test, thus making sure that no such error can be made again 😅
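For illustration, the mapping from a flat list of prompt token ids to the `(position, token_id)` pairs can be sketched with a small helper (hypothetical, not part of the library; positions start at 1 because position 0 is taken by the decoder start token):

```python
def to_forced_decoder_ids(prompt_token_ids):
    # Position 0 is the decoder start token, so forcing begins at
    # position 1; each entry is a (position, token_id) pair.
    return [(rank + 1, token) for rank, token in enumerate(prompt_token_ids)]

print(to_forced_decoder_ids([50257, 50358, 50362]))
# [(1, 50257), (2, 50358), (3, 50362)]
```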
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20599/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20599",
"html_url": "https://github.com/huggingface/transformers/pull/20599",
"diff_url": "https://github.com/huggingface/transformers/pull/20599.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20599.patch",
"merged_at": 1670265922000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20598
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20598/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20598/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20598/events
|
https://github.com/huggingface/transformers/pull/20598
| 1,477,093,821
|
PR_kwDOCUB6oc5EXzz2
| 20,598
|
Fix `get_decoder_prompt_ids` in whisper
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I found this cause I got the following error when running the code in main branch\r\n\r\n```\r\nFile ~/transformers/src/transformers/generation/utils.py:867, in GenerationMixin._get_logits_processor(self, repetition_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, input_ids_seq_length, encoder_input_ids, bad_words_ids, min_length, max_length, eos_token_id, forced_bos_token_id, forced_eos_token_id, prefix_allowed_tokens_fn, num_beams, num_beam_groups, diversity_penalty, remove_invalid_values, exponential_decay_length_penalty, logits_processor, renormalize_logits, suppress_tokens, begin_suppress_tokens, forced_decoder_ids)\r\n 865 begin_index = begin_index if (input_ids_seq_length > 1 or forced_bos_token_id is None) else begin_index + 1\r\n 866 if forced_decoder_ids is not None:\r\n--> 867 begin_index += forced_decoder_ids[-1][0] # generation starts after the last token that is forced\r\n 868 processors.append(SuppressTokensAtBeginLogitsProcessor(begin_suppress_tokens, begin_index))\r\n 869 if forced_decoder_ids is not None:\r\n\r\nTypeError: 'int' object is not subscriptable\r\n```",
"Duplicate of https://github.com/huggingface/transformers/pull/20599",
"Hey @bofenghuang! Sorry about that, hoping to merge the fix ASAP",
"@sanchit-gandhi no problem, thanks for the quick fix!"
] | 1,670
| 1,673
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Hi @sanchit-gandhi,
I think there is one missing line in https://github.com/huggingface/transformers/pull/20589. I've added it back in this PR.
The `forced_decoder_ids` should be something like `[[<token/position>, <token/id>], ...]`, but `prefix_tokens` returns only the token ids.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20598/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20598",
"html_url": "https://github.com/huggingface/transformers/pull/20598",
"diff_url": "https://github.com/huggingface/transformers/pull/20598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20598.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20597
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20597/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20597/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20597/events
|
https://github.com/huggingface/transformers/pull/20597
| 1,477,083,322
|
PR_kwDOCUB6oc5EXxfM
| 20,597
|
Fix `AutomaticSpeechRecognitionPipelineTests.run_pipeline_test`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you !"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Fix `AutomaticSpeechRecognitionPipelineTests.run_pipeline_test` which was changed in #19570 and #20104.
See the comments in this PR's changes.
I detected this while working on improving pipeline tests using tiny models. Previously, `Speech2TextConfig` was not used in ASR pipeline tests, but now it is, and it gives errors without this PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20597/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20597",
"html_url": "https://github.com/huggingface/transformers/pull/20597",
"diff_url": "https://github.com/huggingface/transformers/pull/20597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20597.patch",
"merged_at": 1670338130000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20596
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20596/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20596/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20596/events
|
https://github.com/huggingface/transformers/pull/20596
| 1,476,881,704
|
PR_kwDOCUB6oc5EXEX2
| 20,596
|
Remove unused `classifier_dropout` in configs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Similar to #20554, but this time for `classifier_dropout`.
The existing checkpoints with this attribute in their config files can still be loaded, since the attribute is absorbed via `**kwargs`, so loading won't fail.
@sgugger If you would prefer me to clean up multiple unused config attributes in a single PR, let me know 😉
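As a rough sketch of why removing the attribute stays backward compatible (a simplified stand-in for `PretrainedConfig`, not the real class): unknown keys from a saved config file land in `**kwargs` and are set as plain attributes, so old checkpoints keep loading:

```python
class TinyConfig:
    """Simplified stand-in for `PretrainedConfig` (an assumption, not the real class)."""

    def __init__(self, hidden_size=32, **kwargs):
        self.hidden_size = hidden_size
        # Keys no longer declared explicitly (e.g. a removed
        # `classifier_dropout`) still arrive here and become attributes,
        # so configs that contain them load without error.
        for key, value in kwargs.items():
            setattr(self, key, value)


config = TinyConfig(hidden_size=64, classifier_dropout=0.1)
print(config.classifier_dropout)  # 0.1
```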
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20596/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20596",
"html_url": "https://github.com/huggingface/transformers/pull/20596",
"diff_url": "https://github.com/huggingface/transformers/pull/20596.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20596.patch",
"merged_at": 1670259873000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20595
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20595/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20595/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20595/events
|
https://github.com/huggingface/transformers/pull/20595
| 1,476,840,732
|
PR_kwDOCUB6oc5EW7Ps
| 20,595
|
Fix whisper and speech to text doc
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Previously the documentation was badly indented for both models and indicated that
> If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value of `inputs_embeds`.

This is only valid for the forward pass of the `ForConditionalGeneration` variant, not for the model alone.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20595/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20595",
"html_url": "https://github.com/huggingface/transformers/pull/20595",
"diff_url": "https://github.com/huggingface/transformers/pull/20595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20595.patch",
"merged_at": 1670261016000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20594
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20594/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20594/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20594/events
|
https://github.com/huggingface/transformers/issues/20594
| 1,476,728,692
|
I_kwDOCUB6oc5YBRd0
| 20,594
|
Transformers model inference via pipeline not releasing memory after 2nd call. Leads to memory leak and crash in Flask web app
|
{
"login": "farazk86",
"id": 33456896,
"node_id": "MDQ6VXNlcjMzNDU2ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/33456896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farazk86",
"html_url": "https://github.com/farazk86",
"followers_url": "https://api.github.com/users/farazk86/followers",
"following_url": "https://api.github.com/users/farazk86/following{/other_user}",
"gists_url": "https://api.github.com/users/farazk86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farazk86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farazk86/subscriptions",
"organizations_url": "https://api.github.com/users/farazk86/orgs",
"repos_url": "https://api.github.com/users/farazk86/repos",
"events_url": "https://api.github.com/users/farazk86/events{/privacy}",
"received_events_url": "https://api.github.com/users/farazk86/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, you are loading the model twice. \r\n\r\nDepending on how you launch your Flask webserver, you will use threads or processes. Each request will reach a different thread/process and each will load all dependencies (including torch), which by itself is like 300MB. So you could indeed easily blow the amount of memory required.\r\n\r\nWhat we usually recommend is this (it will be in the actual docs soon): https://github.com/huggingface/transformers/pull/20437\r\n\r\nMake sure your model is loaded once on a single thread/process. This can be achieved in many ways.\r\n\r\nHere you are loading your model at runtime, which will also make requests much slower than intended. I would recommend loading it beforehand during load time of the actual webserver. This doesn't really apply if you want to run models dynamically, but you could still apply the single-thread technique, which should limit your memory requirements.\r\n\r\nDoes that answer your question?",
"Thank you @Narsil for your suggestion. You were right and loading the model only once at the app level solved the memory issue and is also much faster in handling every request :)\r\n\r\nThe example you showed in documentation is using ``starlette``. I achieved the same in Flask using below.. just adding the model loading lines at app level instead of inside a function. Below is the updated version of my minimal example:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification\r\nfrom transformers import pipeline\r\nfrom flask import Flask\r\n\r\n\r\n\r\napp = Flask(__name__)\r\ntokenizer = AutoTokenizer.from_pretrained(\"./modelfiles\")\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"./modelfiles\")\r\n\r\n\r\ndef model_test(text):\r\n nlp = pipeline(\"token-classification\", model=model, tokenizer=tokenizer, aggregation_strategy=\"simple\")\r\n ner_results = nlp(text)\r\n print(ner_results)\r\n\r\n return text\r\n\r\n\r\n@app.route('/')\r\ndef memory_test():\r\n text = \"Adam is going to London with Mark and then to Paris with Mary.\"\r\n output_text = model_test(text)\r\n return output_text\r\n\r\n\r\nif __name__ == '__main__':\r\n app.run()\r\n```",
"If I have to host the inference code using FastAPI and transformers Pipeline, should I be creating new instances of Pipeline every time I get a request? I can ensure the model loaded only once. Also, is Pipeline thread safe?",
"> should I be creating new instances of Pipeline every time I get a request?\r\n\r\nThat would be super wasteful. The pipeline creates tokenizer, feature_extractor and model for you. Even if you ensure the model is loaded only once, those other ressources will probably be created.\r\nSeems simpler to simply cache the create of the pipeline directly.\r\n\r\n> is Pipeline thread safe?\r\n\r\nNo. The pipeline itself doesn't do anything too fancy, so you should be ok, but PyTorch is not thread safe itself (it **should** be for reading). But Torch is already using all your cores for inference so nothing to gain by multiplexing the inference itself. And for GPU, it's even worse since you cannot multiplex the kernels either, but you could end up entangling requests from the pipeline (leading to worse latency for all requests)\r\n\r\nIn general, in my experience playing with threads and torch is just asking for trouble. I would go for a single thread pipeline owning thread (or process) and communicate with it your requests. Seems to work much better in almost all the cases. Now, torch itself is not async, so it will block the main thread if you're using async."
] | 1,670
| 1,676
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
@Lysa
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using a locally saved model to perform ``token-classification``. I saved the model files using the below code
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")
tokenizer.save_pretrained('./modelfiles')
model.save_pretrained('./modelfiles')
```
I am using the model in a Flask web app to take in text, perform ``token-classification`` and return the result. The minimal example of that is given below
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
from flask import Flask
import gc
app = Flask(__name__)
def model_test(text):
tokenizer = AutoTokenizer.from_pretrained("./modelfiles")
model = AutoModelForTokenClassification.from_pretrained("./modelfiles")
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner_results = nlp(text)
del model
del tokenizer
del nlp
gc.collect() # adding this releases the memory after first call only..
return text
@app.route('/')
def memory_test():
text = "Adam is going to London with Mark and then to Paris with Mary."
output_text = model_test(text)
return output_text
if __name__ == '__main__':
app.run()
```
The above script creates a simple flask web app and then calls the ``model_test()`` every time the page is refreshed.
The memory is not released after each call. What's interesting is that after adding ``gc.collect()`` in the function, it is released on the first call only; after the second call it does not release memory, as can be seen from the memory-usage graph screenshot. Without ``gc.collect()``, the first function call does not release memory.

### Expected behavior
As can be seen from the screenshot, the memory is released after the first call, but for some reason it keeps accumulating after the second call, and this leads to a crash.
The models are expected to release memory after each call, as happens after the first.
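The "load once per process" idea suggested in the comments can be sketched with a cached loader (the loader body is a placeholder, not the actual transformers pipeline construction):

```python
from functools import lru_cache


@lru_cache(maxsize=1)
def get_pipeline():
    # Stands in for the expensive tokenizer/model/pipeline construction;
    # the cache guarantees it runs only once per process.
    return {"loaded": True}


# Repeated requests reuse the same object instead of re-loading the model.
assert get_pipeline() is get_pipeline()
```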
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20594/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20593
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20593/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20593/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20593/events
|
https://github.com/huggingface/transformers/issues/20593
| 1,476,690,594
|
I_kwDOCUB6oc5YBIKi
| 20,593
|
How to convert a gradio text-geno script to run on gpu
|
{
"login": "cvinker",
"id": 13070943,
"node_id": "MDQ6VXNlcjEzMDcwOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/13070943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cvinker",
"html_url": "https://github.com/cvinker",
"followers_url": "https://api.github.com/users/cvinker/followers",
"following_url": "https://api.github.com/users/cvinker/following{/other_user}",
"gists_url": "https://api.github.com/users/cvinker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cvinker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cvinker/subscriptions",
"organizations_url": "https://api.github.com/users/cvinker/orgs",
"repos_url": "https://api.github.com/users/cvinker/repos",
"events_url": "https://api.github.com/users/cvinker/events{/privacy}",
"received_events_url": "https://api.github.com/users/cvinker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil, @abidlabs and @dawoodkhan82 ",
"What doesn't work ?\r\n\r\n- Is the model not on GPU ?\r\n- Does it crash ? If yes, can we see the stacktrace ?\r\n\r\n\r\nThis line is incorrect:\r\n```python\r\n out_text=out_text.to(device)\r\n```\r\n out_text is `str` so it can't be on a device (it's a pure python object :) )\r\n \r\n ```\r\n model.to(device)\r\n ``` \r\n will also fail, since the model with device_map=\"auto\" is supposed to be on multiple device. (If one device is enough, just don't use it and use directly `device=0` for instance.\r\n \r\n For your loading logic:\r\n ```python\r\ntext2text_generator = pipeline( model=\"facebook/galactica-1.3b\", num_workers=1, device_map=\"auto\")\r\n# \r\n ```\r\n should be enough\r\n \r\n \r\n Then `device_map=\"auto\"` only works when accelerate is in the environment. Could you make sure it's there ?\r\n \r\n Does this help ?\r\n If you had the space to show it might help also fetch some information about what is going wrong.\r\n\r\nThank you !\r\n",
"From the `gradio` side, there should be no difference whether the model is running on cpu or gpu. Can you confirm that the `predict()` function correctly runs on GPU?",
"@Narsil Thank you it's now functional with the following:\r\n\r\n```\r\nimport gradio as gr\r\nimport torch\r\nfrom transformers import pipeline\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\n#tokenizer = AutoTokenizer.from_pretrained(\"facebook/galactica-125m\")\r\n#model = AutoModelForCausalLM.from_pretrained(\"facebook/galactica-125m\")\r\ntext2text_generator = pipeline(model=\"facebook/galactica-1.3b\", num_workers=1, device=0)\r\n\r\ndef predict(text, max_length=64, temperature=0.7, do_sample=True):\r\n text = text.strip()\r\n out_text = text2text_generator(text, max_length=max_length,\r\n temperature=temperature,\r\n do_sample=do_sample,\r\n )[0]['generated_text']\r\n out_text = \"<p>\" + out_text + \"</p>\"\r\n out_text = out_text.replace(text, text + \"<b><span style='background-color: #ffffcc;'>\")\r\n out_text = out_text + \"</span></b>\"\r\n out_text = out_text.replace(\"\\n\", \"<br>\")\r\n return out_text\r\n torch.cuda.empty_cache()\r\niface = gr.Interface(\r\n fn=predict,\r\n inputs=[\r\n gr.inputs.Textbox(lines=5, label=\"Input Text\"),\r\n gr.inputs.Slider(minimum=32, maximum=5160, default=64, label=\"Max Length\"),\r\n gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, step=0.1, label=\"Temperature\"),\r\n gr.inputs.Checkbox(label=\"Do Sample\"),\r\n ],\r\n outputs=gr.HTML(),\r\n description=\"Galactica Base Model\",\r\n examples=[[\r\n \"The attention mechanism in LLM is\",\r\n 128,\r\n 0.7,\r\n True\r\n ],\r\n [\r\n \"Title: Attention is all you need\\n\\nAbstract:\",\r\n 128,\r\n 0.7,\r\n True\r\n ]\r\n ]\r\n)\r\n\r\niface.launch(share=True)\r\n\r\n```\r\n\r\nBut, I run out of memory making it do anything long and I don't know how to make it clear the ram once it gets a new prompt. I know `torch.dtype=torch.float16` but I'm not sure how to use it in this. Thank you for your help, I would share the space but I'm always changing it so it won't be online.",
"You are clearing the cache AFTER the return, so it will never be run.\r\n\r\nI think this code should be correct. But large prompts, large generation and even worse large beams (don't see them here) are really memory hungry, so it might just be a regular OOM. Have you tried using a larger GPU ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
I've been at this a while so I've decided to just ask.
```
import gradio as gr
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-125m")
text2text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer, num_workers=2)
def predict(text, max_length=64, temperature=0.7, do_sample=True):
text = text.strip()
out_text = text2text_generator(text, max_length=max_length,
temperature=temperature,
do_sample=do_sample,
eos_token_id = tokenizer.eos_token_id,
bos_token_id = tokenizer.bos_token_id,
pad_token_id = tokenizer.pad_token_id,
)[0]['generated_text']
out_text = "<p>" + out_text + "</p>"
out_text = out_text.replace(text, text + "<b><span style='background-color: #ffffcc;'>")
out_text = out_text + "</span></b>"
out_text = out_text.replace("\n", "<br>")
return out_text
iface = gr.Interface(
fn=predict,
inputs=[
gr.inputs.Textbox(lines=5, label="Input Text"),
gr.inputs.Slider(minimum=32, maximum=256, default=64, label="Max Length"),
gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, step=0.1, label="Temperature"),
gr.inputs.Checkbox(label="Do Sample"),
],
outputs=gr.HTML(),
description="Galactica Base Model",
examples=[[
"The attention mechanism in LLM is",
128,
0.7,
True
],
[
"Title: Attention is all you need\n\nAbstract:",
128,
0.7,
True
]
]
)
iface.launch()
```
That's what I want to make run on my gpu, here's what I've got that doesn't work.
```
import gradio as gr
import torch
from transformers import pipeline
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
#tokenizer.pad_token_id = 1
#tokenizer.padding_side = 'left'
#tokenizer.model_max_length = 2020
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto")
text2text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer, num_workers=1, device_map="auto")
device = torch.device('cuda')
model.to(device)
def predict(text, max_length=64, temperature=0.7, top_k=25, top_p=0.9, no_repeat_ngram_size=10, do_sample=True):
text = text.strip()
#input_ids = tokenizer(text, return_tensors="pt").input_ids.to("cuda")
out_text = text2text_generator(text,
max_length=max_length,
temperature=temperature,
top_k=top_k,
top_p=top_p,
no_repeat_ngram_size=10,
do_sample=do_sample,
eos_token_id = tokenizer.eos_token_id,
bos_token_id = tokenizer.bos_token_id,
pad_token_id = tokenizer.pad_token_id,
return_tensors="pt",
)[0]['generated_text']
out_text=out_text.to(device)
out_text = "<p>" + out_text + "</p>"
out_text = out_text.replace(text, text + "<b><span style='background-color: #ffffcc;'>")
out_text = out_text + "</span></b>"
out_text = out_text.replace("\n", "<br>")
return out_text
iface = gr.Interface(
fn=predict,
inputs=[
gr.inputs.Textbox(lines=5, label="Input Text"),
gr.inputs.Slider(minimum=32, maximum=1024, default=64, label="Max Length"),
gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.7, step=0.05, label="Temperature"),
gr.inputs.Slider(minimum=1, maximum=99, default=25, step=5, label="Top k"),
gr.inputs.Slider(minimum=0.5, maximum=0.99, default=0.9, step=0.01, label="Top p"),
gr.inputs.Slider(minimum=1, maximum=999, default=10, step=1, label="No Repeat Ngram Size"),
gr.inputs.Checkbox(label="Do Sample"),
],
outputs=gr.HTML(),
description="Galactica Base Model",
examples=[[
"The attention mechanism in LLM is",
128,
0.7,
25,
0.9,
10,
True
],
[
"Title: Attention is all you need\n\nAbstract:",
128,
0.7,
25,
0.9,
10,
True
]
]
)
iface.launch()
```
Any pointers would be appreciated; I'm rusty, if you couldn't tell.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20593/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20592
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20592/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20592/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20592/events
|
https://github.com/huggingface/transformers/pull/20592
| 1,476,605,613
|
PR_kwDOCUB6oc5EWGoC
| 20,592
|
Check if docstring is `None` before formating it
|
{
"login": "xxyzz",
"id": 21101839,
"node_id": "MDQ6VXNlcjIxMTAxODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/21101839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xxyzz",
"html_url": "https://github.com/xxyzz",
"followers_url": "https://api.github.com/users/xxyzz/followers",
"following_url": "https://api.github.com/users/xxyzz/following{/other_user}",
"gists_url": "https://api.github.com/users/xxyzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xxyzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xxyzz/subscriptions",
"organizations_url": "https://api.github.com/users/xxyzz/orgs",
"repos_url": "https://api.github.com/users/xxyzz/repos",
"events_url": "https://api.github.com/users/xxyzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xxyzz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks again for your contribution!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
Docstrings could be `None` if the Python optimize level is set to 2.
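Under `python -OO` (or `PYTHONOPTIMIZE=2`) the interpreter strips docstrings, so any helper that calls `fn.__doc__.format(...)` raises `AttributeError: 'NoneType' object has no attribute 'format'`. A minimal sketch of the guard this PR adds; the helper name here is illustrative, not the actual transformers function:

```python
def format_docstring(fn, **values):
    # fn.__doc__ is None when docstrings were stripped (python -OO),
    # so only format it when it actually exists.
    if fn.__doc__ is not None:
        fn.__doc__ = fn.__doc__.format(**values)
    return fn


def greet():
    """Says hello to {name}."""


greet.__doc__ = None  # simulate running under -OO
format_docstring(greet, name="world")  # no AttributeError raised
print(greet.__doc__)  # None
```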
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20591.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20592/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20592",
"html_url": "https://github.com/huggingface/transformers/pull/20592",
"diff_url": "https://github.com/huggingface/transformers/pull/20592.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20592.patch",
"merged_at": 1670330658000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20591
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20591/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20591/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20591/events
|
https://github.com/huggingface/transformers/issues/20591
| 1,476,595,267
|
I_kwDOCUB6oc5YAw5D
| 20,591
|
AttributeError: 'NoneType' object has no attribute 'format'
|
{
"login": "xxyzz",
"id": 21101839,
"node_id": "MDQ6VXNlcjIxMTAxODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/21101839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xxyzz",
"html_url": "https://github.com/xxyzz",
"followers_url": "https://api.github.com/users/xxyzz/followers",
"following_url": "https://api.github.com/users/xxyzz/following{/other_user}",
"gists_url": "https://api.github.com/users/xxyzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xxyzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xxyzz/subscriptions",
"organizations_url": "https://api.github.com/users/xxyzz/orgs",
"repos_url": "https://api.github.com/users/xxyzz/repos",
"events_url": "https://api.github.com/users/xxyzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xxyzz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### System Info
transformers version: 4.21.3
OS: Windows 10
Python version: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- Install one of spaCy's transformer model.
```
$ python -m pip install spacy[cuda-autodetect]
$ python -m spacy download en_core_web_trf
```
- Set `PYTHONOPTIMIZE` to 2 or use `-OO` option.
- Load spaCy model:
```python
import spacy
spacy.load("en_core_web_trf")
```
- Get error similar to this:
```
File "C:\x\spacy\__init__.py", line 54, in load
File "C:\x\spacy\util.py", line 432, in load_model
File "C:\x\spacy\util.py", line 468, in load_model_from_package
File "C:\x\en_core_web_lg\__init__.py", line 10, in load
File "C:\x\spacy\util.py", line 649, in load_model_from_init_py
File "C:\x\spacy\util.py", line 506, in load_model_from_path
File "C:\x\spacy\util.py", line 554, in load_model_from_config
File "C:\x\spacy\language.py", line 1788, in from_config
File "C:\x\spacy\language.py", line 163, in __init__
File "C:\x\catalogue\__init__.py", line 119, in get_all
File "C:\x\catalogue\__init__.py", line 134, in get_entry_points
File "importlib\metadata\__init__.py", line 162, in load
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\x\spacy_transformers\__init__.py", line 1, in <module>
File "C:\x\spacy_transformers\architectures.py", line 6, in <module>
File "C:\x\spacy_transformers\layers\__init__.py", line 1, in <module>
File "C:\x\spacy_transformers\layers\listener.py", line 4, in <module>
File "C:\x\spacy_transformers\data_classes.py", line 5, in <module>
File "C:\x\transformers\tokenization_utils.py", line 26, in <module>
File "C:\x\transformers\tokenization_utils_base.py", line 3646, in <module>
AttributeError: 'NoneType' object has no attribute 'format'
```
This could happen when running the code in a Python interpreter whose optimize level is set to 2 and cannot be changed, for example: [calibre](https://calibre-ebook.com).
### Expected behavior
No error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20591/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20590
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20590/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20590/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20590/events
|
https://github.com/huggingface/transformers/pull/20590
| 1,476,560,255
|
PR_kwDOCUB6oc5EV8fc
| 20,590
|
Vision processors - replace FE with IPs
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Replaces feature extractors with image processors in the `Processor` classes, which bundle together tokenizers and feature extractors.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20590/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20590",
"html_url": "https://github.com/huggingface/transformers/pull/20590",
"diff_url": "https://github.com/huggingface/transformers/pull/20590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20590.patch",
"merged_at": 1670582914000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20589
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20589/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20589/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20589/events
|
https://github.com/huggingface/transformers/pull/20589
| 1,476,421,611
|
PR_kwDOCUB6oc5EVdUF
| 20,589
|
[Whisper] Move decoder id method to tokenizer
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Moves the method `get_decoder_prompt_ids` from the processor to the tokenizer. The primary reason for this change is that the ASR pipeline class does not load the processor object, but rather the feature extractor and tokenizer separately (see [docs](https://github.com/huggingface/transformers/blob/699e90437f984d69ad3c9b891dd2e9d0fc2cffe4/src/transformers/pipelines/automatic_speech_recognition.py#L123)). Therefore, as things currently stand, the pipeline does not have access to the processor method `get_decoder_prompt_ids`. By moving the method to the tokenizer, the pipeline gains access to it.
Note that this is not a breaking change: we retain a method `get_decoder_prompt_ids` in the processor. This method simply calls the `get_decoder_prompt_ids` from the tokenizer:
https://github.com/huggingface/transformers/blob/ca8b332d31a1b90e18f134620e69063418add69e/src/transformers/models/whisper/processing_whisper.py#L44-L45
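The backward-compatibility mechanism is plain delegation: the processor keeps a method with the same name that simply forwards to the tokenizer. A schematic sketch of the pattern; the class names and the returned ids here are made up for illustration, not real Whisper token ids:

```python
class WhisperTokenizerSketch:
    def get_decoder_prompt_ids(self, task=None, language=None):
        # Made-up (task, token id) pairs purely for illustration.
        return [(1, 50259), (2, 50359)]


class WhisperProcessorSketch:
    def __init__(self, feature_extractor, tokenizer):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

    def get_decoder_prompt_ids(self, *args, **kwargs):
        # Retained for backward compatibility: delegate to the tokenizer,
        # which is what the ASR pipeline loads directly.
        return self.tokenizer.get_decoder_prompt_ids(*args, **kwargs)


processor = WhisperProcessorSketch(None, WhisperTokenizerSketch())
print(processor.get_decoder_prompt_ids())  # [(1, 50259), (2, 50359)]
```

Because the processor method is a pure pass-through, callers that previously used `processor.get_decoder_prompt_ids(...)` keep working unchanged.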
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20589/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20589",
"html_url": "https://github.com/huggingface/transformers/pull/20589",
"diff_url": "https://github.com/huggingface/transformers/pull/20589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20589.patch",
"merged_at": 1670252045000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20588
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20588/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20588/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20588/events
|
https://github.com/huggingface/transformers/pull/20588
| 1,476,388,686
|
PR_kwDOCUB6oc5EVVvB
| 20,588
|
Ci-whisper-asr
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
In a recent update, we followed the original code, which changed some of the suppress tokens for better performance. This led to a small change in the output of one particular case. Tested against the original code, and we now have the correct output!
Related to #20493 and #20512
See [here](https://huggingface.co/openai/whisper-large/commit/ed97120f929257fb801f99587ada69be0daf5b0a) for the particular commit
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20588/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20588",
"html_url": "https://github.com/huggingface/transformers/pull/20588",
"diff_url": "https://github.com/huggingface/transformers/pull/20588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20588.patch",
"merged_at": 1670255438000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20587
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20587/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20587/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20587/events
|
https://github.com/huggingface/transformers/pull/20587
| 1,476,352,035
|
PR_kwDOCUB6oc5EVNfq
| 20,587
|
[Vision] fix small nit on `BeitDropPath` layers
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a small nit for `DropPath` layers pointed out in: https://github.com/huggingface/transformers/pull/20550#discussion_r1039395745 & https://github.com/huggingface/transformers/pull/20550#discussion_r1039459045
Preferred to fix this separately in a PR to avoid modifying too many files in #20550
cc @NielsRogge @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20587/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20587",
"html_url": "https://github.com/huggingface/transformers/pull/20587",
"diff_url": "https://github.com/huggingface/transformers/pull/20587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20587.patch",
"merged_at": 1670248429000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20586
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20586/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20586/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20586/events
|
https://github.com/huggingface/transformers/pull/20586
| 1,476,344,791
|
PR_kwDOCUB6oc5EVL4s
| 20,586
|
Install `tensorflow_probability` for TF pipeline CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20586). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
So that tests like `TQAPipelineTests.test_integration_sqa_tf` or `TQAPipelineTests.test_slow_tokenizer_sqa_tf` can run.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20586/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20586",
"html_url": "https://github.com/huggingface/transformers/pull/20586",
"diff_url": "https://github.com/huggingface/transformers/pull/20586.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20586.patch",
"merged_at": 1670252846000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20585
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20585/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20585/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20585/events
|
https://github.com/huggingface/transformers/pull/20585
| 1,476,316,231
|
PR_kwDOCUB6oc5EVFdf
| 20,585
|
Add `require_torch` to 2 pipeline tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
The 2 tests are for `pytorch`, but in the TF pipeline test CI job (where `torch` is not available), they run with TF models.
This is not expected.
Before #20149, these 2 tests were decorated with `require_torch_scatter`. After that PR, the tests try to run with TF, but fail with `TFTapasMainLayer requires the tensorflow_probability library but it was not found in your environment.`
(this is another thing to fix in the docker file)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20585/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20585",
"html_url": "https://github.com/huggingface/transformers/pull/20585",
"diff_url": "https://github.com/huggingface/transformers/pull/20585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20585.patch",
"merged_at": 1670252799000
}
|