| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/19382
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19382/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19382/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19382/events
|
https://github.com/huggingface/transformers/pull/19382
| 1,399,835,220
|
PR_kwDOCUB6oc5AUvkT
| 19,382
|
Added tokenize keyword arguments to feature extraction pipeline
|
{
"login": "quancore",
"id": 15036825,
"node_id": "MDQ6VXNlcjE1MDM2ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/15036825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quancore",
"html_url": "https://github.com/quancore",
"followers_url": "https://api.github.com/users/quancore/followers",
"following_url": "https://api.github.com/users/quancore/following{/other_user}",
"gists_url": "https://api.github.com/users/quancore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quancore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quancore/subscriptions",
"organizations_url": "https://api.github.com/users/quancore/orgs",
"repos_url": "https://api.github.com/users/quancore/repos",
"events_url": "https://api.github.com/users/quancore/events{/privacy}",
"received_events_url": "https://api.github.com/users/quancore/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"After checking, the broken tests are exactly broken by the lack of `truncation` support.\r\n\r\nAlso for quality you should be able to to \r\n```\r\npip install -e .[dev] # or pip install transformers[dev]\r\nmake fixup\r\n```\r\nCheers.",
"@Narsil I made the changes you indicate.",
"@sgugger I have moved the import to top.",
"Thanks a lot!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds tokenizer keyword arguments (`tokenize_kwargs`) to the feature-extraction pipeline. Fixes: https://github.com/huggingface/transformers/issues/19374
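The argument-grouping requested in review (sending all tokenizer options as one `tokenize_kwargs` dict, routed through `_sanitize_parameters`) can be sketched in plain Python. The function below is illustrative, not the actual pipeline code:

```python
def sanitize_parameters(**kwargs):
    """Toy version of the Pipeline._sanitize_parameters pattern: split
    call-time kwargs into (preprocess, forward, postprocess) dicts."""
    preprocess_params = {}
    if "tokenize_kwargs" in kwargs:
        # Tokenizer options travel as one group so they cannot collide
        # with forward-pass options of the same name (e.g. max_length).
        preprocess_params["tokenize_kwargs"] = kwargs["tokenize_kwargs"]
    postprocess_params = {}
    if "return_tensors" in kwargs:
        postprocess_params["return_tensors"] = kwargs["return_tensors"]
    return preprocess_params, {}, postprocess_params

pre, fwd, post = sanitize_parameters(
    tokenize_kwargs={"padding": True, "truncation": True},
    return_tensors="np",
)
print(pre)  # {'tokenize_kwargs': {'padding': True, 'truncation': True}}
```

Grouping the tokenizer arguments this way keeps preprocess-only options from being confused with same-named model or generation options.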
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
@Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19382/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19382",
"html_url": "https://github.com/huggingface/transformers/pull/19382",
"diff_url": "https://github.com/huggingface/transformers/pull/19382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19382.patch",
"merged_at": 1665507282000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19380
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19380/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19380/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19380/events
|
https://github.com/huggingface/transformers/pull/19380
| 1,399,763,617
|
PR_kwDOCUB6oc5AUftC
| 19,380
|
Added type hints for TF: TransfoXL
|
{
"login": "thliang01",
"id": 21286104,
"node_id": "MDQ6VXNlcjIxMjg2MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thliang01",
"html_url": "https://github.com/thliang01",
"followers_url": "https://api.github.com/users/thliang01/followers",
"following_url": "https://api.github.com/users/thliang01/following{/other_user}",
"gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thliang01/subscriptions",
"organizations_url": "https://api.github.com/users/thliang01/orgs",
"repos_url": "https://api.github.com/users/thliang01/repos",
"events_url": "https://api.github.com/users/thliang01/events{/privacy}",
"received_events_url": "https://api.github.com/users/thliang01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I removed these Optional types as your suggested.\r\nHolp you may check it and then merge my request.\r\n@Rocketknight1 ",
"Looks good to me, thank you!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Based on Issue #16059, I have added type hints for the TensorFlow TransfoXL model.
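As a sketch of what such a PR changes, adding type hints means annotating the model's `call` arguments and return type. The signature below is illustrative only, not the real `TFTransfoXLModel.call`:

```python
from typing import Optional, Tuple

import numpy as np

# Illustrative only: the names loosely mirror TransfoXL ("mems" is its
# recurrence memory), but this is not the actual model signature.
def call(
    input_ids: Optional[np.ndarray] = None,
    mems: Optional[Tuple[np.ndarray, ...]] = None,
    training: bool = False,
) -> Tuple[np.ndarray, ...]:
    # Stub body so the annotated signature is runnable on its own.
    return (input_ids,) if input_ids is not None else ()
```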
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19380/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19380",
"html_url": "https://github.com/huggingface/transformers/pull/19380",
"diff_url": "https://github.com/huggingface/transformers/pull/19380.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19380.patch",
"merged_at": 1665150299000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19381
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19381/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19381/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19381/events
|
https://github.com/huggingface/transformers/issues/19381
| 1,399,800,679
|
I_kwDOCUB6oc5Tb0Nn
| 19,381
|
Tokenizer loading distilbert instead of bert
|
{
"login": "mv96",
"id": 14794584,
"node_id": "MDQ6VXNlcjE0Nzk0NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14794584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mv96",
"html_url": "https://github.com/mv96",
"followers_url": "https://api.github.com/users/mv96/followers",
"following_url": "https://api.github.com/users/mv96/following{/other_user}",
"gists_url": "https://api.github.com/users/mv96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mv96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mv96/subscriptions",
"organizations_url": "https://api.github.com/users/mv96/orgs",
"repos_url": "https://api.github.com/users/mv96/repos",
"events_url": "https://api.github.com/users/mv96/events{/privacy}",
"received_events_url": "https://api.github.com/users/mv96/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi this issue probably belongs in `transformers` not in `tokenizers` so I'll transfer the issue.\r\nThat being said if you could\r\n\r\n\r\n> Please share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.\r\n\r\nThat would help, you're probably running a different version of the code.\r\nalso you mention it's running distilbert, how do you know ?\r\n",
"From a quick look, the issue likely comes from PyTorch's `mps` support which seems to give different results for the same operations",
"> Hi this issue probably belongs in `transformers` not in `tokenizers` so I'll transfer the issue. That being said if you could\r\n> \r\n> > Please share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.\r\n> \r\n\r\nhere is the system information from the command you have mentioned:\r\n\r\nWARNING:tensorflow:From /opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nUse `tf.config.list_physical_devices('GPU')` instead.\r\nMetal device set to: Apple M1\r\n\r\nsystemMemory: 16.00 GB\r\nmaxCacheSize: 5.33 GB\r\n\r\n2022-10-07 13:42:35.226096: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.\r\n2022-10-07 13:42:35.226193: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.22.2\r\n- Platform: macOS-12.6-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- Huggingface_hub version: 0.9.1\r\n- PyTorch version (GPU?): not installed (NA)\r\n- Tensorflow version (GPU?): 2.10.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n\r\n> That would help, you're probably running a different version of the code. 
also you mention it's running distilbert, how do you know ?\r\n\r\nI just copy paste the same code on Google Colab and it gives a different result, I suppose it's using distillert because a warning message appears, though I am not sure of this .\r\n\r\n",
"> From a quick look, the issue likely comes from PyTorch's `mps` support which seems to give different results for the same operations\r\n\r\nI actually tried removing the PyTorch library, still the same problem. Any suggestions are welcome",
"Hmm it seems like `Tensforflow` with M1 is having an issue:\r\n\r\n```\r\n2022-10-07 13:42:35.226096: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.\r\n2022-10-07 13:42:35.226193: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )\r\n```\r\n\r\nDon't have an M1 handy to test it on though :(",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,668
| 1,668
|
NONE
| null |
**Machine specs:**
MacBook Pro (13-inch, M1, 2020)
chip Apple M1
Memory 16 GB
Hello,
I am trying to use the Hugging Face pipelines, which work fine on Colab, but on my machine they behave absurdly:
```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, FillMaskPipeline

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForMaskedLM.from_pretrained(name)
unmasker = FillMaskPipeline(model=model, tokenizer=tokenizer)
unmasker("[MASK] is the capital of France.", top_k=10)
```
<img width="1063" alt="image" src="https://user-images.githubusercontent.com/14794584/194323827-0f2942a3-94c5-4e7e-a3a7-421d7de8b391.png">
Then, if you try it on Colab, it works just fine:
<img width="1267" alt="image" src="https://user-images.githubusercontent.com/14794584/194324180-e17e7ffb-9f2c-436b-984f-c92755f5fb89.png">
Am I doing something really stupid, or is it genuinely a problem?
Thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19381/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19379
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19379/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19379/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19379/events
|
https://github.com/huggingface/transformers/issues/19379
| 1,399,538,844
|
I_kwDOCUB6oc5Ta0Sc
| 19,379
|
fill-mask with roberta-base and --targets options
|
{
"login": "felgaet",
"id": 87537133,
"node_id": "MDQ6VXNlcjg3NTM3MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/87537133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felgaet",
"html_url": "https://github.com/felgaet",
"followers_url": "https://api.github.com/users/felgaet/followers",
"following_url": "https://api.github.com/users/felgaet/following{/other_user}",
"gists_url": "https://api.github.com/users/felgaet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felgaet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felgaet/subscriptions",
"organizations_url": "https://api.github.com/users/felgaet/orgs",
"repos_url": "https://api.github.com/users/felgaet/repos",
"events_url": "https://api.github.com/users/felgaet/events{/privacy}",
"received_events_url": "https://api.github.com/users/felgaet/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there! Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. In this instance, it's because RoBERTa uses a different tokenization algorithm than BERT which mark the beginning of each word with a special symbol.",
"Thanks!"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
Dear @sgugger,
I was trying to use the fill-mask function starting from "roberta-base" while limiting the search to target words using the `targets` option.
For numerous target words, however, I get a warning that the word does not exist in the vocabulary.
An example follows:
```
from transformers import pipeline
unmasker = pipeline('fill-mask', model="roberta-base", tokenizer="roberta-base", top_k=10)
filled = unmasker("When I am hungry, I eat a <mask>.", targets=["pizza", "banana", "pasta"])
for r in filled:
print(r['token_str'], "->", r['score'])
```
The output is:
```
The specified target token `pizza` does not exist in the model vocabulary. Replacing with `p`.
The specified target token `banana` does not exist in the model vocabulary. Replacing with `ban`.
The specified target token `pasta` does not exist in the model vocabulary. Replacing with `past`.
p -> 1.0362288094256655e-07
ban -> 1.0942345918252272e-09
past -> 5.667477598336745e-10
```
This problem does not exist with BERT:
```
pizza -> 0.0014738412573933601
banana -> 0.0009286535205319524
pasta -> 1.1728033314284403e-05
```
Could you explain to me the reason for this behaviour? How can I fix it?
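For context, RoBERTa's byte-level BPE stores word-initial tokens with a leading-space marker ("Ġ"), so the bare string "pizza" is not a single vocabulary entry even though " pizza" is. A toy sketch of that lookup (the vocabulary fragment below is illustrative, not taken from the real roberta-base vocab file):

```python
# Illustrative vocabulary fragment; "Ġ" marks a token preceded by a space.
toy_vocab = {"Ġpizza", "Ġbanana", "Ġpasta", "p", "ban", "past"}

def lookup(target: str) -> str:
    """Return the vocab entry a space-prefixed target would map to."""
    spaced = "Ġ" + target
    return spaced if spaced in toy_vocab else target

print(lookup("pizza"))  # Ġpizza
```

With the real pipeline, passing the targets with an explicit leading space, e.g. `targets=[" pizza", " banana", " pasta"]`, typically resolves the warning, since the tokenizer then maps them to the `Ġ`-prefixed vocabulary entries.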
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19379/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19378
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19378/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19378/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19378/events
|
https://github.com/huggingface/transformers/pull/19378
| 1,399,455,506
|
PR_kwDOCUB6oc5ATa5a
| 19,378
|
Add TF whisper
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Adds TF Whisper port of PyTorch implementation
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19378/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19378",
"html_url": "https://github.com/huggingface/transformers/pull/19378",
"diff_url": "https://github.com/huggingface/transformers/pull/19378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19378.patch",
"merged_at": 1665409698000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19377
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19377/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19377/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19377/events
|
https://github.com/huggingface/transformers/pull/19377
| 1,399,267,046
|
PR_kwDOCUB6oc5ASwhr
| 19,377
|
Fix DETR docs example, add post_process_object_detection to DETR docs
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'm re-running the documentation build to check that the doc is built correctly, will merge afterwards :+1: ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
- Fixes DETR docs example
- Adds post_process_object_detection method to DETR docs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19377/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19377",
"html_url": "https://github.com/huggingface/transformers/pull/19377",
"diff_url": "https://github.com/huggingface/transformers/pull/19377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19377.patch",
"merged_at": 1665093747000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19376
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19376/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19376/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19376/events
|
https://github.com/huggingface/transformers/pull/19376
| 1,399,261,019
|
PR_kwDOCUB6oc5ASvLu
| 19,376
|
fixed issue #19368
|
{
"login": "ShivangMishra",
"id": 35092323,
"node_id": "MDQ6VXNlcjM1MDkyMzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/35092323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivangMishra",
"html_url": "https://github.com/ShivangMishra",
"followers_url": "https://api.github.com/users/ShivangMishra/followers",
"following_url": "https://api.github.com/users/ShivangMishra/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivangMishra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivangMishra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivangMishra/subscriptions",
"organizations_url": "https://api.github.com/users/ShivangMishra/orgs",
"repos_url": "https://api.github.com/users/ShivangMishra/repos",
"events_url": "https://api.github.com/users/ShivangMishra/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivangMishra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(merging -- the failed test is being tracked internally)"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Fixes #19368
Following issue #19368, I have corrected the type hint to `Optional[Tuple[int, float]]`.
Please merge this PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19376/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19376",
"html_url": "https://github.com/huggingface/transformers/pull/19376",
"diff_url": "https://github.com/huggingface/transformers/pull/19376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19376.patch",
"merged_at": 1665419523000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19375
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19375/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19375/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19375/events
|
https://github.com/huggingface/transformers/issues/19375
| 1,399,253,875
|
I_kwDOCUB6oc5TZutz
| 19,375
|
DeformableDetrForObjectDetection is not supported
|
{
"login": "Eggwardhan",
"id": 32223217,
"node_id": "MDQ6VXNlcjMyMjIzMjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32223217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eggwardhan",
"html_url": "https://github.com/Eggwardhan",
"followers_url": "https://api.github.com/users/Eggwardhan/followers",
"following_url": "https://api.github.com/users/Eggwardhan/following{/other_user}",
"gists_url": "https://api.github.com/users/Eggwardhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eggwardhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eggwardhan/subscriptions",
"organizations_url": "https://api.github.com/users/Eggwardhan/orgs",
"repos_url": "https://api.github.com/users/Eggwardhan/repos",
"events_url": "https://api.github.com/users/Eggwardhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eggwardhan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nDeformable DETR is not yet available in a PyPi release. For now, you have to install the library from source:\r\n\r\n```\r\npip install -q git+https://github.com/huggingface/transformers.git\r\n```"
] | 1,665
| 1,665
| 1,665
|
NONE
| null |
### System Info
When I use
`conda create -n hug numpy matplotlib transformers python=3.8` and activate the hug env, or just use `pip install transformers`, I get the same result:
```
(hug) root@e:/# python
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoFeatureExtractor, DeformableDetrForObjectDetection
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'DeformableDetrForObjectDetection' from 'transformers' (/opt/miniconda3/envs/hug/lib/python3.8/site-packages/transformers/__init__.py)
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `conda create -n hug numpy matplotlib transformers python=3.8`
2. `conda activate hug`
3. `from transformers import AutoFeatureExtractor, DeformableDetrForObjectDetection`
### Expected behavior
Can anyone tell me why?
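One way to make this failure mode explicit is a stdlib-only version gate. The `4.23.0` threshold below is an assumption inferred from the maintainer's reply that a source install was required at the time, not a documented minimum:

```python
def parse_version(v: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints (stdlib only)."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

installed = "4.22.2"  # e.g. transformers.__version__
if parse_version(installed) < parse_version("4.23.0"):
    print("DeformableDetrForObjectDetection not in this release; "
          "install from source: pip install git+https://github.com/huggingface/transformers.git")
```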
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19375/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19374
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19374/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19374/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19374/events
|
https://github.com/huggingface/transformers/issues/19374
| 1,399,199,502
|
I_kwDOCUB6oc5TZhcO
| 19,374
|
Feature extraction pipeline does not consider parameters
|
{
"login": "quancore",
"id": 15036825,
"node_id": "MDQ6VXNlcjE1MDM2ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/15036825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quancore",
"html_url": "https://github.com/quancore",
"followers_url": "https://api.github.com/users/quancore/followers",
"following_url": "https://api.github.com/users/quancore/following{/other_user}",
"gists_url": "https://api.github.com/users/quancore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quancore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quancore/subscriptions",
"organizations_url": "https://api.github.com/users/quancore/orgs",
"repos_url": "https://api.github.com/users/quancore/repos",
"events_url": "https://api.github.com/users/quancore/events{/privacy}",
"received_events_url": "https://api.github.com/users/quancore/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @quancore ,\r\n\r\n`feature-extraction` pipeline is a bit of a beast tbh since there are MANY models and architectures behind.\r\nThat being said, the use case you describe seems very legit and interesting.\r\n\r\nDo you want to open a PR for it ?\r\n\r\nIn order to prevent issues, the `tokenizer` part of the arguments should probably be sent as a group. For instance `max_length` is both an argument possible for tokenization and for `generate` function and they mean 2 very different things.\r\nSo doing :\r\n\r\n\r\n```python\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\n\r\nfrom transformers import AutoModel, pipeline\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_name=\"anferico/bert-for-patents\"\r\n\r\ntext = [\"this is a pan\"]\r\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\nmodel = AutoModel.from_pretrained(model_name).to(device)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True, model_max_length=512)\r\npipe_ = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=torch.cuda.current_device())\r\np = pipe_(text, tokenize_kwargs = {\"padding\": True, \"truncation\": True, \"pad_to_max_length\":True}, return_tensors='np')\r\n\r\nnp.squeeze(p).shape\r\n```\r\nMight be more explicit.\r\n\r\nAll arguments needs to be explicited in `_sanitize_parameters`.\r\n\r\nWould you be willing to open a PR for it ?",
    "@Narsil I will try to do that. Do you have any suggestions for a similar reference implementation?",
"Does this help ?\r\nhttps://huggingface.co/docs/transformers/v4.22.2/en/add_new_pipeline\r\n\r\nYou can go and try at it, doesn't matter how far you go, you can ping me on the PR I'll try to provide some guidance. \r\n\r\nThanks a lot !"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
- transformers==4.22.2
- python==3.9.2
- Ubuntu 22.04
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
import torch
from torch.utils.data import Dataset
from transformers import AutoModel, pipeline
from transformers import AutoTokenizer
model_name="anferico/bert-for-patents"
text = ["this is a pan"]
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModel.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True, model_max_length=512)
pipe_ = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=torch.cuda.current_device())
p = pipe_(text, padding=True, truncation=True, pad_to_max_length=True, return_tensors='np')
np.squeeze(p).shape
```
### Expected behavior
There are several problems:
- It does not return a NumPy array, so I have to perform the squeeze operation myself.
- The bigger problem is that it ignores the padding parameters: the expected output shape is (512, 1024), but it currently returns (6, 1024). I have tried every parameter combination at every level, but nothing worked. When I checked the source code, it only considers the truncation parameter.
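For reference, a minimal plain-Python sketch (hypothetical names, following the grouped-kwargs idea suggested in the discussion) of how a pipeline's `_sanitize_parameters` could route a single `tokenize_kwargs` dict to preprocessing, keeping tokenizer options separate from arguments like `max_length` that also have other meanings:

```python
def _sanitize_parameters(tokenize_kwargs=None, return_tensors=None, **kwargs):
    # Group all tokenizer-related options under one dict so they cannot
    # collide with model/generate arguments such as max_length.
    preprocess_params = tokenize_kwargs if tokenize_kwargs is not None else {}
    postprocess_params = {}
    if return_tensors is not None:
        postprocess_params["return_tensors"] = return_tensors
    # (preprocess, forward, postprocess) parameter dicts
    return preprocess_params, {}, postprocess_params

pre, forward, post = _sanitize_parameters(
    tokenize_kwargs={"padding": "max_length", "truncation": True},
    return_tensors="np",
)
```

This mirrors the grouped call shape `pipe_(text, tokenize_kwargs={...})` proposed above; it is a sketch of the routing logic, not the actual pipeline code.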
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19374/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19373
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19373/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19373/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19373/events
|
https://github.com/huggingface/transformers/pull/19373
| 1,399,142,970
|
PR_kwDOCUB6oc5ASU5q
| 19,373
|
remove `return_dict_in_generate` condition on storing scores.
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19373). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
Fixes an issue in `generate` where the `output_scores` (or `output_attentions` or `output_hidden_states` ) cannot be obtained unless `return_dict_in_generate` is set to `True`. This is problematic because it's not what we want when we have a flag for each of these outputs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19373/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19373",
"html_url": "https://github.com/huggingface/transformers/pull/19373",
"diff_url": "https://github.com/huggingface/transformers/pull/19373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19373.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19372
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19372/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19372/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19372/events
|
https://github.com/huggingface/transformers/pull/19372
| 1,399,115,635
|
PR_kwDOCUB6oc5ASOz7
| 19,372
|
[wip: test doc-build]
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,665
| 1,667
| 1,667
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19372/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19372",
"html_url": "https://github.com/huggingface/transformers/pull/19372",
"diff_url": "https://github.com/huggingface/transformers/pull/19372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19372.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19371
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19371/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19371/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19371/events
|
https://github.com/huggingface/transformers/pull/19371
| 1,398,957,340
|
PR_kwDOCUB6oc5ARryG
| 19,371
|
Make retribert tokenizers independent from BertTokenizer
|
{
"login": "Davidy22",
"id": 872968,
"node_id": "MDQ6VXNlcjg3Mjk2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/872968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Davidy22",
"html_url": "https://github.com/Davidy22",
"followers_url": "https://api.github.com/users/Davidy22/followers",
"following_url": "https://api.github.com/users/Davidy22/following{/other_user}",
"gists_url": "https://api.github.com/users/Davidy22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Davidy22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davidy22/subscriptions",
"organizations_url": "https://api.github.com/users/Davidy22/orgs",
"repos_url": "https://api.github.com/users/Davidy22/repos",
"events_url": "https://api.github.com/users/Davidy22/events{/privacy}",
"received_events_url": "https://api.github.com/users/Davidy22/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Are these line length check errors coming from the comments? Those are the only lines that approach the cusp of 119, and my editor tells me they're right under the line, and some of those comment lines come straight from the copied-from file",
"Took me enough tries, but finally passing checks. Would have saved myself a hefty bit of trouble if I had started with the styling tools.\r\n\r\nI couldn't help but notice while going through the tools that `python utils/check_copies.py` will \"correct\" many existing files. Is this something that's intentionally held back? Does appear to pass checks without the inclusion of all those changes",
"Cleared up those little oversights.\r\n\r\nSeems like the black version specified in setup.py is 22.3 and the one on my global python install is 22.6, so both should still be within the black project's promise of a yearly standard. The diff tells me that all the changes are just removing blank lines after function signatures or the first line of control flow blocks, which seems to be the usual black policy so I guess they just fixed some edge case that made it add a few empty lines mid-2022.\r\n\r\nAlso that failed check in a part of code that isn't changed is a bit annoying. `test_run_swag_no_trainer` seems to be building and testing a model, which I guess just failed as part of a stars aligning variance thing. I could change something superficial and invoke another CI run to confirm that I didn't stealth break a different module in a PR to make a module more independent, but the path does seem pretty separate.",
"This is because we are using the `--preview` flag, which breaks their promise of compatibility between the versions of the same year. We really like the way it formats all strings (in docstrings/warnings/multi-line strings in general) so we activated it. In three months, we'll switch to the 2023 version and remove the `--preview` flag, which should solve this issue for next year :-)\r\n\r\nThe failure is flaky indeed. Thanks a lot for your work on this!",
"I notice a little pile of deprecation warnings in that same leg of the test suite. If those aren't something that's being intentionally held back for compatibility reasons, I could put together another PR just mopping those up after all of the items from 19303 are cleared off or claimed",
"By all means! We are ignoring those for now, but we do need to clean them up at some point!"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Part of a series of commits to step towards resolving #19303
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19371/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19371",
"html_url": "https://github.com/huggingface/transformers/pull/19371",
"diff_url": "https://github.com/huggingface/transformers/pull/19371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19371.patch",
"merged_at": 1665152040000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19370
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19370/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19370/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19370/events
|
https://github.com/huggingface/transformers/pull/19370
| 1,398,785,206
|
PR_kwDOCUB6oc5ARFTl
| 19,370
|
Removed Bert dependency from BertGeneration code base.
|
{
"login": "Threepointone4",
"id": 22583613,
"node_id": "MDQ6VXNlcjIyNTgzNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22583613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Threepointone4",
"html_url": "https://github.com/Threepointone4",
"followers_url": "https://api.github.com/users/Threepointone4/followers",
"following_url": "https://api.github.com/users/Threepointone4/following{/other_user}",
"gists_url": "https://api.github.com/users/Threepointone4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Threepointone4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Threepointone4/subscriptions",
"organizations_url": "https://api.github.com/users/Threepointone4/orgs",
"repos_url": "https://api.github.com/users/Threepointone4/repos",
"events_url": "https://api.github.com/users/Threepointone4/events{/privacy}",
"received_events_url": "https://api.github.com/users/Threepointone4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your contribution!"
] | 1,665
| 1,672
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
- Related to #19303
- Removed `BertGeneration` dependency from `Bert` code base.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19370/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19370",
"html_url": "https://github.com/huggingface/transformers/pull/19370",
"diff_url": "https://github.com/huggingface/transformers/pull/19370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19370.patch",
"merged_at": 1665164724000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19369
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19369/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19369/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19369/events
|
https://github.com/huggingface/transformers/pull/19369
| 1,398,628,256
|
PR_kwDOCUB6oc5AQi0w
| 19,369
|
edit: cast attention_mask to long in DataCollatorCTCWithPadding
|
{
"login": "ddobokki",
"id": 44228269,
"node_id": "MDQ6VXNlcjQ0MjI4MjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/44228269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddobokki",
"html_url": "https://github.com/ddobokki",
"followers_url": "https://api.github.com/users/ddobokki/followers",
"following_url": "https://api.github.com/users/ddobokki/following{/other_user}",
"gists_url": "https://api.github.com/users/ddobokki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddobokki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddobokki/subscriptions",
"organizations_url": "https://api.github.com/users/ddobokki/orgs",
"repos_url": "https://api.github.com/users/ddobokki/repos",
"events_url": "https://api.github.com/users/ddobokki/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddobokki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
    "I added one more line\r\n```\r\nif \"attention_mask\" in batch:\r\n```\r\nbecause in some cases the feature_extractor has the config [\"return_attention_mask\": false].\r\nBut is\r\n```\r\nif self.processor.feature_extractor.return_attention_mask:\r\n```\r\neasier to read? If so, I'll change it."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Many `inf` values are generated when training Wav2Vec2ForCTC with the DeepSpeed library, following [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py).
Wav2Vec2ForCTC's forward pass contains logic that sums the attention_mask, so when you train the model with DeepSpeed,
https://github.com/huggingface/transformers/blob/7e7f62bfa72ca03e9f16285dad182f7c57cd8cab/src/transformers/trainer.py#L2390
this method casts the attention_mask's dtype from int32 to float16.
Wav2Vec2FeatureExtractor produces the attention_mask with dtype int32.
Here is an example:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor
feature_extractor = Wav2Vec2FeatureExtractor(return_attention_mask=True)
data = [{'input_values':[0.1,0.1,0.1]},{'input_values':[0.2,0.2,0.2,0.2,0.2]}]
attn_mask = feature_extractor.pad(data,padding = "longest",return_tensors="pt")['attention_mask']
print(attn_mask.dtype)
-> torch.int32
```
So I added one line to DataCollatorCTCWithPadding that casts the attention_mask from int32 to long:
```python
batch['attention_mask'] = batch['attention_mask'].to(torch.long)
```
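The effect of that cast can be sketched in isolation (a minimal illustration of the dtype behavior, not the actual collator code):

```python
import torch

# The feature extractor emits an int32 attention mask; under DeepSpeed a
# later half-precision cast would turn it into float16, so the collator
# casts it to long (int64) first to keep it integral.
batch = {"attention_mask": torch.ones(2, 5, dtype=torch.int32)}
if "attention_mask" in batch:
    batch["attention_mask"] = batch["attention_mask"].to(torch.long)

print(batch["attention_mask"].dtype)  # torch.int64
```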
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # [18080](https://github.com/huggingface/transformers/issues/18080)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19369/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19369",
"html_url": "https://github.com/huggingface/transformers/pull/19369",
"diff_url": "https://github.com/huggingface/transformers/pull/19369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19369.patch",
"merged_at": 1665151549000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19368
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19368/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19368/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19368/events
|
https://github.com/huggingface/transformers/issues/19368
| 1,398,571,028
|
I_kwDOCUB6oc5TXIAU
| 19,368
|
Incorrect type hint of "exponential_decay_length_penalty" in function "generate"
|
{
"login": "pohunghuang-nctu",
"id": 61784186,
"node_id": "MDQ6VXNlcjYxNzg0MTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/61784186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pohunghuang-nctu",
"html_url": "https://github.com/pohunghuang-nctu",
"followers_url": "https://api.github.com/users/pohunghuang-nctu/followers",
"following_url": "https://api.github.com/users/pohunghuang-nctu/following{/other_user}",
"gists_url": "https://api.github.com/users/pohunghuang-nctu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pohunghuang-nctu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pohunghuang-nctu/subscriptions",
"organizations_url": "https://api.github.com/users/pohunghuang-nctu/orgs",
"repos_url": "https://api.github.com/users/pohunghuang-nctu/repos",
"events_url": "https://api.github.com/users/pohunghuang-nctu/events{/privacy}",
"received_events_url": "https://api.github.com/users/pohunghuang-nctu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,665
| 1,665
| 1,665
|
NONE
| null |
Hi,
Please check the line below:
https://github.com/huggingface/transformers/blob/7e7f62bfa72ca03e9f16285dad182f7c57cd8cab/src/transformers/generation_utils.py#L956
According to the docstring, "exponential_decay_length_penalty (`tuple(int, float)`, *optional*, defaults to `model.config.exponential_decay_length_penalty`):" (https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L1114),
the correct type hint should be `Optional[Tuple[int, float]]`: the tuple should have exactly 2 elements, with an `int` in position 0 and a `float` in position 1.
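A minimal sketch of the corrected hint (an illustrative standalone signature, not the actual `generate` method):

```python
from typing import Optional, Tuple

# Illustrative signature fragment: the docstring describes a
# (start_index, decay_factor) pair, so the hint should be
# Optional[Tuple[int, float]] rather than a looser type.
def generate(
    exponential_decay_length_penalty: Optional[Tuple[int, float]] = None,
):
    return exponential_decay_length_penalty

penalty = generate(exponential_decay_length_penalty=(10, 1.05))
```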
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19368/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19367
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19367/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19367/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19367/events
|
https://github.com/huggingface/transformers/pull/19367
| 1,398,438,807
|
PR_kwDOCUB6oc5AP5yV
| 19,367
|
Improve and fix ImageSegmentationPipeline
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts @sgugger thank you for the review! All comments are addressed, I'll merge the branch once all tests are passing.",
"> Thanks for making these changes and improving the pipeline ⭐\r\n> \r\n> I think the PR is good to go as is 👍 Would just like to see a bit more test coverage of the different segmentation tasks. Just one per task in the pipeline, possibly adapting existing ones to avoid making the test suite significantly slower. Have you visualised or counted the number of different pixels between the outputs on this branch and main as validation?\r\n\r\nThank you! Yes, I visualized the segments and they are the same."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
- Fixes the image segmentation pipeline test failures caused by changes to the postprocessing methods of supported models
- Updates the ImageSegmentationPipeline tests
- Improves docs, adds 'task' argument to optionally perform semantic, instance or panoptic segmentation
Note: `test_small_model_pt` test is skipped due to a random weight initialization error when loading the `hf-internal-testing/tiny-detr-mobilenetsv3-panoptic` model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19367/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19367",
"html_url": "https://github.com/huggingface/transformers/pull/19367",
"diff_url": "https://github.com/huggingface/transformers/pull/19367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19367.patch",
"merged_at": 1665174881000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19366
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19366/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19366/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19366/events
|
https://github.com/huggingface/transformers/pull/19366
| 1,398,436,169
|
PR_kwDOCUB6oc5AP5N5
| 19,366
|
Rework pipeline tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note: Flax tests are currently failing because it tries to run the tests inside the pipelines folder, will fix this tomorrow by having all non-pipeline jobs not run any of the tests in the pipelines folder. The `--ignore` flag from pytest does not work for some reason, but the test fetcher can probably fix that somehow ^^.",
"_The documentation is not available anymore as the PR was closed or merged._",
"I love this PR !\r\n\r\n> Note: Flax tests are currently failing because it tries to run the tests inside the pipelines folder\r\n\r\nCan't we make the tests parsable even when having neither PT nor TF ?\r\n\r\nHere this:\r\n\r\n`from transformers import DetrForSegmentation` seems to be the culprit (in `tests/pipelines/test_pipelines_for_segmentation.py`).\r\n\r\nShouldn't we have dummy models when `torch` is not available ?",
"> Can't we make the tests parsable even when having neither PT nor TF ?\r\n\r\nI can certainly do that too, but the pipeline tests are isolated to not be run by the other jobs, so they shouldn't even be run.",
"> I can certainly do that too, but the pipeline tests are isolated to not be run by the other jobs, so they shouldn't even be run.\r\n\r\nI'm thinking about regular user using JAX (or non libraries actually) doing `pytest -sv tests/` . IMO it'd be nice if the command ran instead of crashing."
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
The test fetcher has been pretty good at identifying which tests to run and even if we're not testing everything on each commit anymore, we've mostly avoided bad surprises.
Except for pipeline tests.
This is because the pipeline tests are structured in a way that makes it hard for the test fetcher to guess it has to run them, as they don't seem to rely on anything other than the pipeline.
Also, running pipeline tests is annoying as you have to remember to activate a special env variable. It made sense back in the day when we had all tests in one folder, but now that they are nicely structured, this is completely unnecessary.
Thus this PR proposes two things:
1. remove the special marker for pipeline tests and the corresponding env variable. We know they are all in `tests/pipelines`.
2. run all pipeline tests any time there is some code change warranting at least one test, like we do for the examples. They take roughly the same time, and since the pipelines are a good integration test, I think it actually makes more sense to always test those than the examples.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19366/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19366",
"html_url": "https://github.com/huggingface/transformers/pull/19366",
"diff_url": "https://github.com/huggingface/transformers/pull/19366.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19366.patch",
"merged_at": 1665180119000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19365
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19365/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19365/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19365/events
|
https://github.com/huggingface/transformers/pull/19365
| 1,398,409,620
|
PR_kwDOCUB6oc5APzgc
| 19,365
|
Fix pipeline tests for Roberta-like tokenizers
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19365). All of your documentation changes will be reflected on that endpoint."
] | 1,665
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
The pipeline tests rely on an inheritance check for the (weird) behavior of Roberta-like models/tokenizers having a +2 / -2 offset on the position embeddings. This was working fine until someone decided to encourage the community to uncouple all configs, and now it breaks.
This PR fixes it.
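For context, the +2 offset mentioned above comes from Roberta-like position ids starting at `padding_idx + 1` instead of 0. Here is a minimal plain-Python sketch (not the actual transformers implementation, names are illustrative) of why a Roberta-like embedding table needs `max_position_embeddings = max_seq_len + 2`:

```python
# Illustrative sketch of Roberta-style position ids: padding positions get
# padding_idx, real tokens get consecutive ids starting at padding_idx + 1,
# so the position-embedding table needs two extra rows beyond max_seq_len.

PAD_TOKEN_ID = 1  # RoBERTa's conventional padding index

def create_position_ids(input_ids, padding_idx=PAD_TOKEN_ID):
    """Mimic RoBERTa-style position ids for a single sequence."""
    position_ids = []
    pos = padding_idx
    for tok in input_ids:
        if tok == padding_idx:
            # Padding tokens are all mapped to the padding position.
            position_ids.append(padding_idx)
        else:
            pos += 1
            position_ids.append(pos)
    return position_ids

max_seq_len = 512
# The largest position id a full-length (pad-free) sequence produces:
largest = create_position_ids([0] * max_seq_len)[-1]
print(largest)      # 513
print(largest + 1)  # 514 embedding rows needed, i.e. max_seq_len + 2
```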
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19365/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19365",
"html_url": "https://github.com/huggingface/transformers/pull/19365",
"diff_url": "https://github.com/huggingface/transformers/pull/19365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19365.patch",
"merged_at": 1665006494000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19364
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19364/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19364/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19364/events
|
https://github.com/huggingface/transformers/pull/19364
| 1,398,384,041
|
PR_kwDOCUB6oc5APuBM
| 19,364
|
Make `Camembert` TF version independent from `Roberta`
|
{
"login": "Mustapha-AJEGHRIR",
"id": 66799406,
"node_id": "MDQ6VXNlcjY2Nzk5NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/66799406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mustapha-AJEGHRIR",
"html_url": "https://github.com/Mustapha-AJEGHRIR",
"followers_url": "https://api.github.com/users/Mustapha-AJEGHRIR/followers",
"following_url": "https://api.github.com/users/Mustapha-AJEGHRIR/following{/other_user}",
"gists_url": "https://api.github.com/users/Mustapha-AJEGHRIR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mustapha-AJEGHRIR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mustapha-AJEGHRIR/subscriptions",
"organizations_url": "https://api.github.com/users/Mustapha-AJEGHRIR/orgs",
"repos_url": "https://api.github.com/users/Mustapha-AJEGHRIR/repos",
"events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger, I'm getting the following errors from circleci :\r\n```python\r\nFAILED tests/pipelines/test_pipelines_feature_extraction.py::FeatureExtractionPipelineTests::test_pt_LongformerConfig_LongformerModel_LongformerTokenizerFast_nofeature_extractor\r\nFAILED tests/pipelines/test_pipelines_feature_extraction.py::FeatureExtractionPipelineTests::test_pt_LongformerConfig_LongformerModel_LongformerTokenizer_nofeature_extractor\r\n```\r\nI don't see why there is `Longformer` on the error while I haven't touched it."
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
related to #19303
Making the Camembert model (`TensorFlow` version) independent of Roberta
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Camembert should not depend on Roberta
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19364/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19364",
"html_url": "https://github.com/huggingface/transformers/pull/19364",
"diff_url": "https://github.com/huggingface/transformers/pull/19364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19364.patch",
"merged_at": 1665164545000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19363
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19363/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19363/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19363/events
|
https://github.com/huggingface/transformers/pull/19363
| 1,398,350,187
|
PR_kwDOCUB6oc5APmyd
| 19,363
|
Fix DETR segmentation postprocessing output
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,665
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Ensures the `post_process_instance_segmentation` and `post_process_panoptic_segmentation` methods return a tensor of shape `(target_height, target_width)` filled with -1 values if no segment with score > threshold is found.
Applies the same changes as the MaskFormer postprocessing fix [PR](https://github.com/huggingface/transformers/pull/19354).
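The fallback behaviour described above can be sketched in plain Python (an illustration under assumed inputs, not the actual transformers code, which operates on tensors):

```python
# Sketch of the fixed fallback: when no predicted segment clears the score
# threshold, return a target_height x target_width map uniformly filled
# with -1 (the "no segment" label) instead of an empty/ill-shaped result.

def postprocess_segmentation(segments, target_height, target_width, threshold=0.5):
    """`segments` is assumed (for this sketch) to be a list of
    (score, mask) pairs, where `mask` is a target_height x target_width
    nested list of 0/1 values."""
    kept = [(score, mask) for score, mask in segments if score > threshold]
    # Start from the fallback: every pixel labeled -1.
    seg_map = [[-1] * target_width for _ in range(target_height)]
    for segment_id, (score, mask) in enumerate(kept):
        for y in range(target_height):
            for x in range(target_width):
                if mask[y][x]:
                    seg_map[y][x] = segment_id
    return seg_map

# With no confident segments, the map is uniformly -1:
empty = postprocess_segmentation([], target_height=2, target_width=3)
print(empty)  # [[-1, -1, -1], [-1, -1, -1]]
```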
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19363/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19363",
"html_url": "https://github.com/huggingface/transformers/pull/19363",
"diff_url": "https://github.com/huggingface/transformers/pull/19363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19363.patch",
"merged_at": 1665004597000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19362
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19362/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19362/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19362/events
|
https://github.com/huggingface/transformers/issues/19362
| 1,398,290,541
|
I_kwDOCUB6oc5TWDht
| 19,362
|
Misspelled docstring for ensure_valid_input function
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Looks wrong indeed. Would you like to open a PR to fix it?",
"@sgugger sure, I will take care of that tomorrow."
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```{code}
def ensure_valid_input(model, tokens, input_names):
"""
Ensure input are presented in the correct order, without any Non
Args:
model: The model used to forward the input data
tokens: BatchEncoding holding the input data
input_names: The name of the inputs
Returns: Tuple
"""
```
### Expected behavior
It is a really tiny tiny detail but when I was going through `convert_graph_to_onnx.py` file I noticed that there is a misspelled docstring for `ensure_valid_input` function. Namely, I believe that `"Ensure input are ..."` should be `"Ensure inputs are ...`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19362/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19361
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19361/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19361/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19361/events
|
https://github.com/huggingface/transformers/issues/19361
| 1,398,207,811
|
I_kwDOCUB6oc5TVvVD
| 19,361
|
Moving examples in docstrings of RobertaTokenizer and LongformerTokenizer to doc source files
|
{
"login": "srhrshr",
"id": 2330069,
"node_id": "MDQ6VXNlcjIzMzAwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2330069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srhrshr",
"html_url": "https://github.com/srhrshr",
"followers_url": "https://api.github.com/users/srhrshr/followers",
"following_url": "https://api.github.com/users/srhrshr/following{/other_user}",
"gists_url": "https://api.github.com/users/srhrshr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srhrshr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srhrshr/subscriptions",
"organizations_url": "https://api.github.com/users/srhrshr/orgs",
"repos_url": "https://api.github.com/users/srhrshr/repos",
"events_url": "https://api.github.com/users/srhrshr/events{/privacy}",
"received_events_url": "https://api.github.com/users/srhrshr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"#self-assign",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
### Motivation
When using the `# Copied from` mode of do-repeat-yourself, if there are interactive examples in the source file's docstring, e.g., [`[0, 31414, 232, 328, 2]` in RobertaTokenizer](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/roberta/tokenization_roberta.py#L118), then it is currently impossible for the destination file to have accurate interactive examples without having complicated replace patterns in the `copy-check` module. e.g., [`[0, 31414, 232, 2]` is the correct output of this example in `LongformerTokenizer`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longformer/tokenization_longformer.py#L127).
For now, to pass the copy check we have copied over the output line from the Roberta model to Longformer.
> The example should probably go in the doc source file instead, so we can copy without any worry. For now in this PR I'd leave it as the Roberta output as you did. Then we can do a follow-up PR where we change the doc source files for Roberta and Longformer and remove that example from the two docstrings (I can do it or you can, as you prefer!)
_Originally posted by @sgugger in https://github.com/huggingface/transformers/pull/19346#discussion_r988069210_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19361/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19360
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19360/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19360/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19360/events
|
https://github.com/huggingface/transformers/pull/19360
| 1,398,206,786
|
PR_kwDOCUB6oc5APH-n
| 19,360
|
Fix gather for metrics
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,665
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a failing test in the CI due to `gather_for_metrics` not receiving a tuple. This will soon be redundant with a change in Accelerate, but it's good to have this fix in now anyway with the right version, imo
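The shape of the fix can be sketched in plain Python (assumed behaviour for illustration, not Accelerate's real implementation): each element of the input is gathered across processes separately, so predictions and labels must be passed as one tuple.

```python
# Plain-Python sketch of why `gather_for_metrics` wants a tuple:
# (predictions, labels) are gathered element-wise across processes.

def gather(values, world_size=2):
    """Stand-in for a cross-process all-gather: replicate the local list."""
    return values * world_size

def gather_for_metrics(data):
    # Accept either a single sequence or a tuple of sequences.
    if isinstance(data, tuple):
        return tuple(gather(item) for item in data)
    return gather(data)

preds, labels = [0, 1], [1, 1]
g_preds, g_labels = gather_for_metrics((preds, labels))
print(g_preds, g_labels)  # [0, 1, 0, 1] [1, 1, 1, 1]
```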
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19360/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19360",
"html_url": "https://github.com/huggingface/transformers/pull/19360",
"diff_url": "https://github.com/huggingface/transformers/pull/19360.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19360.patch",
"merged_at": 1664995921000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19359
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19359/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19359/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19359/events
|
https://github.com/huggingface/transformers/pull/19359
| 1,398,114,337
|
PR_kwDOCUB6oc5AO0Hd
| 19,359
|
Make `XLMRoberta` model and config independent from `Roberta`
|
{
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger\r\nI ran `make style` and fixed your recommendations, but now there's an error in the `run_tests_torch`. I believe it's [this](https://app.circleci.com/pipelines/github/huggingface/transformers/48816/workflows/882fa75f-7b2f-45af-861c-1beb9881aeea/jobs/583177?invite=true#step-112-2911): \r\n```\r\nOSError: Rocketknight1/esm-2-8m is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n```\r\nI don't really know how to fix that or what caused it.. any tips?"
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes the dependencies of `XLMRobertaConfig` and everything inside `modeling_xlm_roberta.py`. This is related to issue #19303.
I only did this for the PyTorch model, as some models in the issue were marked "PyTorch + TF" and this was not one of them. I can add it for TensorFlow or Flax if needed!
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19359/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19359",
"html_url": "https://github.com/huggingface/transformers/pull/19359",
"diff_url": "https://github.com/huggingface/transformers/pull/19359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19359.patch",
"merged_at": 1665496602000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19358
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19358/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19358/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19358/events
|
https://github.com/huggingface/transformers/issues/19358
| 1,398,104,316
|
I_kwDOCUB6oc5TVWD8
| 19,358
|
setting max_new_tokens in text-generation pipeline with OPT produces error
|
{
"login": "gqfiddler",
"id": 1731695,
"node_id": "MDQ6VXNlcjE3MzE2OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1731695?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gqfiddler",
"html_url": "https://github.com/gqfiddler",
"followers_url": "https://api.github.com/users/gqfiddler/followers",
"following_url": "https://api.github.com/users/gqfiddler/following{/other_user}",
"gists_url": "https://api.github.com/users/gqfiddler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gqfiddler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gqfiddler/subscriptions",
"organizations_url": "https://api.github.com/users/gqfiddler/orgs",
"repos_url": "https://api.github.com/users/gqfiddler/repos",
"events_url": "https://api.github.com/users/gqfiddler/events{/privacy}",
"received_events_url": "https://api.github.com/users/gqfiddler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @gqfiddler 👋 -- thank you for raising this issue 👀 \r\n\r\n@Narsil this seems to be a problem between how `.generate()` expects the max length to be defined, and how the `text-generation` pipeline prepares the inputs. When `max_new_tokens` is passed outside the initialization, [this line](https://github.com/huggingface/transformers/blob/4dd784c32f76fb8285f205b94e2a6ebde731a1cd/src/transformers/pipelines/base.py#L1038) merges the two sets of sanitized arguments (from the initialization we have `max_length`, from the new kwargs we have `max_new_tokens`).\r\n\r\nTo fix this, we can either remove the `ValueError` from generate (but expose ourselves to weird errors) or add more logic to the pipelines e.g. to ignore `max_length` when `max_new_tokens` are is (which is not very pretty). WDYT?",
"Hey, thanks for the quick pickup on this! \r\n\r\n FWIW, in my opinion the existing error + message is exactly the right response for the case where the caller explicitly passes in a value for both ```max_length``` and ```max_new_tokens```. The problem I'm pointing out here is the case where the caller passes in a value for ```max_new_tokens``` and NOT for ```max_length``` (as recommended in the documentation) and the model still raises this error. It seems pretty unproblematic to me for the pipeline code to ignore ```max_length``` (i.e. set it to None) in this case, since the caller has made clear how they wish to limit the model output. It may not be especially pretty, but that sort of conditional default argument value is plenty common and easy to use, so long as it's documented (e.g., in the \"Normalize\" parameter for [sklearn's linear regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html), \"This parameter is ignored when fit_intercept is set to False.\")\r\n\r\nAt any rate, one way or another, the method recommended in the documentation should run without error. In theory the very most minimal fix would be just to clarify this in the documentation for ```max_new_tokens``` with a note like \"To use this parameter, you must set ```max_length``` to 0\"... but given that other autoregressive model pipelines handle this case without throwing an error, probably better to change the code here too.",
"Hi,\r\n\r\nThe first `max_length` comes from the use of `config.prefix` in `facebook/opt-125m`: https://huggingface.co/facebook/opt-125m/blob/main/config.json\r\n\r\nI don't think `prefix` is correctly used in this configuration. `prefix` is meant for XL variants models to have a large text input prompt because the output quality of the model without it is bad (this is old code I'm referring to very old conversations).\r\n\r\nShouldn't the prefix be added directly in the `tokenizer` itself ? (Like prepending every input ids with EOS regardless ?).\r\nThis would seem the most canonical way to handle that IMO.\r\n\r\nRegardless of this:\r\n\r\n- This is indeed a bug, the user never passed `max_length` so we shouldn't set it for him, but changing that means changing the `model.config` itself instead, which might also not be great. Since it's modifying an object outside of the pipelines control which make thing extremely indirect. Culprit line: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L101\r\n- We have to keep the prefix thing unfortunately because of backward compatibility, but it seems pretty bad to use it since it's highly shadowed behavior.\r\n\r\n\r\nEasy fixes for the example:\r\n\r\n- Define `max_new_tokens` in the instantation instead of call:\r\n```python\r\nfrom transformers import pipeline\r\n\r\ntest_generator = pipeline(\r\n \"text-generation\",\r\n model=\"facebook/opt-125m\",\r\n do_sample=True,\r\n max_new_tokens=200,\r\n)\r\n\r\nresponse = test_generator(\r\n \"Here's how this model responds to a test prompt:\",\r\n num_return_sequences=1,\r\n)\r\nprint(response[0][\"generated_text\"])\r\n```\r\n\r\n- Deactivate `max_length` manually:\r\n```python\r\nfrom transformers import pipeline\r\n\r\ntest_generator = pipeline(\r\n \"text-generation\",\r\n model=\"facebook/opt-125m\",\r\n do_sample=True,\r\n)\r\n\r\nresponse = test_generator(\r\n \"Here's how this model responds to a test prompt:\",\r\n 
num_return_sequences=1,\r\n max_length=None,\r\n max_new_tokens=200\r\n)\r\nprint(response[0][\"generated_text\"])\r\n```\r\n",
"@Narsil I see! Actually, OPT's tokenizer already adds the `prefix` (`\"</s>\"`, token id = 2) at tokenization time.\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/opt-125m\")\r\nprint(tokenizer.bos_token_id)\r\n# 2\r\nprint(tokenizer([\"This is a test\"]))\r\n# {'input_ids': [[2, 713, 16, 10, 1296]], 'attention_mask': [[1, 1, 1, 1, 1]]}\r\n``` \r\n\r\nLooking at the [tokenizer configuration](https://huggingface.co/facebook/opt-125m/blob/main/tokenizer_config.json), we see a `\"add_bos_token\": true`.\r\n\r\nBecause it also has a `config.prefix`, does this mean that the pipelines add another `</s>`? I suppose it is harmless and, for pipeline reasons, removing `config.prefix` would be fine. The problem is all other uses of `config.prefix` outside the huggingface universe, which we can't control (and thus we shouldn't touch it).\r\n\r\nCould we add an ad hoc pipeline exception, e.g [here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L59)? (OPT models would skip this `if` -> `prefix` is not set -> problem solved)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
### System Info
python 3.7.12
transformers 4.22.2
Google Vertex AI platform
### Who can help?
@LysandreJik
(Feel free to tag whoever owns OPT if that's not you! – it's not specified in the list)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
test_generator = pipeline(
"text-generation",
model="facebook/opt-125m",
do_sample=True,
    device=-1  # the original report used an undefined `device` variable; -1 = CPU, 0 = first GPU
)
response = test_generator(
"Here's how this model responds to a test prompt:",
max_new_tokens=200,
num_return_sequences=1,
)
print(response[0]['generated_text'])
```
### Expected behavior
This should generate text, but it produces this error:
ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
Meanwhile, the official documentation specifically recommends setting 'max_new_tokens' rather than 'max_length':
**max_length** (int, optional, defaults to model.config.max_length) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. In general, prefer the use of max_new_tokens, which ignores the number of tokens in the prompt.
**max_new_tokens** (int, optional) — The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt.
The problem can be worked around by manually setting max_length=None, but that should happen by default as it does with other autoregressive models. The same code runs without error if you swap out the OPT model for EleutherAI/gpt-neo-125M.
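The conflict is easiest to see in isolation. Below is a minimal, hypothetical sketch of the kwarg merging — not the actual pipeline code — showing how an init-time `max_length` survives into the call-time kwargs and trips the `ValueError`, and why explicitly passing `max_length=None` works around it:

```python
def merge_generate_kwargs(init_kwargs, call_kwargs):
    """Mimic how call-time kwargs are merged over init-time ones (illustrative only)."""
    merged = {**init_kwargs, **call_kwargs}
    if merged.get("max_length") is not None and merged.get("max_new_tokens") is not None:
        raise ValueError("Both `max_new_tokens` and `max_length` have been set ...")
    return merged


# An init-time max_length (e.g. injected by the model config / prefix handling)
# survives the merge, so passing only max_new_tokens at call time still fails:
try:
    merge_generate_kwargs({"max_length": 20}, {"max_new_tokens": 200})
except ValueError:
    print("conflict")  # this branch is taken

# Explicitly overriding max_length=None at call time clears it:
print(merge_generate_kwargs({"max_length": 20}, {"max_length": None, "max_new_tokens": 200}))
# {'max_length': None, 'max_new_tokens': 200}
```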
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19358/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19357
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19357/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19357/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19357/events
|
https://github.com/huggingface/transformers/issues/19357
| 1,398,082,702
|
I_kwDOCUB6oc5TVQyO
| 19,357
|
List of models/tasks failing ONNX inference
|
{
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.23.0.dev0
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.12
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@lewtun
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue was surfaced in https://github.com/huggingface/transformers/pull/19255 (see also https://github.com/huggingface/transformers/issues/19320)
Export any of the following model/task/framework parameterizations to ONNX:
- [ ] ("deberta-v2", "question-answering", "pt"),
- [ ] ("deberta-v2", "multiple-choice", "pt"),
- [ ] ("roformer", "multiple-choice", "pt"),
- [ ] ("groupvit", "default", "pt"),
- [ ] ("perceiver", "masked-lm", "pt"),
- [ ] ("perceiver", "sequence-classification", "pt"),
- [ ] ("perceiver", "image-classification", "pt"),
- [ ] ("bert", "multiple-choice", "tf"),
- [ ] ("camembert", "multiple-choice", "tf"),
- [ ] ("roberta", "multiple-choice", "tf"),
Errors are currently not detected at export time (although see https://github.com/huggingface/transformers/pull/19255). However, running inference on these exported models with any shape other than what they were validated with will fail.
### Expected behavior
* Errors are raised during ONNX export
* Inference runs as expected
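The failure mode described above — inference working only at the validation shapes — is typical of exports whose dynamic axes were not (or could not be) declared, leaving the graph specialized to the shapes used during export. A minimal, framework-free sketch of the shape check a runtime effectively performs (names illustrative, not the real ONNX Runtime API):

```python
def check_input_shape(graph_input_shape, feed_shape):
    """Fixed (non-dynamic) dims must match the exported graph exactly;
    dims declared dynamic at export time accept any size."""
    if len(graph_input_shape) != len(feed_shape):
        return False
    return all(g == "dynamic" or g == f for g, f in zip(graph_input_shape, feed_shape))


# Exported and validated with batch=2, seq=8 and no dynamic axes:
exported = [2, 8]
assert check_input_shape(exported, [2, 8])       # same shape: fine
assert not check_input_shape(exported, [1, 16])  # any other shape fails at inference

# Declaring dynamic axes at export time is what makes other shapes work:
exported_dynamic = ["dynamic", "dynamic"]
assert check_input_shape(exported_dynamic, [1, 16])
```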
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19357/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19356
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19356/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19356/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19356/events
|
https://github.com/huggingface/transformers/pull/19356
| 1,398,073,074
|
PR_kwDOCUB6oc5AOrSj
| 19,356
|
Removed `Bert` interdependency in `tokenization_electra.py`
|
{
"login": "OtherHorizon",
"id": 33909036,
"node_id": "MDQ6VXNlcjMzOTA5MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/33909036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OtherHorizon",
"html_url": "https://github.com/OtherHorizon",
"followers_url": "https://api.github.com/users/OtherHorizon/followers",
"following_url": "https://api.github.com/users/OtherHorizon/following{/other_user}",
"gists_url": "https://api.github.com/users/OtherHorizon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OtherHorizon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OtherHorizon/subscriptions",
"organizations_url": "https://api.github.com/users/OtherHorizon/orgs",
"repos_url": "https://api.github.com/users/OtherHorizon/repos",
"events_url": "https://api.github.com/users/OtherHorizon/events{/privacy}",
"received_events_url": "https://api.github.com/users/OtherHorizon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks a lot for working on this one! It's missing a few copied froms, I left suggestions. Once you're done, be sure to run `make style` on your branch (for code-formatting) and we should be good to merge!\r\n\r\n`make style` changes a lot of files.",
"Make sure you install the specific versions of black that we use with `pip install -e .[quality]`, it's probably because you don't ahve the same version as the one we use."
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Related to #19303
Removes `bert` dependency from `tokenization_electra.py`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19356/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19356",
"html_url": "https://github.com/huggingface/transformers/pull/19356",
"diff_url": "https://github.com/huggingface/transformers/pull/19356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19356.patch",
"merged_at": 1665159844000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19355
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19355/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19355/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19355/events
|
https://github.com/huggingface/transformers/pull/19355
| 1,398,052,565
|
PR_kwDOCUB6oc5AOm1V
| 19,355
|
Skip failing test while we resolve the issue.
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19355). All of your documentation changes will be reflected on that endpoint."
] | 1,664
| 1,664
| 1,664
|
COLLABORATOR
| null |
# What does this PR do?
Skipping some MaskFormer tests that are failing.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19355/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19355",
"html_url": "https://github.com/huggingface/transformers/pull/19355",
"diff_url": "https://github.com/huggingface/transformers/pull/19355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19355.patch",
"merged_at": 1664987029000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19354
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19354/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19354/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19354/events
|
https://github.com/huggingface/transformers/pull/19354
| 1,398,007,236
|
PR_kwDOCUB6oc5AOdKf
| 19,354
|
Fix MaskFormer failing postprocess tests
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like there is still a problem with the test."
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Ensures the `post_process_instance_segmentation` and `post_process_panoptic_segmentation` methods return a tensor of shape `(target_height, target_width)` filled with -1 values if no segment with score > threshold is found.
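The fallback path can be sketched without the real post-processing code. A hypothetical, framework-free version (plain lists instead of tensors; the "segments found" branch is elided):

```python
def postprocess_segmentation(segments, threshold, target_height, target_width):
    """Return a (target_height, target_width) label map.

    When no segment clears the score threshold, the whole map is filled
    with -1 (the "no segment" label), matching the behavior this PR adds.
    """
    kept = [s for s in segments if s["score"] > threshold]
    if not kept:
        return [[-1] * target_width for _ in range(target_height)]
    # The real code would paint each kept segment's mask into the label map here.
    raise NotImplementedError("illustrative sketch only")


seg_map = postprocess_segmentation([{"score": 0.2}], threshold=0.5,
                                   target_height=2, target_width=3)
print(seg_map)  # [[-1, -1, -1], [-1, -1, -1]]
```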
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19354/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19354",
"html_url": "https://github.com/huggingface/transformers/pull/19354",
"diff_url": "https://github.com/huggingface/transformers/pull/19354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19354.patch",
"merged_at": 1665001558000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19353
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19353/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19353/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19353/events
|
https://github.com/huggingface/transformers/issues/19353
| 1,398,000,277
|
I_kwDOCUB6oc5TU8qV
| 19,353
|
Very different results on inference between mps and cpu for same input
|
{
"login": "forrestdavis",
"id": 17956221,
"node_id": "MDQ6VXNlcjE3OTU2MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/17956221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forrestdavis",
"html_url": "https://github.com/forrestdavis",
"followers_url": "https://api.github.com/users/forrestdavis/followers",
"following_url": "https://api.github.com/users/forrestdavis/following{/other_user}",
"gists_url": "https://api.github.com/users/forrestdavis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forrestdavis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forrestdavis/subscriptions",
"organizations_url": "https://api.github.com/users/forrestdavis/orgs",
"repos_url": "https://api.github.com/users/forrestdavis/repos",
"events_url": "https://api.github.com/users/forrestdavis/events{/privacy}",
"received_events_url": "https://api.github.com/users/forrestdavis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Interesting! @pcuenca could you take a look here? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,670
| 1,670
|
NONE
| null |
### System Info
transformers version: 4.22.2
Platform: macOS-12.5-arm64-arm-64bit
Python version: 3.9.13
Huggingface_hub version: 0.10.0
PyTorch version (MPS?): 1.13.0.dev20221005 (True)
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am running the following code:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').to(torch.device('mps'))
inputs = tokenizer('the man who is tall',
return_tensors='pt').to(torch.device('mps'))
print(inputs)
outputs = model(**inputs).logits
print(outputs[0,0,:])
print()
print()
model = GPT2LMHeadModel.from_pretrained('gpt2').to(torch.device('cpu'))
inputs = tokenizer('the man who is tall',
return_tensors='pt').to(torch.device('cpu'))
print(inputs)
outputs = model(**inputs).logits
print(outputs[0,0,:])
```
What I see as output is:
```
{'input_ids': tensor([[1169, 582, 508, 318, 7331]], device='mps:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1]], device='mps:0')}
/opt/homebrew/Caskroom/miniforge/base/envs/mapi/lib/python3.9/site-packages/torch/_tensor_str.py:115: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
nonzero_finite_vals = torch.masked_select(
tensor([-14.9901, -14.6213, -17.5936, ..., -21.4744, -21.1240, -14.7532],
device='mps:0', grad_fn=<SliceBackward0>)
{'input_ids': tensor([[1169, 582, 508, 318, 7331]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])}
tensor([-33.1021, -31.8638, -35.0600, ..., -38.2193, -38.8318, -32.7428],
grad_fn=<SliceBackward0>)
```
### Expected behavior
I expect the output to be the same (or at least much closer). Am I missing something obvious here?
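To make "much closer" concrete, here is a minimal, framework-free sketch of the comparison I have in mind (the sample values are taken from the printed slices above; the helper name is my own, not a transformers API):

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two flat lists of logits."""
    return max(abs(x - y) for x, y in zip(a, b))

# First few logits from the CPU run vs. the MPS run above. A gap this large
# is far beyond float32 rounding noise, which points at a backend bug rather
# than ordinary numerical drift between devices.
cpu_logits = [-33.1021, -31.8638, -35.0600]
mps_logits = [-14.9901, -14.6213, -17.5936]
print(max_abs_diff(cpu_logits, mps_logits))
```

A tolerance of around 1e-3 on the absolute difference would be a reasonable "much closer" threshold for float32 logits on different backends.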
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19353/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19352
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19352/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19352/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19352/events
|
https://github.com/huggingface/transformers/issues/19352
| 1,397,954,586
|
I_kwDOCUB6oc5TUxga
| 19,352
|
bug with inputs_embeds for bert (tensorflow)
|
{
"login": "tomergur",
"id": 12786269,
"node_id": "MDQ6VXNlcjEyNzg2MjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/12786269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomergur",
"html_url": "https://github.com/tomergur",
"followers_url": "https://api.github.com/users/tomergur/followers",
"following_url": "https://api.github.com/users/tomergur/following{/other_user}",
"gists_url": "https://api.github.com/users/tomergur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomergur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomergur/subscriptions",
"organizations_url": "https://api.github.com/users/tomergur/orgs",
"repos_url": "https://api.github.com/users/tomergur/repos",
"events_url": "https://api.github.com/users/tomergur/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomergur/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @tomergur 👋 \r\n\r\nSadly, we have no power to prevent this issue. If you look at the traceback from a script like\r\n```python\r\nfrom transformers import BertConfig, TFBertForTokenClassification\r\nbert_conf = BertConfig(num_hidden_layers=2, num_labels=2)\r\nbert_model=TFBertForTokenClassification(bert_conf)\r\nbert_model(inputs_embeds=[[0, 1, 2]], training=True)\r\n``` \r\n\r\nyou'll see\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/joao/transformers/../joao_scripts/dbg.py\", line 4, in <module>\r\n bert_model(inputs_embeds=[[0, 1, 2]], training=True)\r\n File \"/home/joao/hf/lib/python3.10/site-packages/keras/utils/traceback_utils.py\", line 70, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/home/joao/hf/lib/python3.10/site-packages/keras/utils/layer_utils.py\", line 812, in split_out_first_arg\r\n raise ValueError(\r\nValueError: The first argument to `Layer.call` must always be passed.\r\n```\r\n\r\nIn other words, it crashes on `Keras` code before reaching `transformers` code. Passing `input_ids=None` or wrapping in a dictionary (as you mentioned) are the workarounds for this issue -- `Keras` expects the first input defined in the signature of `call` to always be provided.\r\n\r\n_______________________________\r\nI'm closing this issue, but feel free to reopen if you have other queries 🤗 "
] | 1,664
| 1,665
| 1,665
|
NONE
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use the following code for using inputs_embeds for TFBertForTokenClassification:
(text embedding is a tensor with shape (1, SEQ_LEN,768))
```python
bert_conf = BertConfig(num_hidden_layers=2, num_labels=2)
bert_model=TFBertForTokenClassification(bert_conf)
res = bert_model(inputs_embeds=text_emb, training=training)
```
but it throws the following exception
```
/lv_local/home/tomergur/convo_search_project/experiments/qpp/supervised/train_ragged_bert_qpp.py:71 call *
res = self.bert(inputs_embeds=text_emb, training=training)
/lv_local/home/tomergur/convo_search_project/csp_venv/lib/python3.8/site-packages/keras/engine/base_layer.py:967 __call__ **
inputs, args, kwargs = self._split_out_first_arg(args, kwargs)
/lv_local/home/tomergur/convo_search_project/csp_venv/lib/python3.8/site-packages/keras/engine/base_layer.py:3011 _split_out_first_arg
raise ValueError(
ValueError: The first argument to `Layer.call` must always be passed.
Process finished with exit code 1
```
while this snippet works:
```python
bert_conf = BertConfig(num_hidden_layers=2, num_labels=2)
bert_model=TFBertForTokenClassification(bert_conf)
# text embedding is a tensor with shape (1, SEQ_LEN,768)
inputs={'inputs_embeds':text_emb}
res = bert_model(inputs, training=training)
```
So, for some reason, there is a bug when using keyword arguments.
### Expected behavior
Make it possible to use `inputs_embeds` with keyword arguments.
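As the error originates in Keras rather than in transformers, the failure mode can be illustrated with a simplified, hypothetical sketch of the check that raises (the real logic lives in `keras/engine/base_layer.py`; names and details here are illustrative only):

```python
# Hedged sketch of why a keyword-only call fails: Keras splits out the first
# positional parameter of `call` (for BERT models that is `input_ids`), and
# raises if it was passed neither positionally nor as a keyword.
def split_out_first_arg(first_arg_name, args, kwargs):
    """Return the value bound to `call`'s first parameter, or raise like Keras does."""
    if args:
        return args[0]
    if first_arg_name in kwargs:
        return kwargs[first_arg_name]
    raise ValueError("The first argument to `Layer.call` must always be passed.")

# Fails: only inputs_embeds is given, so the first argument is missing.
try:
    split_out_first_arg("input_ids", (), {"inputs_embeds": [[0.1, 0.2]]})
except ValueError as e:
    print(e)

# Workarounds: also pass input_ids=None, or wrap everything in one dict and
# pass it positionally so it itself becomes the first argument.
assert split_out_first_arg("input_ids", (), {"input_ids": None, "inputs_embeds": [[0.1]]}) is None
```

This is why `bert_model({"inputs_embeds": text_emb})` works while `bert_model(inputs_embeds=text_emb)` does not.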
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19352/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19351
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19351/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19351/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19351/events
|
https://github.com/huggingface/transformers/pull/19351
| 1,397,902,723
|
PR_kwDOCUB6oc5AOGQf
| 19,351
|
Make LayoutLM tokenizers independent from BertTokenizer
|
{
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger The PR is ready for review :) "
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Decoupling `LayoutLMTokenizer` and `LayoutLMTokenizerFast` from `BertTokenizer`.
Since only a few class constants change between Bert and LayoutLM, there's a copy flag for every single method in the class.
I wonder whether prefixing the class constants could help reduce the amount of code, for instance:
`VOCAB_FILES_NAMES` -> `BERT_VOCAB_FILES_NAMES`
so that we could simply use `# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast with Bert->LayoutLM` on the entire tokenizer class, though I suspect there are good reasons not to do that.
Related to #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19351/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19351/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19351",
"html_url": "https://github.com/huggingface/transformers/pull/19351",
"diff_url": "https://github.com/huggingface/transformers/pull/19351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19351.patch",
"merged_at": 1665496164000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19350
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19350/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19350/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19350/events
|
https://github.com/huggingface/transformers/pull/19350
| 1,397,832,771
|
PR_kwDOCUB6oc5AN3O_
| 19,350
|
Adding type hints for TF TransfoXL
|
{
"login": "thliang01",
"id": 21286104,
"node_id": "MDQ6VXNlcjIxMjg2MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thliang01",
"html_url": "https://github.com/thliang01",
"followers_url": "https://api.github.com/users/thliang01/followers",
"following_url": "https://api.github.com/users/thliang01/following{/other_user}",
"gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thliang01/subscriptions",
"organizations_url": "https://api.github.com/users/thliang01/orgs",
"repos_url": "https://api.github.com/users/thliang01/repos",
"events_url": "https://api.github.com/users/thliang01/events{/privacy}",
"received_events_url": "https://api.github.com/users/thliang01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19350). All of your documentation changes will be reflected on that endpoint."
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
Based on Issue #16059
As the title suggests, this PR adds type hints to the TF TransfoXL model classes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19350/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19350",
"html_url": "https://github.com/huggingface/transformers/pull/19350",
"diff_url": "https://github.com/huggingface/transformers/pull/19350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19350.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19349
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19349/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19349/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19349/events
|
https://github.com/huggingface/transformers/pull/19349
| 1,397,817,373
|
PR_kwDOCUB6oc5ANz5r
| 19,349
|
Remove `Roberta` Interdependency from `tokenization_luke`
|
{
"login": "OtherHorizon",
"id": 33909036,
"node_id": "MDQ6VXNlcjMzOTA5MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/33909036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OtherHorizon",
"html_url": "https://github.com/OtherHorizon",
"followers_url": "https://api.github.com/users/OtherHorizon/followers",
"following_url": "https://api.github.com/users/OtherHorizon/following{/other_user}",
"gists_url": "https://api.github.com/users/OtherHorizon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OtherHorizon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OtherHorizon/subscriptions",
"organizations_url": "https://api.github.com/users/OtherHorizon/orgs",
"repos_url": "https://api.github.com/users/OtherHorizon/repos",
"events_url": "https://api.github.com/users/OtherHorizon/events{/privacy}",
"received_events_url": "https://api.github.com/users/OtherHorizon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> Thanks a lot for working on this one! Looking good for the properties added but the main init should retain the specificities of the Luke tokenizer, otherwise it will break it :-)\r\n\r\nHi,thanks for the super fast review. Will fix asap.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19349). All of your documentation changes will be reflected on that endpoint.",
"> I'm sorry if I was unclear in my previous comments. You still need to add the init of Roberta inside the init of Luke. It's just that you should not remove the extra code Luke adds in its init :-)\r\n> \r\n> Also be careful with the docstring changes, they should be reverted.\r\n\r\nworking on it",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
Related to #19303
Removes `Roberta` dependency from `tokenization_luke.py`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19349/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19349",
"html_url": "https://github.com/huggingface/transformers/pull/19349",
"diff_url": "https://github.com/huggingface/transformers/pull/19349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19349.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19348
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19348/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19348/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19348/events
|
https://github.com/huggingface/transformers/issues/19348
| 1,397,754,606
|
I_kwDOCUB6oc5TUAru
| 19,348
|
Adding type hints for TFXLnet #19344
|
{
"login": "thliang01",
"id": 21286104,
"node_id": "MDQ6VXNlcjIxMjg2MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thliang01",
"html_url": "https://github.com/thliang01",
"followers_url": "https://api.github.com/users/thliang01/followers",
"following_url": "https://api.github.com/users/thliang01/following{/other_user}",
"gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thliang01/subscriptions",
"organizations_url": "https://api.github.com/users/thliang01/orgs",
"repos_url": "https://api.github.com/users/thliang01/repos",
"events_url": "https://api.github.com/users/thliang01/events{/privacy}",
"received_events_url": "https://api.github.com/users/thliang01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19348/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19347
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19347/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19347/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19347/events
|
https://github.com/huggingface/transformers/pull/19347
| 1,397,750,608
|
PR_kwDOCUB6oc5ANla6
| 19,347
|
Making `ConvBert Tokenizer` independent from `bert Tokenizer`
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Done! @sgugger do i need to do same for convbert_fast?",
"@sgugger Thanks for quick feeback,\r\nFor tokenization_convbert.py i have change the comment to `# Copied from transformers.models.bert.tokenization_bert.BertTokenizer with ConvBertTokenizer->BertTokenizer`\r\n\r\nand for tokenization_convbert_fast.py i have change the comment to `# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast with ConvBertTokenizerFast->ConvBertTokenizer\r\n`\r\n\r\nfor convbert_fast `make repo-consistency` gives error : - src/transformers\\models\\convbert\\tokenization_convbert_fast.py: copy does not match models.bert.tokenization_bert_fast.BertTokenizerFast at line 55",
"You'll need to add broader patterns than just the full name of the tokenizer as BERT is used in the docstrings for instance. To see what the copy utils wants to modify, you can run `make fix-copies` locally :-)",
"After running `make fix-copies` it changes `slow_tokenizer_class = BertTokenizer` but it should be `ConvBertTokenizer` in tokenization_convbert_fast.py",
"Yes, that's why I made the suggestions above.",
"So, do i need to copy `BertTokenizer` and `BertTokenizerFast` class to tokenization_convbert_fast.py as after adding those classes it passes all tests",
"Just accept the suggestions above.",
"Done @sgugger are there any more changes?"
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #19303
Added `BertTokenizer` class in tokenization_convbert.py and `BertTokenizerFast` in tokenization_convbert_fast.py
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19347/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19347",
"html_url": "https://github.com/huggingface/transformers/pull/19347",
"diff_url": "https://github.com/huggingface/transformers/pull/19347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19347.patch",
"merged_at": 1665143943000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19346
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19346/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19346/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19346/events
|
https://github.com/huggingface/transformers/pull/19346
| 1,397,731,730
|
PR_kwDOCUB6oc5ANhXF
| 19,346
|
Frees LongformerTokenizer of the Roberta dependency
|
{
"login": "srhrshr",
"id": 2330069,
"node_id": "MDQ6VXNlcjIzMzAwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2330069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srhrshr",
"html_url": "https://github.com/srhrshr",
"followers_url": "https://api.github.com/users/srhrshr/followers",
"following_url": "https://api.github.com/users/srhrshr/following{/other_user}",
"gists_url": "https://api.github.com/users/srhrshr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srhrshr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srhrshr/subscriptions",
"organizations_url": "https://api.github.com/users/srhrshr/orgs",
"repos_url": "https://api.github.com/users/srhrshr/repos",
"events_url": "https://api.github.com/users/srhrshr/events{/privacy}",
"received_events_url": "https://api.github.com/users/srhrshr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19346). All of your documentation changes will be reflected on that endpoint."
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
@sgugger ,
Per the issue #19303, the Roberta tokenizer dependency has been removed from `LongformerTokenizer`.
Thanks for reviewing the PR :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19346/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19346",
"html_url": "https://github.com/huggingface/transformers/pull/19346",
"diff_url": "https://github.com/huggingface/transformers/pull/19346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19346.patch",
"merged_at": 1664984954000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19344
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19344/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19344/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19344/events
|
https://github.com/huggingface/transformers/pull/19344
| 1,397,705,193
|
PR_kwDOCUB6oc5ANbtT
| 19,344
|
Adding type hints for TFXLnet
|
{
"login": "thliang01",
"id": 21286104,
"node_id": "MDQ6VXNlcjIxMjg2MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thliang01",
"html_url": "https://github.com/thliang01",
"followers_url": "https://api.github.com/users/thliang01/followers",
"following_url": "https://api.github.com/users/thliang01/following{/other_user}",
"gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thliang01/subscriptions",
"organizations_url": "https://api.github.com/users/thliang01/orgs",
"repos_url": "https://api.github.com/users/thliang01/repos",
"events_url": "https://api.github.com/users/thliang01/events{/privacy}",
"received_events_url": "https://api.github.com/users/thliang01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Based on Issue #16059",
"Looks good to me, thanks!"
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
As the title suggests, this PR adds type hints to the TFXLnet model classes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19344/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19344",
"html_url": "https://github.com/huggingface/transformers/pull/19344",
"diff_url": "https://github.com/huggingface/transformers/pull/19344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19344.patch",
"merged_at": 1665746888000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19343
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19343/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19343/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19343/events
|
https://github.com/huggingface/transformers/pull/19343
| 1,397,693,967
|
PR_kwDOCUB6oc5ANZUX
| 19,343
|
Removes Roberta and Bert config dependencies from Longformer
|
{
"login": "srhrshr",
"id": 2330069,
"node_id": "MDQ6VXNlcjIzMzAwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2330069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srhrshr",
"html_url": "https://github.com/srhrshr",
"followers_url": "https://api.github.com/users/srhrshr/followers",
"following_url": "https://api.github.com/users/srhrshr/following{/other_user}",
"gists_url": "https://api.github.com/users/srhrshr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srhrshr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srhrshr/subscriptions",
"organizations_url": "https://api.github.com/users/srhrshr/orgs",
"repos_url": "https://api.github.com/users/srhrshr/repos",
"events_url": "https://api.github.com/users/srhrshr/events{/privacy}",
"received_events_url": "https://api.github.com/users/srhrshr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Done! Thanks for bearing with me @sgugger :) "
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
@sgugger ,
Per issue #19303, the Roberta and Bert config dependencies are removed from `LongformerConfig`, which now directly inherits from `PretrainedConfig`.
- `LongformerConfig` depended on `RobertaConfig`, which in turn inherits from `BertConfig`.
- So I've copied over the defaults (`pad_token_id`, `bos_token_id`, `eos_token_id`) from `RobertaConfig`, and the remaining defaults from `BertConfig` that do not conflict with Roberta.
- The docstrings from BertConfig are copied over as well.
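The decoupling described above can be sketched in miniature: instead of inheriting defaults through a chain of parent configs, they are inlined so the class depends only on the base config. All class and attribute names below are illustrative, not the actual `transformers` implementation.

```python
class BaseConfig:
    """Stand-in for PretrainedConfig: stores arbitrary keyword defaults."""

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)


class DecoupledLongformerConfig(BaseConfig):
    """Hypothetical config that inlines formerly inherited defaults."""

    def __init__(
        self,
        attention_window=512,
        vocab_size=30522,   # default formerly inherited from BertConfig
        hidden_size=768,    # default formerly inherited from BertConfig
        pad_token_id=1,     # default formerly inherited from RobertaConfig
        bos_token_id=0,
        eos_token_id=2,
        **kwargs,
    ):
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            **kwargs,
        )
        self.attention_window = attention_window
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size


config = DecoupledLongformerConfig()
print(config.pad_token_id)  # 1 -- Roberta-style default, now local to the class
```

The upside of this pattern is that reading one file tells you every default the model uses, at the cost of some duplication between configs.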
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19343/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19343",
"html_url": "https://github.com/huggingface/transformers/pull/19343",
"diff_url": "https://github.com/huggingface/transformers/pull/19343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19343.patch",
"merged_at": 1664992215000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19341
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19341/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19341/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19341/events
|
https://github.com/huggingface/transformers/pull/19341
| 1,397,657,586
|
PR_kwDOCUB6oc5ANRse
| 19,341
|
🚨 🚨 🚨 Fix ViT parameter initialization
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR! For such PRs, please ping both @sgugger and @amyeroberts who are better suited to do a review than I am. Thank you!",
"@alaradirik I believe setting eps in layernorm to 1e-6 rather than 1e-12 is also important as mentioned in https://github.com/huggingface/transformers/issues/19305 by @rwightman",
"Yes, @alaradirik, but I think for newer users, or anyone new to ViT or just setting up ViT to train, it would be much better if the default was set to 1e-6 rather than 1e-12 so that they don't have to relook for bugs and most of the time the eps value will not be the first thing they look for, or worst case last thing.",
"Also, shouldn't Kaiming initialization be used for the nn.Conv2d rather than .normal_() initialization in the class ViTPreTrainedModel or any class that directly inherits from PretrainedModel? And the biases of the nn.Conv2d in ViT should be initialized the same way as PyTorch? (https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv2d) @alaradirik @LysandreJik @NielsRogge @sgugger "
] | 1,664
| 1,668
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR aims to rectify the discrepancy between the training performances of HF and Timm ViT implementations.
- Initializes PyTorch and Flax ViT dense layer weights with `trunc_normal` instead of `normal` (consistent with the TF implementation).
- Initializes `cls_token` and positional embeddings with `trunc_normal`.
- Updates the DeiT copy.
Partially fixes [#19305](https://github.com/huggingface/transformers/issues/19305)
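The practical effect of switching from a plain normal to a truncated normal initializer is that sampled weights are bounded, so no extreme initial values land in the tails. A minimal pure-Python illustration of the idea (in PyTorch this would be `torch.nn.init.trunc_normal_`; the `trunc_normal` helper below is a hypothetical stand-in, not the library function):

```python
import random


def trunc_normal(std=0.02, a=-2.0, b=2.0):
    """Rejection-sample a truncated normal: re-draw until the value falls
    inside [a*std, b*std], mirroring the bounded-tail behaviour of a
    truncated-normal weight initializer."""
    while True:
        x = random.gauss(0.0, std)
        if a * std <= x <= b * std:
            return x


# With std=0.02 every sample is guaranteed to lie in [-0.04, 0.04],
# whereas random.gauss alone occasionally produces far larger values.
samples = [trunc_normal() for _ in range(1000)]
print(min(samples) >= -0.04 and max(samples) <= 0.04)  # True
```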
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
This issue was brought up [here](https://github.com/huggingface/transformers/issues/19305).
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19341/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19341/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19341",
"html_url": "https://github.com/huggingface/transformers/pull/19341",
"diff_url": "https://github.com/huggingface/transformers/pull/19341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19341.patch",
"merged_at": 1665047041000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19340
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19340/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19340/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19340/events
|
https://github.com/huggingface/transformers/issues/19340
| 1,397,651,583
|
I_kwDOCUB6oc5TTnh_
| 19,340
|
Roberta Gradient checkpointing to only layers, which requires grad
|
{
"login": "berkekisin",
"id": 70319815,
"node_id": "MDQ6VXNlcjcwMzE5ODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/70319815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/berkekisin",
"html_url": "https://github.com/berkekisin",
"followers_url": "https://api.github.com/users/berkekisin/followers",
"following_url": "https://api.github.com/users/berkekisin/following{/other_user}",
"gists_url": "https://api.github.com/users/berkekisin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/berkekisin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/berkekisin/subscriptions",
"organizations_url": "https://api.github.com/users/berkekisin/orgs",
"repos_url": "https://api.github.com/users/berkekisin/repos",
"events_url": "https://api.github.com/users/berkekisin/events{/privacy}",
"received_events_url": "https://api.github.com/users/berkekisin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
### Feature request
The current gradient checkpointing in Roberta applies checkpointing to all layers without checking whether they require gradients. In our use case we only train the last 3 layers of Roberta and want to use gradient checkpointing for those layers only.
### Motivation
Adding this would further decrease the memory usage on GPU.
### Your contribution
https://github.com/huggingface/transformers/blob/6268694e27f1fc0192ba24e4bec181061b4a9bf8/src/transformers/models/roberta/modeling_roberta.py#L497
at this line we can add another condition, so it would look like this:
`if self.gradient_checkpointing and self.training and layer_module.requires_grad`
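The proposed condition can be sketched as a toy encoder loop. Note that in real modeling code the check would need to inspect the layer's parameters (an `nn.Module` has no single `requires_grad` flag), so all names here are hypothetical stand-ins, and `run_checkpointed` stands in for `torch.utils.checkpoint.checkpoint`:

```python
class Layer:
    """Toy layer with a per-layer trainability flag (hypothetical)."""

    def __init__(self, requires_grad):
        self.requires_grad = requires_grad

    def __call__(self, x):
        return x + 1


def run_checkpointed(layer, x):
    # Stand-in for torch.utils.checkpoint.checkpoint(layer, x):
    # same output, but activations would be recomputed in backward.
    return layer(x)


def encoder_forward(layers, x, gradient_checkpointing=True, training=True):
    used_checkpointing = []
    for layer in layers:
        # The proposed extra condition: skip checkpointing for frozen layers.
        if gradient_checkpointing and training and layer.requires_grad:
            x = run_checkpointed(layer, x)
            used_checkpointing.append(True)
        else:
            x = layer(x)
            used_checkpointing.append(False)
    return x, used_checkpointing


# Only the last 3 of 12 layers are trainable, as in the use case above.
layers = [Layer(False) for _ in range(9)] + [Layer(True) for _ in range(3)]
out, flags = encoder_forward(layers, 0)
print(flags)  # checkpointing applied only to the last three layers
```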
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19340/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19339
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19339/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19339/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19339/events
|
https://github.com/huggingface/transformers/issues/19339
| 1,397,636,925
|
I_kwDOCUB6oc5TTj89
| 19,339
|
TypeError: Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend.
|
{
"login": "serdarildercaglar",
"id": 87153193,
"node_id": "MDQ6VXNlcjg3MTUzMTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/87153193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serdarildercaglar",
"html_url": "https://github.com/serdarildercaglar",
"followers_url": "https://api.github.com/users/serdarildercaglar/followers",
"following_url": "https://api.github.com/users/serdarildercaglar/following{/other_user}",
"gists_url": "https://api.github.com/users/serdarildercaglar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serdarildercaglar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serdarildercaglar/subscriptions",
"organizations_url": "https://api.github.com/users/serdarildercaglar/orgs",
"repos_url": "https://api.github.com/users/serdarildercaglar/repos",
"events_url": "https://api.github.com/users/serdarildercaglar/events{/privacy}",
"received_events_url": "https://api.github.com/users/serdarildercaglar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"As far as I can tell, this an issue in PyTorch not supporting an operation on MPS, so you should probably file your issue there.",
"I have overcome all the difficulties I have encountered so far with your solution suggestions about transformers. :) ",
"Has this issue been resolved?\r\n\r\nI'm running into the same problem. \r\n\r\n@forrestfaraday, can you share your solution with me?"
] | 1,664
| 1,673
| 1,664
|
NONE
| null |
### System Info
transformers version: 4.22.2
checkpoint: microsoft/deberta-v3-small
task: ner
error: `TypeError: Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend.`

### Who can help?
@sgugger
```
The following columns in the training set don't have a corresponding argument in `DebertaV2ForTokenClassification.forward` and have been ignored: ner_tags, tokens. If ner_tags, tokens are not expected by `DebertaV2ForTokenClassification.forward`, you can safely ignore this message.
***** Running training *****
  Num examples = 43943
  Num Epochs = 5
  Instantaneous batch size per device = 16
  Total train batch size (w. parallel, distributed & accumulation) = 16
  Gradient Accumulation steps = 1
  Total optimization steps = 13735
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [44], in <cell line: 1>()
----> 1 trainer.train()
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:1521, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1516 self.model_wrapped = self.model
1518 inner_training_loop = find_executable_batch_size(
1519 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1520 )
-> 1521 return inner_training_loop(
1522 args=args,
1523 resume_from_checkpoint=resume_from_checkpoint,
1524 trial=trial,
1525 ignore_keys_for_eval=ignore_keys_for_eval,
1526 )
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:1763, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1761 tr_loss_step = self.training_step(model, inputs)
1762 else:
-> 1763 tr_loss_step = self.training_step(model, inputs)
1765 if (
1766 args.logging_nan_inf_filter
1767 and not is_torch_tpu_available()
1768 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
1769 ):
1770 # if loss is nan or inf simply add the average of previous logged losses
1771 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:2499, in Trainer.training_step(self, model, inputs)
2496 return loss_mb.reduce_mean().detach().to(self.args.device)
2498 with self.compute_loss_context_manager():
-> 2499 loss = self.compute_loss(model, inputs)
2501 if self.args.n_gpu > 1:
2502 loss = loss.mean() # mean() to average on multi-gpu parallel training
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/trainer.py:2531, in Trainer.compute_loss(self, model, inputs, return_outputs)
2529 else:
2530 labels = None
-> 2531 outputs = model(**inputs)
2532 # Save past state if it exists
2533 # TODO: this needs to be fixed and made cleaner later.
2534 if self.args.past_index >= 0:
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:1444, in DebertaV2ForTokenClassification.forward(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1438 r"""
1439 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1440 Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
1441 """
1442 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1444 outputs = self.deberta(
1445 input_ids,
1446 attention_mask=attention_mask,
1447 token_type_ids=token_type_ids,
1448 position_ids=position_ids,
1449 inputs_embeds=inputs_embeds,
1450 output_attentions=output_attentions,
1451 output_hidden_states=output_hidden_states,
1452 return_dict=return_dict,
1453 )
1455 sequence_output = outputs[0]
1457 sequence_output = self.dropout(sequence_output)
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:1101, in DebertaV2Model.forward(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)
1091 token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
1093 embedding_output = self.embeddings(
1094 input_ids=input_ids,
1095 token_type_ids=token_type_ids,
(...)
1098 inputs_embeds=inputs_embeds,
1099 )
-> 1101 encoder_outputs = self.encoder(
1102 embedding_output,
1103 attention_mask,
1104 output_hidden_states=True,
1105 output_attentions=output_attentions,
1106 return_dict=return_dict,
1107 )
1108 encoded_layers = encoder_outputs[1]
1110 if self.z_steps > 1:
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:542, in DebertaV2Encoder.forward(self, hidden_states, attention_mask, output_hidden_states, output_attentions, query_states, relative_pos, return_dict)
533 output_states = torch.utils.checkpoint.checkpoint(
534 create_custom_forward(layer_module),
535 next_kv,
(...)
539 rel_embeddings,
540 )
541 else:
--> 542 output_states = layer_module(
543 next_kv,
544 attention_mask,
545 query_states=query_states,
546 relative_pos=relative_pos,
547 rel_embeddings=rel_embeddings,
548 output_attentions=output_attentions,
549 )
551 if output_attentions:
552 output_states, att_m = output_states
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:386, in DebertaV2Layer.forward(self, hidden_states, attention_mask, query_states, relative_pos, rel_embeddings, output_attentions)
377 def forward(
378 self,
379 hidden_states,
(...)
384 output_attentions=False,
385 ):
--> 386 attention_output = self.attention(
387 hidden_states,
388 attention_mask,
389 output_attentions=output_attentions,
390 query_states=query_states,
391 relative_pos=relative_pos,
392 rel_embeddings=rel_embeddings,
393 )
394 if output_attentions:
395 attention_output, att_matrix = attention_output
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:317, in DebertaV2Attention.forward(self, hidden_states, attention_mask, output_attentions, query_states, relative_pos, rel_embeddings)
308 def forward(
309 self,
310 hidden_states,
(...)
315 rel_embeddings=None,
316 ):
--> 317 self_output = self.self(
318 hidden_states,
319 attention_mask,
320 output_attentions,
321 query_states=query_states,
322 relative_pos=relative_pos,
323 rel_embeddings=rel_embeddings,
324 )
325 if output_attentions:
326 self_output, att_matrix = self_output
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:750, in DisentangledSelfAttention.forward(self, hidden_states, attention_mask, output_attentions, query_states, relative_pos, rel_embeddings)
748 if self.relative_attention:
749 rel_embeddings = self.pos_dropout(rel_embeddings)
--> 750 rel_att = self.disentangled_attention_bias(
751 query_layer, key_layer, relative_pos, rel_embeddings, scale_factor
752 )
754 if rel_att is not None:
755 attention_scores = attention_scores + rel_att
File ~/opt/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:845, in DisentangledSelfAttention.disentangled_attention_bias(self, query_layer, key_layer, relative_pos, rel_embeddings, scale_factor)
842 else:
843 r_pos = relative_pos
--> 845 p2c_pos = torch.clamp(-r_pos + att_span, 0, att_span * 2 - 1)
846 p2c_att = torch.bmm(key_layer, pos_query_layer.transpose(-1, -2))
847 p2c_att = torch.gather(
848 p2c_att,
849 dim=-1,
850 index=p2c_pos.squeeze(0).expand([query_layer.size(0), key_layer.size(-2), key_layer.size(-2)]),
851 ).transpose(-1, -2)
TypeError: Operation 'neg_out_mps()' does not support input type 'int64' in MPS backend.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.01,
    # load_best_model_at_end=True,
    # metric_for_best_model="eval_f1",
    overwrite_output_dir=True,
    gradient_accumulation_steps=1,
    gradient_checkpointing=False,
    use_mps_device=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_test,
    tokenizer=tokenizer,
    data_collator=data_collator,
    # compute_metrics=compute_metrics1,
)
```
### Expected behavior
to be able to train using Apple Silicon Chip
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19339/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19338
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19338/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19338/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19338/events
|
https://github.com/huggingface/transformers/pull/19338
| 1,397,626,759
|
PR_kwDOCUB6oc5ANLFB
| 19,338
|
Attempting to enable chunking for CTC (might not be viable).
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19338). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
Co-Authored-By: Sam Waterbury <sam.waterbury@scale.com>
# What does this PR do?
Attempts to recreate a different version of https://github.com/huggingface/transformers/pull/18949
I am under the impression that the overall approach is doomed not to work because of how MCTC models work.
However, this PR also enables some nice-to-have features.
This attempts to create the condition for MCTC models (like https://huggingface.co/speechbrain/m-ctc-t-large)
to work with the `chunk_length_s` argument.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19338/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19338",
"html_url": "https://github.com/huggingface/transformers/pull/19338",
"diff_url": "https://github.com/huggingface/transformers/pull/19338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19338.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19337
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19337/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19337/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19337/events
|
https://github.com/huggingface/transformers/pull/19337
| 1,397,582,621
|
PR_kwDOCUB6oc5ANBqM
| 19,337
|
Making `Camembert` independent from `Roberta`, clean
|
{
"login": "Mustapha-AJEGHRIR",
"id": 66799406,
"node_id": "MDQ6VXNlcjY2Nzk5NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/66799406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mustapha-AJEGHRIR",
"html_url": "https://github.com/Mustapha-AJEGHRIR",
"followers_url": "https://api.github.com/users/Mustapha-AJEGHRIR/followers",
"following_url": "https://api.github.com/users/Mustapha-AJEGHRIR/following{/other_user}",
"gists_url": "https://api.github.com/users/Mustapha-AJEGHRIR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mustapha-AJEGHRIR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mustapha-AJEGHRIR/subscriptions",
"organizations_url": "https://api.github.com/users/Mustapha-AJEGHRIR/orgs",
"repos_url": "https://api.github.com/users/Mustapha-AJEGHRIR/repos",
"events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
This pull request is a clean version of #19312
# What does this PR do?
related to #19303
Making the Camembert model (pytorch version) independent from Roberta
I have changed all the Camembert classes by copy-pasting from Roberta, making the small changes necessary for everything to work.
I'm still wondering how to change blocks like the one below. Do I have to add a specific checkpoint for Camembert?
```python
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
checkpoint="deepset/roberta-base-squad2",
output_type=QuestionAnsweringModelOutput,
config_class=_CONFIG_FOR_DOC,
expected_output="' puppet'",
expected_loss=0.86,
)
```
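One option for those blocks is to swap in a Camembert checkpoint and re-compute the expected output. Below is a minimal runnable sketch of that idea: `add_code_sample_docstrings` here is a toy stand-in, not the real transformers decorator, and the checkpoint name and expected output are illustrative assumptions.

```python
def add_code_sample_docstrings(**kwargs):
    """Toy stand-in for the real decorator: records the sample metadata on the class."""
    def wrapper(cls):
        cls._doc_sample = kwargs
        return cls
    return wrapper

@add_code_sample_docstrings(
    checkpoint="camembert-base",   # Camembert-specific checkpoint (assumption)
    expected_output="...",         # would need to be re-computed for this checkpoint
)
class CamembertForQuestionAnswering:
    pass

print(CamembertForQuestionAnswering._doc_sample["checkpoint"])  # camembert-base
```

The point is only that the `checkpoint` (and the associated expected output/loss) become Camembert-specific rather than reusing the Roberta ones.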
For testing, the following test works well :
```bash
$ RUN_SLOW=1 pytest tests/models/camembert/test_modeling_camembert.py
```
However, I have noticed that the tests only cover `CamembertModel`, not other classes like `CamembertForCausalLM` ...
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger related to #19303
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19337/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19337",
"html_url": "https://github.com/huggingface/transformers/pull/19337",
"diff_url": "https://github.com/huggingface/transformers/pull/19337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19337.patch",
"merged_at": 1664976693000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19336
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19336/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19336/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19336/events
|
https://github.com/huggingface/transformers/pull/19336
| 1,397,470,540
|
PR_kwDOCUB6oc5AMqPY
| 19,336
|
Change `BloomConfig` docstring
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19336). All of your documentation changes will be reflected on that endpoint."
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR makes small changes to parts of the `BloomConfig` docstring that might be slightly confusing.
Original discussion from: https://huggingface.co/bigscience/bloom/discussions/120
Thanks!
cc @sgugger @VictorSanh @SaulLu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19336/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19336",
"html_url": "https://github.com/huggingface/transformers/pull/19336",
"diff_url": "https://github.com/huggingface/transformers/pull/19336.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19336.patch",
"merged_at": 1664986333000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19335
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19335/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19335/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19335/events
|
https://github.com/huggingface/transformers/pull/19335
| 1,397,456,730
|
PR_kwDOCUB6oc5AMnY-
| 19,335
|
[tokenizers] Cache all-special-ids
|
{
"login": "yashneeva",
"id": 87332554,
"node_id": "MDQ6VXNlcjg3MzMyNTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/87332554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashneeva",
"html_url": "https://github.com/yashneeva",
"followers_url": "https://api.github.com/users/yashneeva/followers",
"following_url": "https://api.github.com/users/yashneeva/following{/other_user}",
"gists_url": "https://api.github.com/users/yashneeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yashneeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yashneeva/subscriptions",
"organizations_url": "https://api.github.com/users/yashneeva/orgs",
"repos_url": "https://api.github.com/users/yashneeva/repos",
"events_url": "https://api.github.com/users/yashneeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/yashneeva/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19335). All of your documentation changes will be reflected on that endpoint.",
"@SaulLu sorry for the delay, here's my follow-up to https://github.com/huggingface/transformers/pull/19018 where I fixed the test failures :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19335/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19335",
"html_url": "https://github.com/huggingface/transformers/pull/19335",
"diff_url": "https://github.com/huggingface/transformers/pull/19335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19335.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19333
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19333/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19333/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19333/events
|
https://github.com/huggingface/transformers/pull/19333
| 1,397,386,843
|
PR_kwDOCUB6oc5AMY1-
| 19,333
|
Added Type hints for XLM TF
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 Thanks for the feedback, I have updated the file."
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
Based on Issue #16059
I have added type hints for the TensorFlow XLM model.
@Rocketknight1 Could you kindly check if this is fine?
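As a rough illustration of the kind of annotation this PR adds (the function name, parameter names, and types below are assumptions for the sketch, not the actual `TFXLMModel.call` signature):

```python
from typing import Optional, Tuple, Union

def call(
    input_ids: Optional[Tuple[int, ...]] = None,
    attention_mask: Optional[Tuple[int, ...]] = None,
    training: bool = False,
) -> Union[Tuple, dict]:
    # Toy body so the sketch is runnable; the real method returns model outputs.
    return {"input_ids": input_ids, "training": training}

out = call(input_ids=(5, 6, 7))
print(out["training"])  # False
```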
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19333/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19333",
"html_url": "https://github.com/huggingface/transformers/pull/19333",
"diff_url": "https://github.com/huggingface/transformers/pull/19333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19333.patch",
"merged_at": 1665146691000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19332
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19332/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19332/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19332/events
|
https://github.com/huggingface/transformers/pull/19332
| 1,397,183,863
|
PR_kwDOCUB6oc5ALtQO
| 19,332
|
Remove bert interdependency from clip tokenizer
|
{
"login": "shyamsn97",
"id": 18158178,
"node_id": "MDQ6VXNlcjE4MTU4MTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/18158178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shyamsn97",
"html_url": "https://github.com/shyamsn97",
"followers_url": "https://api.github.com/users/shyamsn97/followers",
"following_url": "https://api.github.com/users/shyamsn97/following{/other_user}",
"gists_url": "https://api.github.com/users/shyamsn97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shyamsn97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shyamsn97/subscriptions",
"organizations_url": "https://api.github.com/users/shyamsn97/orgs",
"repos_url": "https://api.github.com/users/shyamsn97/repos",
"events_url": "https://api.github.com/users/shyamsn97/events{/privacy}",
"received_events_url": "https://api.github.com/users/shyamsn97/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,683
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Part of https://github.com/huggingface/transformers/issues/19303
Removing `transformers.models.bert.tokenization_bert.BasicTokenizer` from `transformers.models.clip.tokenization_clip`
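For reference, the `BasicTokenizer` behaviour being inlined is mostly whitespace splitting plus splitting punctuation into separate tokens. A minimal self-contained sketch of that logic (not the actual CLIP tokenizer code) looks like:

```python
import unicodedata

def whitespace_tokenize(text):
    """Strip and split on whitespace, as BasicTokenizer does."""
    text = text.strip()
    return text.split() if text else []

def split_on_punc(token):
    """Split punctuation characters into their own tokens."""
    out, current = [], []
    for ch in token:
        if unicodedata.category(ch).startswith("P"):  # punctuation categories
            if current:
                out.append("".join(current))
                current = []
            out.append(ch)
        else:
            current.append(ch)
    if current:
        out.append("".join(current))
    return out

tokens = [t for w in whitespace_tokenize("Hello, CLIP!") for t in split_on_punc(w)]
print(tokens)  # ['Hello', ',', 'CLIP', '!']
```

Keeping a local copy of this logic lets `tokenization_clip` evolve independently of the BERT module.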
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Tagging @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
The style + quality checks seemed to run smoothly, but let me know if I missed something!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19332/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19332",
"html_url": "https://github.com/huggingface/transformers/pull/19332",
"diff_url": "https://github.com/huggingface/transformers/pull/19332.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19332.patch",
"merged_at": 1664975714000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19331
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19331/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19331/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19331/events
|
https://github.com/huggingface/transformers/pull/19331
| 1,397,098,148
|
PR_kwDOCUB6oc5ALaX6
| 19,331
|
Removed interdependency of BERT's Tokenizer in tokenization of prophetnet
|
{
"login": "divyanshugit",
"id": 53843818,
"node_id": "MDQ6VXNlcjUzODQzODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53843818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyanshugit",
"html_url": "https://github.com/divyanshugit",
"followers_url": "https://api.github.com/users/divyanshugit/followers",
"following_url": "https://api.github.com/users/divyanshugit/following{/other_user}",
"gists_url": "https://api.github.com/users/divyanshugit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyanshugit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyanshugit/subscriptions",
"organizations_url": "https://api.github.com/users/divyanshugit/orgs",
"repos_url": "https://api.github.com/users/divyanshugit/repos",
"events_url": "https://api.github.com/users/divyanshugit/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyanshugit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes BERT dependency from the ProphetNet tokenizer file.
Fixes a part of [#19303](https://github.com/huggingface/transformers/issues/19303)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19331/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19331",
"html_url": "https://github.com/huggingface/transformers/pull/19331",
"diff_url": "https://github.com/huggingface/transformers/pull/19331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19331.patch",
"merged_at": 1664975568000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19330
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19330/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19330/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19330/events
|
https://github.com/huggingface/transformers/pull/19330
| 1,397,051,987
|
PR_kwDOCUB6oc5ALQx_
| 19,330
|
[WIP]remove XLMTokenizer inheritance from FlaubertTokenizer
|
{
"login": "D3xter1922",
"id": 59790120,
"node_id": "MDQ6VXNlcjU5NzkwMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/59790120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D3xter1922",
"html_url": "https://github.com/D3xter1922",
"followers_url": "https://api.github.com/users/D3xter1922/followers",
"following_url": "https://api.github.com/users/D3xter1922/following{/other_user}",
"gists_url": "https://api.github.com/users/D3xter1922/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D3xter1922/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D3xter1922/subscriptions",
"organizations_url": "https://api.github.com/users/D3xter1922/orgs",
"repos_url": "https://api.github.com/users/D3xter1922/repos",
"events_url": "https://api.github.com/users/D3xter1922/events{/privacy}",
"received_events_url": "https://api.github.com/users/D3xter1922/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Related to #19303
Removes `XLMTokenizer` inheritance from `FlaubertTokenizer`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
# Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19330/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19330",
"html_url": "https://github.com/huggingface/transformers/pull/19330",
"diff_url": "https://github.com/huggingface/transformers/pull/19330.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19330.patch",
"merged_at": 1664975944000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19329
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19329/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19329/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19329/events
|
https://github.com/huggingface/transformers/issues/19329
| 1,397,015,259
|
I_kwDOCUB6oc5TRMLb
| 19,329
|
High CER when Fine Tuning TrOCR Transformers to make an OCR for arabic Language
|
{
"login": "DjouadaFarouk",
"id": 24492403,
"node_id": "MDQ6VXNlcjI0NDkyNDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/24492403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DjouadaFarouk",
"html_url": "https://github.com/DjouadaFarouk",
"followers_url": "https://api.github.com/users/DjouadaFarouk/followers",
"following_url": "https://api.github.com/users/DjouadaFarouk/following{/other_user}",
"gists_url": "https://api.github.com/users/DjouadaFarouk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DjouadaFarouk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DjouadaFarouk/subscriptions",
"organizations_url": "https://api.github.com/users/DjouadaFarouk/orgs",
"repos_url": "https://api.github.com/users/DjouadaFarouk/repos",
"events_url": "https://api.github.com/users/DjouadaFarouk/events{/privacy}",
"received_events_url": "https://api.github.com/users/DjouadaFarouk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"Hi,\r\n\r\nI would first try to overfit a single batch, as explained in [Karpathy's blog post](http://karpathy.github.io/2019/04/25/recipe/). This makes sure everything is set up properly.\r\n\r\nSeveral folks have shown to successfully fine-tune TrOCR from pre-trained encoder + decoder checkpoints, e.g. for Japanese: https://huggingface.co/spaces/Detomo/Japanese-OCR (or this repo: https://github.com/kha-white/manga-ocr).\r\n\r\nSee also this thread for more info: https://github.com/microsoft/unilm/issues/627.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same problem with the Bengali language, as the CER is around 87%-89%. @DjouadaFarouk did your issue get solved?",
"I got cer of 0.28 on Arabic using trocr small by using an Arabic tokenizer and initializing the embedding layer and fc layer in the decoder from scratch, I would also recommend starting the positional encoding layer in the decoder from scratch if you started from trocr small weights. ",
"> I got cer of 0.28 on Arabic using trocr small by using an Arabic tokenizer and initializing the embedding layer and fc layer in the decoder from scratch, I would also recommend starting the positional encoding layer in the decoder from scratch if you started from trocr small weights.\r\n\r\nHi there, can you share your code to accomplish that? i'm also trying to replace the tokenizer on the pretrained trocr-small or trocr-base?",
"processor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-handwritten\")\r\n\r\ntokenizer2 = AutoTokenizer.from_pretrained(\"aubmindlab/bert-base-arabertv2\")\r\nprocessor.tokenizer = tokenizer2\r\n\r\ntrain = Data_boxes(path, 'images/train/', 'labels/train', 2, processor = processor)\r\ntest = Data_boxes(path,'images/validation/', 'labels/validation', 2, processor = processor)\r\n\r\n#model defining\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"microsoft/trocr-small-stage1\")\r\n\r\n# set special tokens used for creating the decoder_input_ids from the labels\r\nmodel.config.decoder_start_token_id = processor.tokenizer.cls_token_id\r\nmodel.config.pad_token_id = processor.tokenizer.pad_token_id\r\n# make sure vocab size is set correctly\r\n#model.config.vocab_size = model.config.decoder.vocab_size\r\n\r\nmodel.decoder.config.vocab_size = processor.tokenizer.vocab_size\r\nmodel.config.vocab_size = model.config.decoder.vocab_size\r\nmodel.decoder.output_projection = nn.Linear(256, processor.tokenizer.vocab_size)\r\nmodel.decoder.model.decoder.embed_tokens = nn.Embedding(processor.tokenizer.vocab_size, 256, padding_idx=1)\r\n\r\n# set beam search parameters\r\nmodel.config.eos_token_id = processor.tokenizer.sep_token_id\r\nmodel.config.max_length = 64\r\nmodel.config.early_stopping = True\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.config.length_penalty = 2.0\r\nmodel.config.num_beams = 4",
"hope this helps get you more intuitions.",
"``Hi there, I was following a similar approach, but I have problems to load the model for inference. During training model works very well, i have even a CER of 0.08 which is a good averall CER and after each epoch I made an extra inference on one image and at the end the transcripcion is a good one, but maybe I don't have the good code to save or reload the model, can you share your save and load code for a model training in this fashion, for instans this is the code I'm using: \r\n\r\n```python\r\nprocessor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-handwritten\")\r\nprocessor.tokenizer = AutoTokenizer.from_pretrained(\"roberta-latin-scratch/checkpoint-400000\") #my own roberta model\r\nprocessor.save_pretrained('./processor')\r\n\r\n#other code to load and transform GT\r\n\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"microsoft/trocr-base-handwritten\")\r\n\r\n# set special tokens used for creating the decoder_input_ids from the labels\r\nmodel.config.decoder_start_token_id = processor.tokenizer.cls_token_id\r\nmodel.config.pad_token_id = processor.tokenizer.pad_token_id\r\n\r\nmodel.decoder.config.vocab_size = processor.tokenizer.vocab_size\r\nmodel.config.vocab_size = model.config.decoder.vocab_size\r\nmodel.decoder.output_projection = nn.Linear(1024, processor.tokenizer.vocab_size)\r\nmodel.decoder.model.decoder.embed_tokens = nn.Embedding(processor.tokenizer.vocab_size, 1024, padding_idx=1)\r\n\r\n# set beam search parameters\r\nmodel.config.eos_token_id = processor.tokenizer.sep_token_id\r\nmodel.config.max_length = 200\r\nmodel.config.early_stopping = True\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.config.length_penalty = 2.0\r\nmodel.config.num_beams = 2\r\n\r\ntrainer.train()\r\ntrainer.save_model(\"trainer_trocr/checkpoints\")\r\nmodel.save_pretrained(\"trainer_trocr/final_model\")\r\n\r\n```\r\n\r\nSo I save the processor, the checkpoints and the final model, after that to load the model for inference there are two 
options: \r\n\r\n```python\r\np = 'linea.png'\r\nprocessor = TrOCRProcessor.from_pretrained(\"./processor\")\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"trainer_trocr/final_model\")\r\n```\r\nor making like that: \r\n\r\n```python\r\np = 'linea.png'\r\n\r\nprocessor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-handwritten\")\r\nprocessor.tokenizer = AutoTokenizer.from_pretrained(\"roberta-latin-scratch/checkpoint-400000\")\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"trainer_trocr/final_model\")\r\n\r\n```\r\n\r\nIn both cases the output is absurd, in the first I get something like that: [\"le is is is lele one, etc.], using the second one I get a series of trigramms line that: [\"the the the of of of one one one, etc.\"]. Do you have some suggestions about all that? \r\n",
"2 things you have to do:\r\n\r\n\r\n\r\n##first \r\nYou have to to do same updates as training before loading the weights for inference:\r\n\r\n```\r\n# set special tokens used for creating the decoder_input_ids from the labels\r\nmodel.config.decoder_start_token_id = processor.tokenizer.cls_token_id\r\nmodel.config.pad_token_id = processor.tokenizer.pad_token_id\r\n\r\nmodel.decoder.config.vocab_size = processor.tokenizer.vocab_size\r\nmodel.config.vocab_size = model.config.decoder.vocab_size\r\nmodel.decoder.output_projection = nn.Linear(1024, processor.tokenizer.vocab_size)\r\nmodel.decoder.model.decoder.embed_tokens = nn.Embedding(processor.tokenizer.vocab_size, 1024, padding_idx=1)\r\n\r\n# set beam search parameters\r\nmodel.config.eos_token_id = processor.tokenizer.sep_token_id\r\nmodel.config.max_length = 200\r\nmodel.config.early_stopping = True\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.config.length_penalty = 2.0\r\nmodel.config.num_beams = 2\r\n```\r\n\r\n##Second thing I load weights using this:\r\n```\r\nmodel = model.from_pretrained(\"/path/to/checkpoint/pytorch_model.bin\")\r\n```",
"I have a question: do the tokenizer and image processor get updated during training? I don't think they do.",
"No the tokenizer and image processor are static things. Only the model weights get updated during training.\r\n\r\n@magistermilitum I would recommend fitting the model on a single example to see where the issue happens. Do you make sure you prepare the image + text for the model in the same way during training vs inference? ",
"I faced the same problem before, I think @magistermilitum forgot to change the special tokens before loading the weights.",
"Thanks for your answers, I was actually updating the model during inferences like this: \r\n\r\n```python\r\nfrom transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer\r\nfrom transformers import AutoModel\r\nfrom PIL import Image\r\nimport torch\r\nimport torch.nn as nn\r\n\r\np = 'linea.png'\r\n\r\nprocessor = TrOCRProcessor.from_pretrained(\"./processor\")\r\n\r\nmodel = VisionEncoderDecoderModel.from_pretrained(\"trainer_trocr/checkpoint-15900\")\r\n\r\n#Upading model parameters\r\nmodel.config.decoder_start_token_id = processor.tokenizer.cls_token_id\r\nmodel.config.pad_token_id = processor.tokenizer.pad_token_id\r\nmodel.decoder.config.vocab_size = processor.tokenizer.vocab_size\r\nmodel.config.vocab_size = model.config.decoder.vocab_size\r\nmodel.decoder.output_projection = nn.Linear(1024, processor.tokenizer.vocab_size)\r\nmodel.decoder.model.decoder.embed_tokens = nn.Embedding(processor.tokenizer.vocab_size, 1024, padding_idx=1)\r\n\r\nmodel.config.decoder.activation_function=\"gelu\"\r\nmodel.config.decoder.layernorm_embedding=True\r\nmodel.config.decoder.max_position_embeddings=512\r\nmodel.config.decoder.scale_embedding=False\r\nmodel.config.decoder.use_learned_position_embeddings=True\r\n\r\n# set beam search parameters\r\nmodel.config.eos_token_id = processor.tokenizer.sep_token_id\r\nmodel.config.max_length = 200\r\nmodel.config.early_stopping = True\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.config.length_penalty = 2.0\r\nmodel.config.num_beams = 2\r\n\r\n#inference\r\nimage = Image.open(p)\r\nimage_rgb = image.convert('RGB')\r\n\r\npixels = processor(image_rgb, return_tensors=\"pt\").pixel_values\r\ngenerated_ids = model.generate(pixels)\r\ngenerated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(generated_text)\r\n```\r\n\r\nBut in this way I get a list of repeted trigrams as inference. So maybe is the manner in what I'm loading the model. 
What is exactly the code are you using to load the .bin file? AutoModel.from_pretrained?\r\n\r\n```python\r\nfrom transformers import AutoModel\r\nmodel = AutoModel.from_pretrained(\"trainer_trocr/checkpoint-15900/\")\r\n```",
"try to use this:\r\n\r\n```\r\nmodel = model.from_pretrained(\"/path/to/checkpoint/pytorch_model.bin\")\r\n```",
"Yeah, I mean, \"model.from_pretrained\" is not a native transformers function, so are you using a specific one or maybe the general \"AutoModel.from_pretrained\"?",
"I mean try to load the binary file directly \"pytorch_model.bin\"",
"ok, thanks, I have working code for inference: after updating the model parameters we simply load the weights from the .bin file and update the TrOCR model, since I modified the config architecture during training:\r\n\r\n```python\r\nmodel = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-handwritten')\r\n\r\n# Other code updating the TrOCR architecture\r\n\r\nstate_dict = torch.load(\"trainer_trocr/checkpoint-15900/pytorch_model.bin\")\r\nmodel.load_state_dict(state_dict)\r\n\r\n```\r\n\r\n",
"I have a question. For the tokenizer, did you make a new one with only the words in your new vocabulary? For example I am using TrOCR to finetune on my own dataset. The tokens are completely different from the pretrained vocabulary. Should I make my own tokenizer with only my new vocabulary? Should I extend the existing vocabulary with my new vocabulary. This part is really confusing ",
"> I have a question. For the tokenizer, did you make a new one with only the words in your new vocabulary? For example I am using TrOCR to finetune on my own dataset. The tokens are completely different from the pretrained vocabulary. Should I make my own tokenizer with only my new vocabulary? Should I extend the existing vocabulary with my new vocabulary. This part is really confusing\r\n\r\nRegarding this issue, there are many directions if you have just some characters that are not included in the tokenizer just add them, but if you completely have a new vocab and can't find any pretrained tokenizer on hugging face then it is better to train your own tokenizer to find the best combination of subwords. There is a tutorial on this on hugging face. "
] | 1,664
| 1,706
| 1,667
|
NONE
| null |
Hello everyone,
I need help/guidance on creating an Arabic OCR model using Transformers. I'm using ViT as the encoder and AraBERT as the decoder, with their pretrained weights, like this:
```
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
"google/vit-base-patch16-224-in21k", "aubmindlab/bert-base-arabertv02"
)
```
I followed this [tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb) by [NielsRogge](https://github.com/NielsRogge) (thank you Niels). My issue is that after training I get poor results: the character error rate is higher than 70%.
I've tried different encoders, decoders, epoch counts, and learning rates, but I don't know what I'm missing to make it work.
Step | Training Loss | Validation Loss | CER
-- | -- | -- | --
200 | 4.380500 | 4.674773 | 0.693091
400 | 4.213600 | 4.367142 | 0.777522
600 | 4.257300 | 4.247403 | 0.756554
800 | 3.277700 | 4.185711 | 0.712005
1000 | 2.831100 | 4.275863 | 0.748850
1200 | 2.250200 | 4.342288 | 0.788509
1400 | 2.237800 | 4.589494 | 0.768880
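For reference, the CER values above are the character-level edit distance between prediction and reference, normalized by reference length. A minimal sketch of computing such a metric (a pure-Python Levenshtein distance; real evaluation code would typically use a metric library instead):

```python
def cer(prediction: str, reference: str) -> float:
    """Character error rate: Levenshtein edit distance / reference length."""
    m, n = len(prediction), len(reference)
    # prev[j] holds the edit distance between prediction[:i-1] and reference[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if prediction[i - 1] == reference[j - 1] else 1
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + cost)   # substitution
        prev = cur
    return prev[n] / max(n, 1)

print(cer("kitten", "sitting"))  # 3 edits over 7 reference chars
```

A CER above 0.7 means most characters are wrong, which usually points to a setup problem (e.g. the decoder's special tokens or embeddings) rather than just under-training.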
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19329/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19328
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19328/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19328/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19328/events
|
https://github.com/huggingface/transformers/pull/19328
| 1,396,838,605
|
PR_kwDOCUB6oc5AKi55
| 19,328
|
Code refactor.
|
{
"login": "Kotmin",
"id": 70173732,
"node_id": "MDQ6VXNlcjcwMTczNzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/70173732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kotmin",
"html_url": "https://github.com/Kotmin",
"followers_url": "https://api.github.com/users/Kotmin/followers",
"following_url": "https://api.github.com/users/Kotmin/following{/other_user}",
"gists_url": "https://api.github.com/users/Kotmin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kotmin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kotmin/subscriptions",
"organizations_url": "https://api.github.com/users/Kotmin/orgs",
"repos_url": "https://api.github.com/users/Kotmin/repos",
"events_url": "https://api.github.com/users/Kotmin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kotmin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19328). All of your documentation changes will be reflected on that endpoint.",
"Huh? Changing the `elif` to `if` changes what the code does.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
# What does this PR do?
Fixes # (issue)
Replaces `elif` with `if` for readability, cleaner code, and PEP 8 principles.
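Note that `elif` and a bare `if` are only interchangeable when the earlier branch returns, raises, or is mutually exclusive with the later condition; otherwise the rewrite changes behavior. A small, self-contained illustration of when the two differ:

```python
def classify_elif(x):
    if x > 0:
        label = "positive"
    elif x >= 0:          # only reached when x <= 0
        label = "zero"
    else:
        label = "negative"
    return label

def classify_if(x):
    # Same branches with `elif` replaced by an independent `if`:
    label = "negative"
    if x > 0:
        label = "positive"
    if x >= 0:            # now ALSO runs (and overwrites) when x > 0
        label = "zero"
    return label

print(classify_elif(5))  # positive
print(classify_if(5))    # zero -- the rewrite changed the result
```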
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@patrickvonplaten
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19328/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19328",
"html_url": "https://github.com/huggingface/transformers/pull/19328",
"diff_url": "https://github.com/huggingface/transformers/pull/19328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19328.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19327
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19327/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19327/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19327/events
|
https://github.com/huggingface/transformers/pull/19327
| 1,396,823,994
|
PR_kwDOCUB6oc5AKfwi
| 19,327
|
Remove interdependency from OpenAI tokenizer
|
{
"login": "E-Aho",
"id": 46936677,
"node_id": "MDQ6VXNlcjQ2OTM2Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/46936677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/E-Aho",
"html_url": "https://github.com/E-Aho",
"followers_url": "https://api.github.com/users/E-Aho/followers",
"following_url": "https://api.github.com/users/E-Aho/following{/other_user}",
"gists_url": "https://api.github.com/users/E-Aho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/E-Aho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/E-Aho/subscriptions",
"organizations_url": "https://api.github.com/users/E-Aho/orgs",
"repos_url": "https://api.github.com/users/E-Aho/repos",
"events_url": "https://api.github.com/users/E-Aho/events{/privacy}",
"received_events_url": "https://api.github.com/users/E-Aho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Removes BERT dependency from the OpenAI tokenizer file.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Resolves OpenAI task in this issue](https://github.com/huggingface/transformers/issues/19303)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Pinging @sgugger as requested :)
Black and `make fix-copies` both seem happy with the changes, but I also saw a few other issues coming up in other places in the repo I didn't touch, so I might not have configured something the right way. Let me know if it looks OK!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19327/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19327",
"html_url": "https://github.com/huggingface/transformers/pull/19327",
"diff_url": "https://github.com/huggingface/transformers/pull/19327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19327.patch",
"merged_at": 1664920316000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19326
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19326/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19326/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19326/events
|
https://github.com/huggingface/transformers/pull/19326
| 1,396,810,264
|
PR_kwDOCUB6oc5AKc02
| 19,326
|
removing XLMConfig inheritance from FlaubertConfig
|
{
"login": "D3xter1922",
"id": 59790120,
"node_id": "MDQ6VXNlcjU5NzkwMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/59790120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D3xter1922",
"html_url": "https://github.com/D3xter1922",
"followers_url": "https://api.github.com/users/D3xter1922/followers",
"following_url": "https://api.github.com/users/D3xter1922/following{/other_user}",
"gists_url": "https://api.github.com/users/D3xter1922/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D3xter1922/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D3xter1922/subscriptions",
"organizations_url": "https://api.github.com/users/D3xter1922/orgs",
"repos_url": "https://api.github.com/users/D3xter1922/repos",
"events_url": "https://api.github.com/users/D3xter1922/events{/privacy}",
"received_events_url": "https://api.github.com/users/D3xter1922/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you just put the PR as ready for review and remove WIP from the title? I can't merge draft PRs :-)",
"> Can you just put the PR as ready for review and remove WIP from the title? I can't merge draft PRs :-)\r\n\r\nDone. Thanks:)",
"Thanks for your contribution!"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
related to #19303
Removes `XLMConfig` dependency from `FlaubertConfig`
the `__init__` from `FlaubertConfig` differs from `XLMConfig` in the following ways:
- `pre_norm` and `layerdrop` are specific to `FlaubertConfig`, so I have not added a `# Copied from ...` comment.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19326/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19326",
"html_url": "https://github.com/huggingface/transformers/pull/19326",
"diff_url": "https://github.com/huggingface/transformers/pull/19326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19326.patch",
"merged_at": 1664926788000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19325
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19325/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19325/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19325/events
|
https://github.com/huggingface/transformers/pull/19325
| 1,396,761,942
|
PR_kwDOCUB6oc5AKSWJ
| 19,325
|
Use a dynamic configuration for circleCI tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Feel free to merge whenever ready"
] | 1,664
| 1,665
| 1,665
|
COLLABORATOR
| null |
# What does this PR do?
This PR entirely rewrites the circle CI setup for the tests run at each commit, to use a configuration generated on the fly depending on which tests should be run. It also offers a Python wrapper around the circleCI API via the new util `create_circleci_config.py`.
This way, when a commit does no modification in the source code/tests/examples, only the quality jobs and the test fetcher are run (see the job below this PR). When a modification only touches the examples, the quality jobs and the example tests are run (see this [job](https://app.circleci.com/pipelines/github/huggingface/transformers/48462)) and when a modification touches some code, all tests jobs and example tests are run (tests jobs running on impacted tests only) as seen in this [report](https://app.circleci.com/pipelines/github/huggingface/transformers/48461).
The generated config is stored as an artifact in the `fetch_tests` job (in txt format because yml would not be rendered).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19325/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19325",
"html_url": "https://github.com/huggingface/transformers/pull/19325",
"diff_url": "https://github.com/huggingface/transformers/pull/19325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19325.patch",
"merged_at": 1665520284000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19324
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19324/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19324/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19324/events
|
https://github.com/huggingface/transformers/issues/19324
| 1,396,656,396
|
I_kwDOCUB6oc5TP0kM
| 19,324
|
Better documentation for pipelines
|
{
"login": "rhelmeczi",
"id": 48860682,
"node_id": "MDQ6VXNlcjQ4ODYwNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/48860682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rhelmeczi",
"html_url": "https://github.com/rhelmeczi",
"followers_url": "https://api.github.com/users/rhelmeczi/followers",
"following_url": "https://api.github.com/users/rhelmeczi/following{/other_user}",
"gists_url": "https://api.github.com/users/rhelmeczi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rhelmeczi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhelmeczi/subscriptions",
"organizations_url": "https://api.github.com/users/rhelmeczi/orgs",
"repos_url": "https://api.github.com/users/rhelmeczi/repos",
"events_url": "https://api.github.com/users/rhelmeczi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rhelmeczi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"could I get assigned to this?",
"ping @Narsil @stevhliu @sgugger ",
"@Narsil @stevhliu @sgugger could I get assigned to this?",
"@rhelmeczi @DIvkov575 \r\n\r\nThanks for this proposal, this would be with delight to give the documentation some love here !\r\n\r\n`pipeline` is a bit of a magical object with MANY parameters, making them understandable/accessible is definitely a challenge (worth it though !).\r\nFocusing on *some* parameters might be the most important (I don't know I'm just giving ideas).\r\n\r\nAdding example in each pipeline docstring: https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ImageClassificationPipeline could also be a good thing.\r\nWe could maybe also probably make it mandatory somehow (so that when new pipelines come how we're sure we're showing how to use them.\r\n\r\nWe also have Tasks : https://huggingface.co/tasks that could be used/reused somehow. @merveenoyan \r\n\r\n\r\n\r\n",
"Thanks for the feedback and feel free to take this on if you're interested! \r\n\r\nIf you have any questions about writing the docs, take a look at this guide [here](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation) :)",
"@Narsil Your suggestions are very helpful.\r\n\r\nAdding separate documentation for each pipeline makes sense. For example, in the `TextClassificationPipeline` the keyword arguments are both keyword arguments for the tokenizer's call function, and keyword arguments for the `postprocess` function. I think even a brief statement along the lines of (but not necessarily identical to):\r\n\r\n> keyword arguments passed to `TextClassificationPipeline.tokenizer.__call__` and `TextClassificationPipeline.postprocess`\r\n\r\nwhere the function names are clickable would be extremely helpful. Simply pointing to the recipient functions also makes this a beginner friendly task. I'm assuming of course that for each pipeline, the keyword arguments are only ever passed along to other functions. \r\n\r\nGetting caught up on the documentation should probably be done over several commits: adding one commit at a time for each of the specific pipelines will be much easier to review, that's just my two cents though.\r\n\r\n@DIvkov575 Keeping in mind that I'm not a maintainer of this repository, and therefore keeping in mind that my above suggestions are not necessarily ones that will be accepted, you can feel free to add documentation if you feel up to it.",
"> Getting caught up on the documentation should probably be done over several commits: adding one commit at a time for each of the specific pipelines will be much easier to review, that's just my two cents though.\r\n\r\nI'll go even further, 1 PR per change is the easiest course of action ! Makes it easier to review, and if for some reason one change is more debated is doesn't prevent the other from going in.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
### Feature request
The [introduction to pipelines](https://huggingface.co/docs/transformers/main_classes/pipelines) documentation does not provide any details on how additional parameters can be passed to the tokenizer during the preprocessing step. After walking through all of the source code, I can see that when instantiating a pipeline via `transformers.pipeline(...)` one can simply pass these arguments in as keyword arguments, but this is not documented anywhere. It is also not included in any examples.
This request is to have the documentation updated so future users don't need to read the source code. The update should cover more than tokenization (pipelines also handle postprocessing, etc.).
### Motivation
It's very often the case that a tokenizer is not called with the default arguments: padding, max length, etc... are often changed. The implementation for pipelines actually makes setting these arguments very simple, but it is not communicated so it is difficult to take advantage of.
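Under the hood, each pipeline splits call-time keyword arguments into preprocess/forward/postprocess buckets via its `_sanitize_parameters` method. A simplified, hypothetical sketch of that routing (the key names and split are illustrative, not the real transformers implementation):

```python
def sanitize_parameters(**kwargs):
    """Split call-time kwargs into (preprocess, forward, postprocess) dicts,
    mimicking the contract of a pipeline's _sanitize_parameters method."""
    tokenizer_keys = {"padding", "truncation", "max_length"}   # forwarded to the tokenizer
    postprocess_keys = {"top_k", "function_to_apply"}          # forwarded to postprocessing
    preprocess = {k: v for k, v in kwargs.items() if k in tokenizer_keys}
    postprocess = {k: v for k, v in kwargs.items() if k in postprocess_keys}
    return preprocess, {}, postprocess

# The same kwargs a user would pass to pipe(text, ...) get routed automatically:
pre, fwd, post = sanitize_parameters(padding=True, max_length=128, top_k=2)
print(pre)   # {'padding': True, 'max_length': 128}
print(post)  # {'top_k': 2}
```

Documenting which keys each concrete pipeline accepts, and where they end up, is exactly the gap this request is about.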
### Your contribution
I can contribute to the documentation if needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19324/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19324/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19323
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19323/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19323/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19323/events
|
https://github.com/huggingface/transformers/pull/19323
| 1,396,583,273
|
PR_kwDOCUB6oc5AJsTe
| 19,323
|
Add Switch transformers
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.",
"Thanks a lot @sgugger for your comments! \r\nWould love to have another round of review as I added some modification for `accelerate` and `bnb` compatibility 🙏 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19323). All of your documentation changes will be reflected on that endpoint.",
"Failing tests seems to be unrelated to this PR, merging!"
] | 1,664
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR attempts to add Switch Transformers from t5x with @ArthurZucker & @thomwolf
The architecture is similar to T5 (the modeling code is copied from T5), with the FF layer slightly modified, introducing the first Mixture of Experts (MoE) architecture in the `transformers` library.
paper: https://arxiv.org/abs/2101.03961
weights: https://github.com/google-research/t5x/blob/eb42c2524bf65c8a46624f1a9b9e034d9bc65b14/docs/models.md#converted-mesh-tensorflow-checkpoints
original modeling code: https://github.com/google/flaxformer/tree/b725bd2a51d70e866d819c92de166fbf24425e6a/flaxformer/architectures/moe
# TODOs:
- [x] Make the forward pass run
- [x] Convert the weights in Pytorch format and upload them on the Hub
- [x] Match the logits between the original implementation and ours
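For readers new to MoE: the core of a Switch layer is top-1 routing — each token is dispatched to exactly one expert, chosen by a small router. A framework-free sketch of that routing step (illustrative only; this is not the modeling code added in this PR):

```python
import math


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def switch_route(token_logits):
    """Top-1 ('switch') routing: each token goes to exactly one expert.

    token_logits: per-token list of router logits, one entry per expert.
    Returns (expert_index, gate) per token; the gate is the winning
    softmax probability, which scales the chosen expert's output so the
    router stays differentiable in the real model.
    """
    assignments = []
    for logits in token_logits:
        probs = softmax(logits)
        expert = max(range(len(probs)), key=probs.__getitem__)
        assignments.append((expert, probs[expert]))
    return assignments


# Three tokens routed across two experts.
routes = switch_route([[2.0, 0.1], [0.0, 1.0], [5.0, -1.0]])
# Each entry is (chosen_expert, gate); here the tokens go to experts 0, 1, 0.
```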
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19323/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19323/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19323",
"html_url": "https://github.com/huggingface/transformers/pull/19323",
"diff_url": "https://github.com/huggingface/transformers/pull/19323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19323.patch",
"merged_at": 1668514005000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19322
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19322/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19322/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19322/events
|
https://github.com/huggingface/transformers/issues/19322
| 1,396,577,006
|
I_kwDOCUB6oc5TPhLu
| 19,322
|
LongT5ForConditionalGeneration NAN losses with bf16
|
{
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Agree that it should work with `bf16` - when you mean it fails, how does it fail? Just bad training results ? Could you define this here? \r\n\r\nAlso cc @ArthurZucker here",
"Sorry I did not specify this, you're right! What I meant is that losses are nans, therefore the model does not learn (it happens the same as when using fp16 with a bf16-pretrained model).",
"Okey cc @ArthurZucker and @stancld here",
"Thanks! If there is anything I can do to help please let me know, I need this kind of urgently so I'm very much interested in it working properly :) ",
"Any update?",
"Hey! Sorry I will have a look tomorrow! 🤗",
"Okay! :) Thanks!",
"Could you give me a training script and the full error stack so that I can work on your issue? 🤗 Sorry for the delay ",
"Yeah, I'll post it here as soon as I can so that you can reproduce it :) thank you very much!!",
"First run:\r\n\r\n```bash\r\npip install transformers datasets rouge_score\r\n```\r\n\r\nThen, with this script you can replicate it. It is set with bf16=True.\r\n\r\n```python\r\n\r\n\r\n\r\nfrom transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainer, Seq2SeqTrainingArguments, AutoTokenizer, DataCollatorForSeq2Seq\r\nfrom datasets import load_dataset, load_metric\r\nimport nltk\r\nimport numpy as np\r\n\r\nnltk.download('punkt')\r\n\r\nmodel_str = \"google/long-t5-tglobal-base\"\r\n\r\nmetric = load_metric(\"rouge\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_str)\r\n\r\ndataset = load_dataset(\"IIC/sqac_tests\")\r\n\r\nmax_input_length = 128\r\nmax_target_length = 16\r\n\r\ndef preprocess_function(examples):\r\n inputs = [doc for doc in examples[\"context\"]]\r\n model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)\r\n\r\n # Setup the tokenizer for targets\r\n with tokenizer.as_target_tokenizer():\r\n labels = tokenizer(examples[\"title\"], max_length=max_target_length, truncation=True)\r\n\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n\r\ntokenized_dataset = dataset.map(preprocess_function, batched=True)\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_str)\r\n\r\nbatch_size = 1\r\nargs = Seq2SeqTrainingArguments(\r\n \"fail_test\",\r\n evaluation_strategy = \"epoch\",\r\n learning_rate=2e-5,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n weight_decay=0.01,\r\n save_total_limit=3,\r\n num_train_epochs=20,\r\n predict_with_generate=True,\r\n bf16=True, # change to fp16 in no Ampere GPU available.\r\n)\r\n\r\ndata_collator = DataCollatorForSeq2Seq(tokenizer, model=model)\r\n\r\ndef compute_metrics(eval_pred):\r\n predictions, labels = eval_pred\r\n decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n # Replace -100 in the labels as we can't decode them.\r\n labels = np.where(labels != -100, labels, 
tokenizer.pad_token_id)\r\n decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\r\n \r\n # Rouge expects a newline after each sentence\r\n decoded_preds = [\"\\n\".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]\r\n decoded_labels = [\"\\n\".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]\r\n \r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n # Extract a few results\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n \r\n # Add mean generated length\r\n prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]\r\n result[\"gen_len\"] = np.mean(prediction_lens)\r\n \r\n return {k: round(v, 4) for k, v in result.items()}\r\n\r\ntrainer = Seq2SeqTrainer(\r\n model,\r\n args,\r\n train_dataset=tokenized_dataset[\"train\"],\r\n eval_dataset=tokenized_dataset[\"validation\"],\r\n data_collator=data_collator,\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics\r\n)\r\n\r\ntrainer.train()\r\n```\r\n\r\nWith this you can observe that both training and validation losses are Nans... Hope it helps to figure out what happens, if I can provide any more help please let me know :) @ArthurZucker ",
"Did you have the time to try the script? @ArthurZucker @stancld ",
"Hi, any updates? @ArthurZucker @stancld @patrickvonplaten I did not receive any answer after sending the reproduction script....",
"Hey sorry not yet ! ",
"Okay thanks! Let me know if I can help in some way... :) @ArthurZucker ",
"Hey! Just tested your script and both losses are not Nan. Since you seem to be using a dev version, where you on main? \r\nIt seems that with most recent versions it works perfectly well. \r\n```python \r\n{'eval_loss': 2.1923375129699707, 'eval_rouge1': 26.7045, 'eval_rouge2': 12.0, 'eval_rougeL': 25.4545, 'eval_rougeLsum': 25.8144, 'eval_gen_len': 19.0, 'eval_runtime': 5.0999, 'eval_samples_per_second': 1.961, 'eval_steps_per_second': 1.961, 'epoch': 20.0} \r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [02:37<00:00, 3.10it/s]\r\n \r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'train_runtime': 157.0429, 'train_samples_per_second': 1.274, 'train_steps_per_second': 1.274, 'train_loss': 3.485249328613281, 'epoch': 20.0} \r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [02:37<00:00, 1.27it/s]\r\n```\r\n\r\n I used both `transformers==4.22` and `4.23`. ",
"Thank you so much for trying it out! Let me try it again with latest version (4.25.0.dev) to check if it is working now also in my machine. I was trying to 4.23.0.dev, so maybe if it doesn't work also in 4.25.0.dev I'll have to turn to 4.22 or 4.23 :) ",
"Great, with the last version it does work!! Thank you very much for helping me ! @ArthurZucker "
] | 1,664
| 1,669
| 1,669
|
NONE
| null |
### System Info
transformers version: 4.23.0.dev0
torch version: 1.12.1
OS: Ubuntu 20
Cuda: 11.6
The problem is that LongT5 is supposed to work with `bf16=True`, but it doesn't. It is known that fp16 fails for this model, and I have tried it and confirmed that it does. However, LongT5 was pretrained in bf16, so it would be expected that setting `bf16=True` works.
My training arguments look like this:
```python
{
"evaluation_strategy": "epoch",
"num_train_epochs": 4,
"do_train": True,
"do_eval": False,
"eval_steps": 2,
"logging_strategy":"epoch",
"save_strategy": "epoch",
"save_total_limit": 4,
"seed": 69,
"bf16": True,
"dataloader_num_workers": 32,
"adam_epsilon": 1e-8,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"group_by_length": False,
"gradient_checkpointing": False,
"lr_scheduler_type": "linear",
"learning_rate": 1e-4,
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
"gradient_accumulation_steps": 64,
"warmup_ratio": 0.08
}
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If you need a script for reproduction please let me know.
### Expected behavior
LongT5 (as I understand from the forum, etc.) should train without NaN losses when `bf16=True`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19322/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19321
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19321/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19321/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19321/events
|
https://github.com/huggingface/transformers/pull/19321
| 1,396,556,925
|
PR_kwDOCUB6oc5AJmtC
| 19,321
|
Call _set_save_spec() when creating TF models
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
MEMBER
| null |
Much like confused ducklings, subclassed Keras models tend to imprint on the first concrete input shapes they see unless we explicitly `build()` them with more general shapes. However, the default `tf.keras.Model.build()` makes several restrictive assumptions and doesn't work for us.
The solution is to directly call `model._set_save_spec()` with the shapes we want. We do this in the `__init__` to make sure that it happens before the model is built or called with any inputs. We can also now remove the override on `model.save()`, which is no longer necessary now that we're fixing this properly.
Fixes #19231
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19321/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19321/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19321",
"html_url": "https://github.com/huggingface/transformers/pull/19321",
"diff_url": "https://github.com/huggingface/transformers/pull/19321.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19321.patch",
"merged_at": 1664989430000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19320
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19320/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19320/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19320/events
|
https://github.com/huggingface/transformers/issues/19320
| 1,396,445,017
|
I_kwDOCUB6oc5TPA9Z
| 19,320
|
ONNX conversion of deberta_v2 models
|
{
"login": "kobiche",
"id": 56874660,
"node_id": "MDQ6VXNlcjU2ODc0NjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/56874660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kobiche",
"html_url": "https://github.com/kobiche",
"followers_url": "https://api.github.com/users/kobiche/followers",
"following_url": "https://api.github.com/users/kobiche/following{/other_user}",
"gists_url": "https://api.github.com/users/kobiche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kobiche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kobiche/subscriptions",
"organizations_url": "https://api.github.com/users/kobiche/orgs",
"repos_url": "https://api.github.com/users/kobiche/repos",
"events_url": "https://api.github.com/users/kobiche/events{/privacy}",
"received_events_url": "https://api.github.com/users/kobiche/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I believe I solved it (rather 'I forced it to work')\r\n(line: 159)\r\n```python\r\n t = self.type().dtype() if hasattr(self.type(), 'dtype') else self.type().scalarType()\r\n TYPE_CAST = {\r\n 'float': torch.float64,\r\n 'int': torch.int64\r\n }\r\n t = TYPE_CAST[t.lower()]\r\n output = masked_fill(\r\n g, self, r_mask, g.op(\"Constant\", value_t=torch.tensor(torch.finfo(t).min))\r\n )\r\n```\r\n\r\nThis is not tested with all models, but at least works for now. Hopefully you'll find a nicer solution.",
"cc @michaelbenayoun @lewtun ",
"[UPDATE]\r\n\r\nRemember when I said that there were warnings while tracing the model? Well, I know now they are important.\r\nI cannot use the traced model with an input different from the one I used to trace the model.\r\nThis is the error I get:\r\n```python\r\nonnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_389' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:35 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) size != 0 && (input_shape.Size() % size) == 0 was false. The input tensor cannot be reshaped to the requested shape. Input shape:{12,15,15}, requested shape:{-1,12,136,136}\r\n```\r\n\r\nPlease can you fix this?",
"Hey @kobiche thanks for reporting this error, we also noticed this in #19255 when validating the `deberta` models with different batch size / seq len compared to the one used to trace the model.\r\n\r\nWe'll aim to patch this ASAP",
"https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L577\r\nAnd ONNXRuntime dosen't support torch.where op now.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,686
| 1,670
|
NONE
| null |
### System Info
transformers: 4.22.2
platform: Ubuntu 20.04.2
python: 3.8.10
### Who can help?
DeBERTa-v2: @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following command fails:
```bash
python -m transformers.onnx --model=microsoft/deberta-v3-small onnx/
```
The model can be microsoft/{mdeberta-v3-base, deberta-v3-small, deberta-v3-base, etc.}.
The main problem lies in the `symbolic(...)` method of the `XSoftmax` class, which is called while tracing the model. There are also several other warnings while converting the model (are they important?).
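For context on what `XSoftmax` does — masked attention positions are filled with the dtype's minimum value before the softmax, which is exactly where the dtype handling breaks during export — here is a minimal pure-Python illustration of that idea (not the actual implementation; `dtype_min` below approximates `torch.finfo(torch.float32).min`):

```python
import math


def masked_softmax(scores, mask, dtype_min=-3.4e38):
    # Positions where mask is False receive the dtype's minimum value,
    # so they get ~zero probability after the softmax.
    filled = [s if m else dtype_min for s, m in zip(scores, mask)]
    mx = max(filled)
    exps = [math.exp(x - mx) for x in filled]  # huge negatives underflow to 0.0
    total = sum(exps)
    return [e / total for e in exps]


probs = masked_softmax([1.0, 2.0, 3.0], [True, False, True])
# The masked middle position gets (effectively) zero probability.
```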
### Expected behavior
Model saved as ONNX format.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19320/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19319
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19319/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19319/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19319/events
|
https://github.com/huggingface/transformers/issues/19319
| 1,396,394,206
|
I_kwDOCUB6oc5TO0je
| 19,319
|
convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py does not work on fairseq wav2vec2-xls-r weights
|
{
"login": "heatz123",
"id": 33706329,
"node_id": "MDQ6VXNlcjMzNzA2MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/33706329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heatz123",
"html_url": "https://github.com/heatz123",
"followers_url": "https://api.github.com/users/heatz123/followers",
"following_url": "https://api.github.com/users/heatz123/following{/other_user}",
"gists_url": "https://api.github.com/users/heatz123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heatz123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heatz123/subscriptions",
"organizations_url": "https://api.github.com/users/heatz123/orgs",
"repos_url": "https://api.github.com/users/heatz123/repos",
"events_url": "https://api.github.com/users/heatz123/events{/privacy}",
"received_events_url": "https://api.github.com/users/heatz123/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"And after some inspections on fairseq library, I found that this change can make the conversion work also on wav2vec2-xls-r weights:\r\nhttps://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py#L249\r\n\r\nto\r\n```python\r\n task_arg = argparse.Namespace(task='audio_pretraining')\r\n task = fairseq.tasks.setup_task(task_arg)\r\n model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([checkpoint_path], task=task)\r\n```\r\n\r\nCan I make a PR on this change?",
"Yes please! Would you like to open a PR for this change? You can also update the conversion script for Wav2Vec2 Conformer - it's exactly the same logic in `convert_wav2vec2_conformer_checkpoint` 🤗",
"I see, thank you. Will open a PR within a day."
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In bash:
```bash
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
wget https://huggingface.co/facebook/wav2vec2-xls-r-300m/raw/main/config.json
python3 convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path output --checkpoint_path xlsr2_300m.pt --config_path config.json --not_finetuned
```
Then this error arises:
```
Traceback (most recent call last):
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 273, in <module>
convert_wav2vec2_checkpoint(
File "/home/heatz123/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 254, in convert_wav2vec2_checkpoint
model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([checkpoint_path])
File "/home/heatz123/env/lib/python3.8/site-packages/fairseq/checkpoint_utils.py", line 436, in load_model_ensemble_and_task
task = tasks.setup_task(cfg.task)
File "/home/heatz123/env/lib/python3.8/site-packages/fairseq/tasks/__init__.py", line 39, in setup_task
cfg = merge_with_parent(dc(), cfg)
File "/home/heatz123/env/lib/python3.8/site-packages/fairseq/dataclass/utils.py", line 500, in merge_with_parent
merged_cfg = OmegaConf.merge(dc, cfg)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/omegaconf.py", line 321, in merge
target.merge_with(*others[1:])
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 331, in merge_with
self._format_and_raise(key=None, value=None, cause=e)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 329, in merge_with
self._merge_with(*others)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 347, in _merge_with
BaseContainer._map_merge(self, other)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 314, in _map_merge
dest[key] = src._get_node(key)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 258, in __setitem__
self._format_and_raise(
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/home/heatz123/env/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ConfigKeyError: Key 'multiple_train_files' not in 'AudioPretrainingConfig'
full_key: multiple_train_files
reference_type=Optional[AudioPretrainingConfig]
object_type=AudioPretrainingConfig
```
note that I am using fairseq 0.12.2 (which was installed by default using `pip install fairseq`)
### Expected behavior
Converting fairseq weights to PyTorch (`transformers`) format should work without any errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19319/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19318
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19318/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19318/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19318/events
|
https://github.com/huggingface/transformers/issues/19318
| 1,396,295,580
|
I_kwDOCUB6oc5TOcec
| 19,318
|
where can I find document about BertPreTrainedModel?
|
{
"login": "Lim-Sung-Jun",
"id": 61612775,
"node_id": "MDQ6VXNlcjYxNjEyNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/61612775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lim-Sung-Jun",
"html_url": "https://github.com/Lim-Sung-Jun",
"followers_url": "https://api.github.com/users/Lim-Sung-Jun/followers",
"following_url": "https://api.github.com/users/Lim-Sung-Jun/following{/other_user}",
"gists_url": "https://api.github.com/users/Lim-Sung-Jun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lim-Sung-Jun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lim-Sung-Jun/subscriptions",
"organizations_url": "https://api.github.com/users/Lim-Sung-Jun/orgs",
"repos_url": "https://api.github.com/users/Lim-Sung-Jun/repos",
"events_url": "https://api.github.com/users/Lim-Sung-Jun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lim-Sung-Jun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"What are you trying to do with `BertPreTrainedModel`? This is an abstract class that isn't really public facing.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
Hello,
I am trying to find documentation for BertPreTrainedModel, which is used in ColBERTv1.
I only found the source code at the site below (4.11.3, bert_modeling):
https://huggingface.co/transformers/v3.5.1/_modules/transformers/modeling_bert.html
but I didn't find any documentation for it on the official site for the latest version:
https://huggingface.co/
So, my question is: **where can I find documentation about BertPreTrainedModel? Has it been removed?**
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19318/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19317
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19317/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19317/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19317/events
|
https://github.com/huggingface/transformers/pull/19317
| 1,396,269,857
|
PR_kwDOCUB6oc5AIpfR
| 19,317
|
HF <-> megatron checkpoint reshaping and conversion for GPT
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I can't use this from transformers to megatron for accelerate-megatron-plugin to continue finetune the checkpoint. is there any thing i missing? then load the converted checkpoint with resume_from_checkpoint args, it seems that the gpus are reinit and only one gpu is 100% utility. ",
"Hello, thank you for the converting tools.\r\nBy the way, is there any plan that the script of T5 HF model <-> Megatron model will be supported and when?"
] | 1,664
| 1,677
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
With respect to the GPT model,
1. Users can now convert a Megatron-LM GPT model with arbitrary tensor-parallel and pipeline-parallel sizes to a universal transformers checkpoint for the `gpt2` model. This checkpoint is also sharded, by default (10GB per shard) or at a size the user provides. Sample command to convert from a Megatron-LM checkpoint to a Transformers checkpoint (the command below is run for a checkpoint with tp-size 2 and pp-size 1):
```
python checkpoint_reshaping_and_interoperability.py \
--convert_checkpoint_from_megatron_to_transformers \
--load_path "megatron_lm_gpt/iter_0005000" \
--save_path "hf_checkpoint" \
--max_shard_size "200MB" \
--tokenizer_name "/home/sourab/code-parrot-minimal" \
--print-checkpoint-structure
```
Output logs: https://gist.github.com/pacman100/eedce29f084f3efdac76456bd407f978#file-megatron_to_trfs-log
2. Reverse conversion from a Transformers checkpoint to a Megatron checkpoint with variable TP and PP sizes is supported. A sample command is given below (converting to a checkpoint with `target_tensor_model_parallel_size`=2 and `target_pipeline_model_parallel_size`=1; `target_data_parallel_size` is used when `--use_distributed_optimizer` is passed):
```
python checkpoint_reshaping_and_interoperability.py \
--load_path "hf_checkpoint" \
--save_path "megatron_lm_checkpoint" \
--target_tensor_model_parallel_size 2 \
--target_pipeline_model_parallel_size 1 \
--target_data_parallel_size 2 \
--target_params_dtype "bf16" \
--make_vocab_size_divisible_by 128 \
--print-checkpoint-structure
```
Output logs: https://gist.github.com/pacman100/eedce29f084f3efdac76456bd407f978#file-trfs_to_megatron-log
A quick test to make sure everything is working properly:
Code as well as output logs: https://gist.github.com/pacman100/eedce29f084f3efdac76456bd407f978#file-gistfile1-txt
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19317/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19317",
"html_url": "https://github.com/huggingface/transformers/pull/19317",
"diff_url": "https://github.com/huggingface/transformers/pull/19317.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19317.patch",
"merged_at": 1665150415000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19316
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19316/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19316/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19316/events
|
https://github.com/huggingface/transformers/pull/19316
| 1,396,260,708
|
PR_kwDOCUB6oc5AInmq
| 19,316
|
Fix for sequence regression fit() in TF
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
MEMBER
| null |
Fixes #19308
Keras really doesn't like 1-dimensional label tensors. We've caught most of the cases where this causes problems with the dummy loss, but sequence regression slipped through and is now fixed!
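For context, here is an illustration of the underlying shape issue (this is not the PR's code, just a sketch): Keras losses generally expect labels with a trailing feature axis, so 1-D regression targets are safest expanded to shape `(batch_size, 1)`:

```python
import numpy as np

# 1-D regression targets, as a user would naturally pass them to fit()
labels = np.array([0.5, 1.2, 3.3])

# Keras losses generally want a trailing feature axis; expanding to shape
# (batch_size, 1) avoids the broadcasting surprises the dummy loss guards against.
labels_2d = labels[:, None]
print(labels_2d.shape)  # (3, 1)
```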
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19316/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19316",
"html_url": "https://github.com/huggingface/transformers/pull/19316",
"diff_url": "https://github.com/huggingface/transformers/pull/19316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19316.patch",
"merged_at": 1664891308000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19315
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19315/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19315/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19315/events
|
https://github.com/huggingface/transformers/pull/19315
| 1,396,222,869
|
PR_kwDOCUB6oc5AIfon
| 19,315
|
Added Type hints for LED TF
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
Based on Issue #16059
I have added type hints for the TensorFlow LED model.
@Rocketknight1 Could you kindly check if this is fine?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19315/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19315",
"html_url": "https://github.com/huggingface/transformers/pull/19315",
"diff_url": "https://github.com/huggingface/transformers/pull/19315.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19315.patch",
"merged_at": 1664891715000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19314
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19314/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19314/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19314/events
|
https://github.com/huggingface/transformers/pull/19314
| 1,396,216,124
|
PR_kwDOCUB6oc5AIePv
| 19,314
|
Added Type hints for LED TF
|
{
"login": "IMvision12",
"id": 88665786,
"node_id": "MDQ6VXNlcjg4NjY1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IMvision12",
"html_url": "https://github.com/IMvision12",
"followers_url": "https://api.github.com/users/IMvision12/followers",
"following_url": "https://api.github.com/users/IMvision12/following{/other_user}",
"gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions",
"organizations_url": "https://api.github.com/users/IMvision12/orgs",
"repos_url": "https://api.github.com/users/IMvision12/repos",
"events_url": "https://api.github.com/users/IMvision12/events{/privacy}",
"received_events_url": "https://api.github.com/users/IMvision12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
Based on Issue #16059
I have added type hints for the TensorFlow LED model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19314/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19314",
"html_url": "https://github.com/huggingface/transformers/pull/19314",
"diff_url": "https://github.com/huggingface/transformers/pull/19314.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19314.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19313
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19313/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19313/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19313/events
|
https://github.com/huggingface/transformers/pull/19313
| 1,396,216,077
|
PR_kwDOCUB6oc5AIePK
| 19,313
|
[Docs] Fix link
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes link to `accelerate`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19313/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19313",
"html_url": "https://github.com/huggingface/transformers/pull/19313",
"diff_url": "https://github.com/huggingface/transformers/pull/19313.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19313.patch",
"merged_at": 1664888453000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19312
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19312/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19312/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19312/events
|
https://github.com/huggingface/transformers/pull/19312
| 1,396,195,356
|
PR_kwDOCUB6oc5AIZ-G
| 19,312
|
[WIP] Making `Camembert` independent from Roberta
|
{
"login": "Mustapha-AJEGHRIR",
"id": 66799406,
"node_id": "MDQ6VXNlcjY2Nzk5NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/66799406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mustapha-AJEGHRIR",
"html_url": "https://github.com/Mustapha-AJEGHRIR",
"followers_url": "https://api.github.com/users/Mustapha-AJEGHRIR/followers",
"following_url": "https://api.github.com/users/Mustapha-AJEGHRIR/following{/other_user}",
"gists_url": "https://api.github.com/users/Mustapha-AJEGHRIR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mustapha-AJEGHRIR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mustapha-AJEGHRIR/subscriptions",
"organizations_url": "https://api.github.com/users/Mustapha-AJEGHRIR/orgs",
"repos_url": "https://api.github.com/users/Mustapha-AJEGHRIR/repos",
"events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"In the `CamembertPreTrainedModel` I left `base_model_prefix = \"roberta\"` to be equal to \"roberta\". If I change it into \"camembert\" the test fails. This is normal ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for your answers @sgugger. I took some time to understand the concepts of `flake8` and `repo-consistency-",
"I have done one thing without being very sure about it (that was the only way to make `make fixup` happy). I added the `CamembertPreTrainedModel` class into the `__init__` and `dummy_pt_objects` (as in the PR). I don't know if this is the correct way to do it ?",
"I don't understand why some torch tests failed just by replacing `self.camembert` with `self.roberta`. ",
"Hi @sgugger, I have done git rebase from the huggingface:main, this brought all the other commit over my branch, this might be very difficult to read. If you want I can close this pull request and open a clean one ?",
"Arg, yes we would need a clean PR: even if I can see the changes are not related, it will mess up authorship of this commit when we merge your PR (it will show all the other random commits of your rebase) and if somehow this PR introduces a bug, when we come look at it later, it will be hard to see what changed the error.\r\n\r\nTip: for GitHub you need to force-push after a rebase when your PR is already open (with `git push -u origin branch --force`)"
] | 1,664
| 1,665
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
related to #19303
Making the Camembert model (PyTorch version) independent from Roberta.
I replaced all the Camembert classes with copies from Roberta and made the small changes necessary for everything to work.
I'm still wondering how to change blocks like the one below; do I have to add a specific checkpoint for Camembert?
```python
@add_code_sample_docstrings(
processor_class=_TOKENIZER_FOR_DOC,
checkpoint="deepset/roberta-base-squad2",
output_type=QuestionAnsweringModelOutput,
config_class=_CONFIG_FOR_DOC,
expected_output="' puppet'",
expected_loss=0.86,
)
```
For testing, the following test works well :
```bash
$ RUN_SLOW=1 pytest tests/models/camembert/test_modeling_camembert.py
```
However, I have noticed that the tests only cover `CamembertModel`, not other classes like `CamembertForCausalLM`...
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger related to #19303
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19312/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19312",
"html_url": "https://github.com/huggingface/transformers/pull/19312",
"diff_url": "https://github.com/huggingface/transformers/pull/19312.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19312.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19311
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19311/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19311/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19311/events
|
https://github.com/huggingface/transformers/issues/19311
| 1,395,923,064
|
I_kwDOCUB6oc5TNBh4
| 19,311
|
NER Training on custom Data
|
{
"login": "akshaydhok07",
"id": 65851551,
"node_id": "MDQ6VXNlcjY1ODUxNTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/65851551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshaydhok07",
"html_url": "https://github.com/akshaydhok07",
"followers_url": "https://api.github.com/users/akshaydhok07/followers",
"following_url": "https://api.github.com/users/akshaydhok07/following{/other_user}",
"gists_url": "https://api.github.com/users/akshaydhok07/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshaydhok07/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshaydhok07/subscriptions",
"organizations_url": "https://api.github.com/users/akshaydhok07/orgs",
"repos_url": "https://api.github.com/users/akshaydhok07/repos",
"events_url": "https://api.github.com/users/akshaydhok07/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshaydhok07/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have the same question! Can anyone tell me how to design my custom data file format? Thank u!!",
"Please use the [forums](https://discuss.huggingface.co/) to discuss questions like this as we keep the issues for bugs and feature requests only.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"@sgugger @WeiShi-9 @akshaydhok07 @vanpelt @pvl \r\nI have the same problem, how to set custom data for trainning with the [pipeline]([run_ner.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py))\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"@sgugger \r\nThe similar question has been created in the [forum](https://discuss.huggingface.co/t/custom-files-for-run-ner-py/9156), but no one handle it.\r\nAlso, the similar issue actually hasn't been solved in https://github.com/huggingface/transformers/issues/8698 .\r\nIf you can provide a tiny example for csv or json format in README, that should be very helpful. 😃 ",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"你好,我已经收到您的邮件~"
] | 1,664
| 1,693
| 1,693
|
NONE
| null |
https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/examples/pytorch/token-classification/run_ner.py#L200
[README.md](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification#readme) mentions providing text files for training and validation; however, run_ner.py expects CSV or JSON files only.
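For reference, a minimal sketch of what a JSON-lines record for this script might look like — the `tokens` and `ner_tags` column names are assumptions here, not confirmed defaults (they may need to match the script's `--text_column_name`/`--label_column_name` settings):

```python
import json

# Hypothetical JSONL record for token classification; the "tokens" and
# "ner_tags" keys are assumptions, not confirmed defaults of run_ner.py.
record = {
    "tokens": ["HuggingFace", "is", "based", "in", "NYC"],
    "ner_tags": ["B-ORG", "O", "O", "O", "B-LOC"],
}
line = json.dumps(record)  # one such line per example in the data file
print(line)
```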
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19311/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19310
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19310/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19310/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19310/events
|
https://github.com/huggingface/transformers/pull/19310
| 1,395,912,773
|
PR_kwDOCUB6oc5AHeXg
| 19,310
|
Add `BloomForQuestionAnswering`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I have just fixed your suggestions @sgugger 💪 Gently pinging you here, and let me know if you need me to open a PR for GPTJ too ;) !",
"Thanks a lot!!"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the class `BloomForQuestionAnswering`, inspired by [`GPTJForQuestionAnswering`](https://github.com/huggingface/transformers/blob/4c962d5e790d06c142af35aad165c74c0bcf861a/src/transformers/models/gptj/modeling_gptj.py#L1024), as the community asked for the release of this class (see the discussion here: https://huggingface.co/bigscience/bloom/discussions/46#633b35f21fd49ee0b64e29d2)
cc @sgugger @ydshieh @LysandreJik @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19310/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19310",
"html_url": "https://github.com/huggingface/transformers/pull/19310",
"diff_url": "https://github.com/huggingface/transformers/pull/19310.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19310.patch",
"merged_at": 1664898733000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19309
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19309/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19309/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19309/events
|
https://github.com/huggingface/transformers/pull/19309
| 1,395,875,574
|
PR_kwDOCUB6oc5AHWeT
| 19,309
|
Update README.md
|
{
"login": "ShubhamJagtap2000",
"id": 63872951,
"node_id": "MDQ6VXNlcjYzODcyOTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/63872951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShubhamJagtap2000",
"html_url": "https://github.com/ShubhamJagtap2000",
"followers_url": "https://api.github.com/users/ShubhamJagtap2000/followers",
"following_url": "https://api.github.com/users/ShubhamJagtap2000/following{/other_user}",
"gists_url": "https://api.github.com/users/ShubhamJagtap2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShubhamJagtap2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShubhamJagtap2000/subscriptions",
"organizations_url": "https://api.github.com/users/ShubhamJagtap2000/orgs",
"repos_url": "https://api.github.com/users/ShubhamJagtap2000/repos",
"events_url": "https://api.github.com/users/ShubhamJagtap2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShubhamJagtap2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> # Fixed link in the main README.md\r\n> \r\n> Fixes # (issue)\r\n> ## Before submitting\r\n> \r\n> * [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).\r\n> \r\n> * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),\r\n> Pull Request section?\r\n> \r\n> * [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link\r\n> to it if that's the case.\r\n> \r\n> * [ ] Did you make sure to update the documentation with your changes? Here are the\r\n> [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and\r\n> [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).\r\n> \r\n> * [ ] Did you write any new necessary tests?\r\n> \r\n> \r\n> ## Who can review?\r\n> \r\n> Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.\r\n> \r\n> @sgugger\r\n\r\n- Hello @sgugger , I was reading the documentation and following the installation steps. I encountered a bug with the link in the documentation I have added in the screenshot below. \r\n\r\n\r\n\r\n\r\n\r\n- The link https://huggingface.co/docs/transformers/examples redirects to a page which does not exist in the website. Below:\r\n\r\n\r\n\r\n\r\n\r\n- It also does not redirect to the page but throws 404 error. Below:\r\n\r\n\r\n\r\n\r\n- I have added a valid link (as per the docs) instead of existing link. Please checkout this issue and whether link added by me is valid or not. Thank you.\r\n\r\n- My added link: https://github.com/huggingface/transformers/tree/main/examples\r\n\r\n- If there is an issue or addition in this version(as it is saying in the webpage) please do look into that. \r\n\r\n",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# Fixed link in the main README.md
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19309/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19309",
"html_url": "https://github.com/huggingface/transformers/pull/19309",
"diff_url": "https://github.com/huggingface/transformers/pull/19309.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19309.patch",
"merged_at": 1664884010000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19308
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19308/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19308/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19308/events
|
https://github.com/huggingface/transformers/issues/19308
| 1,395,790,447
|
I_kwDOCUB6oc5TMhJv
| 19,308
|
TFSequenceClassifierOutput can't return loss batch_size when num_labels=1
|
{
"login": "goreng2",
"id": 45035457,
"node_id": "MDQ6VXNlcjQ1MDM1NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/45035457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goreng2",
"html_url": "https://github.com/goreng2",
"followers_url": "https://api.github.com/users/goreng2/followers",
"following_url": "https://api.github.com/users/goreng2/following{/other_user}",
"gists_url": "https://api.github.com/users/goreng2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goreng2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goreng2/subscriptions",
"organizations_url": "https://api.github.com/users/goreng2/orgs",
"repos_url": "https://api.github.com/users/goreng2/repos",
"events_url": "https://api.github.com/users/goreng2/events{/privacy}",
"received_events_url": "https://api.github.com/users/goreng2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Verified the issue, making a patch now!",
"@goreng2 I've submitted a fix at #19316. I'll let you know when this is merged to main.",
"@goreng2 The fix should now be merged, but you'll have to install from `main` to use it until the next official release. To do that, replace `pip install transformers` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`. After our next release you can change your code back to just `pip install transformers`.\r\n\r\nIf this doesn't resolve your problem, feel free to comment and re-open this issue!",
"@Rocketknight1 It works! Thanks :) you save me!"
] | 1,664
| 1,664
| 1,664
|
NONE
| null |
### System Info
[**Colab Env**](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)
- `transformers` version: 4.22.2
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1 @sgugger @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1lTxUPTa0XPRXJRV1NDYzwvzcMOY-fqJf?usp=sharing
### Expected behavior
`STS-B` task can train with `model.fit()`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19308/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19307
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19307/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19307/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19307/events
|
https://github.com/huggingface/transformers/pull/19307
| 1,395,413,284
|
PR_kwDOCUB6oc5AF1xD
| 19,307
|
Removing BertConfig inheritance from LayoutLMConfig
|
{
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Welcome to the repo, Arnaud :)\r\n\r\nPinging @sgugger for review"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Related to #19303
Remove LayoutLMConfig dependence on BertConfig.
The `__init__` from `LayoutLMConfig` diverges from `BertConfig` in the following way:
- `max_2d_position_embeddings` is an argument specific to LayoutLM
For that reason, I didn't add any `# Copied from ...`
Other change:
Previously, the arguments `position_embedding_type`, `use_cache`, and `classifier_dropout` were not part of the `__init__` function for LayoutLM and were always set to the BertConfig defaults (through the `super().__init__()` call). Because there is no inheritance anymore, I added those arguments back to `__init__`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19307/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19307",
"html_url": "https://github.com/huggingface/transformers/pull/19307",
"diff_url": "https://github.com/huggingface/transformers/pull/19307.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19307.patch",
"merged_at": 1664893447000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19306
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19306/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19306/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19306/events
|
https://github.com/huggingface/transformers/issues/19306
| 1,395,389,899
|
I_kwDOCUB6oc5TK_XL
| 19,306
|
T5 vocab size discrepancy between config and tokenizer
|
{
"login": "djaym7",
"id": 12378820,
"node_id": "MDQ6VXNlcjEyMzc4ODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12378820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djaym7",
"html_url": "https://github.com/djaym7",
"followers_url": "https://api.github.com/users/djaym7/followers",
"following_url": "https://api.github.com/users/djaym7/following{/other_user}",
"gists_url": "https://api.github.com/users/djaym7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djaym7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djaym7/subscriptions",
"organizations_url": "https://api.github.com/users/djaym7/orgs",
"repos_url": "https://api.github.com/users/djaym7/repos",
"events_url": "https://api.github.com/users/djaym7/events{/privacy}",
"received_events_url": "https://api.github.com/users/djaym7/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I found the same issue for deberta-v3 #19301. It should be fine as long as the model's vocab is larger than the tokenizer's. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,668
| 1,668
|
NONE
| null |
### System Info

transformers version == '4.18.0'
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy the code in image
### Expected behavior
same vocab size
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19306/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19305
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19305/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19305/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19305/events
|
https://github.com/huggingface/transformers/issues/19305
| 1,395,323,192
|
I_kwDOCUB6oc5TKvE4
| 19,305
|
Huge discrepancy between HuggingFace and Timm for ViT and other vision transformers
|
{
"login": "Phuoc-Hoan-Le",
"id": 39663189,
"node_id": "MDQ6VXNlcjM5NjYzMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/39663189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Phuoc-Hoan-Le",
"html_url": "https://github.com/Phuoc-Hoan-Le",
"followers_url": "https://api.github.com/users/Phuoc-Hoan-Le/followers",
"following_url": "https://api.github.com/users/Phuoc-Hoan-Le/following{/other_user}",
"gists_url": "https://api.github.com/users/Phuoc-Hoan-Le/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Phuoc-Hoan-Le/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Phuoc-Hoan-Le/subscriptions",
"organizations_url": "https://api.github.com/users/Phuoc-Hoan-Le/orgs",
"repos_url": "https://api.github.com/users/Phuoc-Hoan-Le/repos",
"events_url": "https://api.github.com/users/Phuoc-Hoan-Le/events{/privacy}",
"received_events_url": "https://api.github.com/users/Phuoc-Hoan-Le/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] |
closed
| false
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @NielsRogge @amyeroberts @alaradirik ",
"@CharlesLeeeee thank you for bringing this up! We are aware of the discrepancy and aim to rectify it soon. We will fix the parameter initialization issue shortly and open a separate PR to add stochastic depth.\r\n\r\ncc @LysandreJik @NielsRogge @amyeroberts ",
"I believe setting eps in layernorm to 1e-6 rather than 1e-12 is also important.",
"FWIW there is an issue related to this on the timm side as well https://github.com/rwightman/pytorch-image-models/issues/1477\r\n\r\nAs per my comments, the init issue should be minor / non consequential as it would not result in a significant difference given that std == .02. I've trained from scratch with much more significantly different inits and the end results aren't too far off. \r\n\r\nThe layer norm eps is likely an issue though, that was not mentioned on the timm side. For float16, 0 + 1e-12 = 0, not so for 1e-6 or 1e-5, which are defaults for all vision models I'm aware of that use LN. It looks like there are possibly other models that incorrectly use 1e-12 such as convnext? This could cause stability issues at reduced precision and will change the validation results for weights pretrained with 1e-5 or 1e-6. Generally 1e-12 should only be used as eps if you're sticking with float32 (or all uses of that eps are guaranteed to be upcast to float32).\r\n\r\n",
"Kaiming initialization should be used for the nn.Conv2d rather than .normal_() initialization in the class ViTPreTrainedModel or any class that directly inherits from PretrainedModel. And the biases of the nn.Conv2d in ViT should be initialized the same way as PyTorch. (https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv2d) @LysandreJik @NielsRogge @amyeroberts @alaradirik",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@CharlesLeeeee you are partially right, it seems that ViT uses PyTorch's default initialization scheme for `nn.conv2d`, at least in [timm](https://github.com/rwightman/pytorch-image-models/blob/7c4ed4d5a43f46084cc9b6f20a5edb8839bbeb14/timm/models/vision_transformer.py#L395). The JAX init however uses a LeCun normal as seen [here](https://github.com/rwightman/pytorch-image-models/blob/7c4ed4d5a43f46084cc9b6f20a5edb8839bbeb14/timm/models/vision_transformer.py#L415-L418). \r\n\r\nI'm working on this in #19449 "
] | 1,664
| 1,670
| 1,670
|
NONE
| null |
### Feature request
Differences between the HuggingFace and timm implementations of Vision Transformers are listed below:
- Missing stochastic depth (https://arxiv.org/abs/2012.12877)
- Using `m.weight.data.normal_(mean=0.0, std=0.02)` instead of `trunc_normal_()`
- Missing `trunc_normal_()` init for the position embedding and cls_token
My DeiT started training properly once I used the `trunc_normal_()` init and stochastic depth for my HuggingFace ViT model. I also removed the head-pruning functionality and no longer inherit the HuggingFace ViT model class from the `PreTrainedModel` class, but I'm not sure whether those two changes contributed to the fix.
### Motivation
These things could mean the difference between getting NaN or not during training for DeiT using the process from https://arxiv.org/abs/2012.12877
### Your contribution
Would love to share my code but I can't. I refer you to read the code (https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py)
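For readers of this issue: the key behavioral difference is that a truncated-normal init redraws samples outside the bounds instead of clipping them, so no probability mass piles up at the edges. The sketch below is a minimal NumPy illustration of that resample-don't-clip semantics only; it is not timm's actual `trunc_normal_` (which uses an inverse-CDF transform), and the bounds in the usage line are chosen tight on purpose so the truncation is visible.

```python
import numpy as np

def trunc_normal(shape, mean=0.0, std=0.02, a=-2.0, b=2.0, rng=None):
    """Sample a truncated normal by rejection sampling.

    Values outside [a, b] (absolute bounds, as in timm's defaults)
    are redrawn rather than clipped, so the tails carry no mass.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    out = rng.normal(mean, std, size=shape)
    bad = (out < a) | (out > b)
    while bad.any():
        # Redraw only the out-of-bounds entries and re-check them.
        out[bad] = rng.normal(mean, std, size=int(bad.sum()))
        bad = (out < a) | (out > b)
    return out

# Tight bounds relative to std so some samples actually get redrawn:
weights = trunc_normal((4, 4), std=1.0, a=-1.0, b=1.0)
```

With timm's defaults (`std=0.02`, bounds ±2) virtually nothing is redrawn, which is why the plain-normal init "mostly" works; the difference matters more for layers initialized with larger std.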
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19305/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19304
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19304/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19304/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19304/events
|
https://github.com/huggingface/transformers/pull/19304
| 1,395,276,368
|
PR_kwDOCUB6oc5AFZeA
| 19,304
|
Correct typos and fix a broken link in docs
|
{
"login": "paulaxisabel",
"id": 102936794,
"node_id": "U_kgDOBiKw2g",
"avatar_url": "https://avatars.githubusercontent.com/u/102936794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulaxisabel",
"html_url": "https://github.com/paulaxisabel",
"followers_url": "https://api.github.com/users/paulaxisabel/followers",
"following_url": "https://api.github.com/users/paulaxisabel/following{/other_user}",
"gists_url": "https://api.github.com/users/paulaxisabel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulaxisabel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulaxisabel/subscriptions",
"organizations_url": "https://api.github.com/users/paulaxisabel/orgs",
"repos_url": "https://api.github.com/users/paulaxisabel/repos",
"events_url": "https://api.github.com/users/paulaxisabel/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulaxisabel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Correct typos in [docs](https://github.com/huggingface/transformers/tree/main/docs) and update the trainer doc link from https://github.com/huggingface/transformers/blob/main/docs/source/main_classes/trainer.mdx to https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.mdx
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19304/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19304",
"html_url": "https://github.com/huggingface/transformers/pull/19304",
"diff_url": "https://github.com/huggingface/transformers/pull/19304.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19304.patch",
"merged_at": 1664991638000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19303
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19303/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19303/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19303/events
|
https://github.com/huggingface/transformers/issues/19303
| 1,395,019,875
|
I_kwDOCUB6oc5TJlBj
| 19,303
|
Make all models folder independent from each other
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 4608548278,
"node_id": "LA_kwDOCUB6oc8AAAABErDdtg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/HACKTOBERFEST-ACCEPTED",
"name": "HACKTOBERFEST-ACCEPTED",
"color": "FF5733",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hello! Happy to take LayoutLM Config and Tokenizer :) ",
"Hi, I would like to work on this. thanks.",
"Hi @OtherHorizon , as explained in the issue, please pick a model/config and/or tokenizer so that others know not to pick the same one :-)",
"Hi @sgugger, I like this philosophy, and I can work on DoRYing `LongformerConfig` and `Longformer Tokenizer` :) ",
"Hi @sgugger, I would like to work on RobertaConfig config and DistilBert tokenizer! ",
"Hi @sgugger, I would love to work on `Camembert` model.",
"Hello @sgugger I can work on the `Xlm-Roberta` model and config",
"Heya! I'd like to grab the `OpenAI-GPT` tokenizer :)",
"Hi! I'd Like to work on `Flaubert` config and model",
"Hi, I would like to work on mT5 and ProphetNet.",
"Hey! I can take a look at ClipTokenizer!",
"Hi, i would like to work on `Electra` and `Luke` Tokenizer",
"Hi, I would like to work on `BertGeneration`.\r\n",
"Hi! I'd Like to work on `ConvBERT` and `Lxmert` tokenizer.",
"Hi! I would like to work on `MobileBert` tokenizer.",
"Hello! I can also take up refactoring XLM-ProphetNet model and config. ",
"Hello, commenting to mark retribert",
"Hello! I would like to work on `Herbert` tokenizer.",
"Hello, I would like to work on Roformer tokenizer",
"I can work on LED ",
"Hi, I'd would gladly like to work on the Funnel tokenizer.",
"Hi all - I'd like to work on `Blenderbot` and `Squeezebert tokenizer` 😄",
"Hi, I'd like to work on `MarkupLM` config. ",
"Coming in to mark BertJapanese and Cpm now that I have some idea of how this goes",
"I would like to work on `Derberta tokenizer` maybe a typo . Just to confirm, task is to remove `GPT2Tokenizer` from https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/deberta/tokenization_deberta.py#L66",
"I can work on `mt5 model` if free! @divyanshugit Can I take the issue, if you haven't already started?",
"hi @sgugger i would love to work on MarkupLMConfig",
"@sgugger Would like to work on the Roberta config.Please assign me",
"hi @sgugger i can try working on the ~~XLM-ProphetNet model~~ \r\nedit: just saw there was a pr for this, I can take whatever is left (if any)",
"The fast tokenizers for ELECTRA and Longformer are still available FYI :-)"
] | 1,664
| 1,674
| 1,674
|
COLLABORATOR
| null |
Transformers has a Do Repeat Yourself policy in the sense that it does not provide building blocks that we then mix and match, but we strive to have each model be self-contained in terms of code, at the price of code duplication. You can find more about this philosophy in [this blog post](https://huggingface.co/blog/transformers-design-philosophy).
There are instances in the library (mostly with older models) where this is not respected. This issue will serve as a tracker for all those instances, so that the library is cleaner and each model/tokenizer/config is easier to tweak by itself. This will also make it easier for us to test individual models in autonomy.
If you wish to make a contribution to Transformers, you can help! Pick a config/model/tokenizer in the list below (double-check someone is not working on it already by searching this page!) and indicate with a comment that you wish to work on it. Read our [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) as well as the section below, and once you are ready, open a PR and tag @sgugger on it.
## How to remove a dependency from another model
There are two different types of dependencies: either a configuration/model/tokenizer uses an intermediate object from another model (example: some tokenizer uses the `BasicTokenizer` defined in the `tokenization_bert` module), or it subclasses another configuration/model/tokenizer.
In the first case, the object code should just be copied inside the file, with a "Copied from" statement. This will make sure that code is always kept up to date even if the basic object is modified. For instance, if a tokenizer is using `BasicTokenizer`, go copy the code in `tokenization_bert` for that class, then paste it in the tokenizer module you are treating and add the following copied from comment:
```py
# Copied from transformers.models.bert.tokenization_bert.BasicTokenizer
class BasicTokenizer(object):
...
```
In the second case, the code of the class (and all its building blocks) should be copied and renamed to be prefixed by the model: for instance if you are copying code from the modeling_bert module to build Roberta, you replace all `BertLayer`, `BertOutput` etc... by `RobertaLayer`, `RobertaOutput`...
You should then add a copied from statement (when the copy is without any modification) like this one:
```py
# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta
class RobertaAttention(nn.Module):
...
```
Note the replacement pattern that will adapt all names used. Note that:
- you can add more of those patterns, separated by a comma like [here](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py#L1388).
- you can ask to replace all possible casings like [here](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/mobilebert/modeling_mobilebert.py#L1549)
- you can just copy one method and not the whole class like [here](https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/roberta/modeling_roberta.py#L741)
**NB:** No need for copied from statements in the config (the defaults are probably different anyway).
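Putting the two code snippets above together, a dependency-free toy (the classes are illustrative stand-ins; real model classes subclass `nn.Module`, which is omitted here so the sketch runs without torch) of what a copy with a rename pattern looks like:

```python
# Stand-in for the "source" model file (e.g. modeling_bert.py).
class BertOutput:
    def forward(self, hidden_states):
        return hidden_states


# In the new model file, the class is duplicated and renamed, with a marker
# that lets `utils/check_copies.py`-style tooling keep the copy in sync.
# Copied from transformers.models.bert.modeling_bert.BertOutput with Bert->Roberta
class RobertaOutput:
    def forward(self, hidden_states):
        return hidden_states


# The copy behaves identically to the original -- only the names change.
assert RobertaOutput().forward("x") == BertOutput().forward("x")
```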
## Objects to cover
### Configurations
- [x] Flaubert config (should not use XLM)
- [x] LayoutLM config (should not use Bert)
- [x] LongformerConfig (should not use Roberta)
- [x] MarkupLMConfig (should not use Roberta)
- [x] RobertaConfig (should not use Bert)
- [x] XLM-ProphetNet config (should not use ProphetNet)
- [x] XLM-Roberta config (should not use Roberta)
### Models
- [x] BertGeneration (should not use BertEncoder)
- [x] Camembert (should not use Roberta) (PyTorch + TF)
- [x] Flaubert (should not use XLM) (PyTorch + TF)
- [ ] mT5: ~PyTorch~, TensorFlow, Flax (should not use T5)
- [x] XLM-ProphetNet (should not use ProphetNet)
- [ ] Xlm-Roberta: ~PyTorch~, TensorFlow, Flax (should not use Roberta)
### Tokenizers
- [x] BertJapanese (should not use any imports from tokenization bert)
- [x] Blenderbot (should not use Roberta) (slow/fast)
- [x] Clip (should not use BasicTokenizer from Bert)
- [x] ConvBERT (should not use Bert) (slow/fast)
- [x] Cpm tokenizer (should not use XLNet) (slow/fast)
- [x] Deberta tokenizer (should not use GPT2) (slow/fast)
- [x] DistilBert (should not use Bert) (slow/fast)
- [x] Electra (should not use Bert) (fast)
- [x] Flaubert (should not use XLM)
- [x] Funnel (should not use Bert) (slow/fast)
- [x] Herbert (should not use BasicTokenizer from Bert and XLM)
- [x] LayoutLM (should not use Bert) (slow/fast)
- [x] LED (should not use BART) (slow/fast)
- [x] Longformer (should not use Roberta) (fast tokenizer)
- [x] Luke (should not use Roberta)
- [x] Lxmert (should not use Bert) (slow/fast)
- [x] MobileBert (should not use Bert) (slow/fast)
- [x] Openai-GPT (should not use BasicTokenizer from Bert)
- [x] ProphetNet (should not use BasicTokenizer and WordPieceTokenizer from Bert)
- [x] Retribert tokenizer (should not use Bert) (slow/fast)
- [x] Roformer tokenizer (should not use any imports from tokenization bert)
- [x] Squeezebert tokenizer (should not use Bert) (slow/fast)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19303/reactions",
"total_count": 12,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19303/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19302
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19302/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19302/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19302/events
|
https://github.com/huggingface/transformers/pull/19302
| 1,394,974,492
|
PR_kwDOCUB6oc5AEX7k
| 19,302
|
Don't automatically add bug label
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
COLLABORATOR
| null |
# What does this PR do?
This PR removes the auto-added "bug" label on issues created with the Bug template. It's better if we add said label ourselves, since those issues range from questions to bugs in the user's code and do not always correspond to a bug in the library.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19302/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19302",
"html_url": "https://github.com/huggingface/transformers/pull/19302",
"diff_url": "https://github.com/huggingface/transformers/pull/19302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19302.patch",
"merged_at": 1664815325000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19301
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19301/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19301/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19301/events
|
https://github.com/huggingface/transformers/issues/19301
| 1,394,855,872
|
I_kwDOCUB6oc5TI8_A
| 19,301
|
deberta-v3 has 100 more vocabs than its tokenizer
|
{
"login": "wenmin-wu",
"id": 9409333,
"node_id": "MDQ6VXNlcjk0MDkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9409333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wenmin-wu",
"html_url": "https://github.com/wenmin-wu",
"followers_url": "https://api.github.com/users/wenmin-wu/followers",
"following_url": "https://api.github.com/users/wenmin-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/wenmin-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wenmin-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wenmin-wu/subscriptions",
"organizations_url": "https://api.github.com/users/wenmin-wu/orgs",
"repos_url": "https://api.github.com/users/wenmin-wu/repos",
"events_url": "https://api.github.com/users/wenmin-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/wenmin-wu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Maybe of interest to @ArthurZucker as well",
"> Maybe of interest to @ArthurZucker as well\r\n\r\nThanks Lysandre!",
"Hi @wenmin-wu, \r\n\r\nIt seems that the difference between the size of the tokenizer and the size of the `word_embeddings` matrix is voluntary on the part of the deberta-v3's authors as you can see in the issue https://github.com/microsoft/DeBERTa/issues/103.\r\n\r\nBy experience, it can happen that the size of `word_embeddings` is bigger than the total number of known tokens, for constraints on the size of the matrix (which should be a multiple of a certain number for example) or by precautions because the shape of the model was decided before having the final form of the tokenizer or to have \"available\" tokens which could be added later (with a little fine-tuning anyway).",
"Hi @SaulLu , thanks for your explanation. I'm also confused with the tokenizer:\r\n1. The tokenizer doesn't directly return the index of that token, e.g.\r\n```Python\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/deberta-v3-base\", use_fast=True)\r\ntokenizer(\"test\")\r\n# outputs: {'input_ids': [1, 1010, 2], 'token_type_ids': [0, 0, 0], 'attention_mask': [1, 1, 1]}\r\ntokenizer.vocab[\"test\"]\r\n# outputs: 9982\r\n# 9982 != 1010\r\n```\r\n2. How does the tokenizer handle the space before/after the special character? e.g.:\r\n```Python\r\ntokenizer(\"you (are) nice\", return_attention_mask=False, return_token_type_ids=False)\r\n# outputs: {'input_ids': [1, 274, 287, 6614, 285, 1085, 2]}\r\ntokenizer(\"you(are)nice\", return_attention_mask=False, return_token_type_ids=False)\r\n# outputs: {'input_ids': [1, 274, 555, 6614, 285, 22184, 2]}\r\n```\r\nseems the difference is ` (` -> `287` and `(` -> `555`, `) ` -> `1085` and `)` -> `22184`\r\nbut actually, their decoded value is the same (special character without space):\r\n```Python\r\ntokenizer.decode(287) == tokenizer.decode(555), tokenizer.decode(1085) == tokenizer.decode(22184)\r\n# Outputs: True, True\r\n```",
"Hi @wenmin-wu,\r\n\r\n> The tokenizer doesn't directly return the index of that token, [...]\r\n\r\nBy default, the `\"microsoft/deberta-v3-base\"` model adds a prefix space. You can see it this by running (knowing that the `▁` symbol corresponds to a space):\r\n```python\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer(\"test\").input_ids))\r\n# outputs: ['[CLS]', '▁test', '[SEP]']\r\n```\r\n\r\n> How does the tokenizer handle the space before/after the special character?\r\n\r\nTo see the token to which each id corresponds I advise you to use the `convert_ids_to_tokens` method instead of `decode`. On the example you shared, you will see that spaces are well included in the following token when they exist:\r\n```python\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer(\"you (are) nice\", return_attention_mask=False, return_token_type_ids=False).input_ids))\r\n# outputs: ['[CLS]', '▁you', '▁(', 'are', ')', '▁nice', '[SEP]']\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer(\"you(are)nice\", return_attention_mask=False, return_token_type_ids=False).input_ids))\r\n# outputs: ['[CLS]', '▁you', '(', 'are', ')', 'nice', '[SEP]']\r\n```\r\n\r\nFor information, the `decode` method is a method that does its best to reconstitute a text from a sequence of token ids produced by a generative model. This task being very complicated I suggest you try to use this method only when it is really necessary (i.e. only for a sequence of token ids produced by a generative model).\r\n\r\nI hope this helped you",
"@SaulLu Got it, thanks a lot for your detailed explanation",
"I'm so glad I could help you! I'm closing this issue as I feel like it answers all your questions. :hugs: ",
"Yep @SaulLu, your answers helped me a lot with the [Feedback Prize - English Language Learning](https://www.kaggle.com/competitions/feedback-prize-english-language-learning/leaderboard) Kaggle competition. I'm in the gold zone now. Many thanks!",
"@SaulLu Got another improvement again. I'm a Prize Contender now! Many thanks!"
] | 1,664
| 1,665
| 1,665
|
NONE
| null |
### System Info
- `transformers` version: 4.22.1
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.8.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
Hi @LysandreJik @SaulLu , I think this issue needs both of you to help or confirm:
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```Python
from transformers import AutoTokenizer, AutoConfig, AutoModel

model_type = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_type)
print(tokenizer.vocab_size) # output: 128000
print(len(tokenizer.vocab)) # output: 128001, the extra one is padding?
config = AutoConfig.from_pretrained(model_type)
print(config.vocab_size) # output: 128100
model = AutoModel.from_pretrained(model_type, config=config)
print(len(model.embeddings.word_embeddings.weight))  # 128100, which is consistent with the config
```
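The gap itself is easy to quantify; a minimal sketch (helper name hypothetical) counting the "spare" embedding rows that have no corresponding tokenizer entry:

```python
def spare_embedding_rows(tokenizer_len: int, embedding_rows: int) -> int:
    """Rows of the word-embedding matrix with no corresponding tokenizer token.

    For deberta-v3 this is intentional (see the microsoft/DeBERTa issue
    referenced in the comments): extra rows are reserved by the model's authors.
    """
    return embedding_rows - tokenizer_len


spare_embedding_rows(128001, 128100)  # 99 unused rows beyond len(tokenizer.vocab)
```

If tokens were actually added to the tokenizer, `model.resize_token_embeddings(len(tokenizer))` is the usual way to bring the two sizes back in line.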
### Expected behavior
The deberta model should have the same vocab_size as its tokenizer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19301/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19300
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19300/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19300/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19300/events
|
https://github.com/huggingface/transformers/issues/19300
| 1,394,772,166
|
I_kwDOCUB6oc5TIojG
| 19,300
|
Trainer with save_total_limit=1 keeps 2 checkpoints when EarlyStoppingCallback is active
|
{
"login": "DavidNemeskey",
"id": 690386,
"node_id": "MDQ6VXNlcjY5MDM4Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/690386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidNemeskey",
"html_url": "https://github.com/DavidNemeskey",
"followers_url": "https://api.github.com/users/DavidNemeskey/followers",
"following_url": "https://api.github.com/users/DavidNemeskey/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidNemeskey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidNemeskey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidNemeskey/subscriptions",
"organizations_url": "https://api.github.com/users/DavidNemeskey/orgs",
"repos_url": "https://api.github.com/users/DavidNemeskey/repos",
"events_url": "https://api.github.com/users/DavidNemeskey/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidNemeskey/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks for the writeup. That's not a bug, but very much intended. This is the only instance in which the `save_total_limit` argument is not fully respected, because:\r\n- we need to keep the best model\r\n- we also need to keep the last model to be able to resume training if a crash happens",
"@sgugger \"If a crash happens\". But what about when the training has ended? The checkpoint is not really needed anymore then, is it?\r\n\r\nIn any case, I could not find anything about this exception in the documentation. If it is not because of my inaptitude at looking up information (which it well may be), it might be worth adding a sentence to the description of the `save_total_limit`.",
"Sure, do you want to make a PR for this?",
"Just to clarify: a PR for what? :smile: ",
"To add the exception to the documentation and/or delete the checkpoint at the end of training.",
"@sgugger I would like to fix this issue, can help on where this fix needs to go ? ",
"@sgugger I think we can close this ticket",
"Thank you guys for taking care of this!"
] | 1,664
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
### System Info
Transformers: 4.21.3
Python: 3.10.4
OS: Linux 5.4.0-91-generic #102-Ubuntu SMP x86_64
Torch: 1.12.1+cu113
GPU: A100 40G
### Who can help?
@sgugger
### Information
As the title says. See below for details.
### Tasks
I trained BERT-based sequence classifiers.
### Reproduction
1. Run a training with `EarlyStoppingCallback` active, `save_total_limit` set to 1 and `early_stopping_patience` set to something (e.g. 3)
2. Run it for a number of iterations greater than `early_stopping_patience`
3. Check the number of checkpoints.
What happens is that if the best performing checkpoint is not the last, both the last and the best checkpoints are kept.
### Expected behavior
With `save_total_limit=1`, I would expect that only the best checkpoint is kept.
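A toy sketch (function name hypothetical) of the rotation behaviour described in the maintainer reply above — the best checkpoint and the latest checkpoint (the resume point) are both protected, so the effective count can exceed `save_total_limit`:

```python
def surviving_checkpoints(checkpoints, best, save_total_limit):
    """Toy model of checkpoint rotation: the best checkpoint and the latest
    checkpoint are never deleted, so up to save_total_limit + 1 checkpoints
    can remain on disk."""
    kept = []
    for latest in checkpoints:
        kept.append(latest)
        # everything except the best and the latest may be rotated out
        deletable = [c for c in kept if c not in (best, latest)]
        while len(kept) > save_total_limit and deletable:
            kept.remove(deletable.pop(0))
    return kept


# Best model found early, training continues: both survive despite limit=1.
surviving_checkpoints(["ckpt-100", "ckpt-200", "ckpt-300"],
                      best="ckpt-100", save_total_limit=1)
# -> ["ckpt-100", "ckpt-300"]
```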
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19300/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19299
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19299/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19299/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19299/events
|
https://github.com/huggingface/transformers/issues/19299
| 1,394,497,369
|
I_kwDOCUB6oc5THldZ
| 19,299
|
Model's internal loss is different than calculated loss at tf.keras.metric.Metric
|
{
"login": "uunal",
"id": 2520197,
"node_id": "MDQ6VXNlcjI1MjAxOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2520197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uunal",
"html_url": "https://github.com/uunal",
"followers_url": "https://api.github.com/users/uunal/followers",
"following_url": "https://api.github.com/users/uunal/following{/other_user}",
"gists_url": "https://api.github.com/users/uunal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uunal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uunal/subscriptions",
"organizations_url": "https://api.github.com/users/uunal/orgs",
"repos_url": "https://api.github.com/users/uunal/repos",
"events_url": "https://api.github.com/users/uunal/events{/privacy}",
"received_events_url": "https://api.github.com/users/uunal/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @uunal, I don't think this is a bug in `transformers`. The main issue is that Keras displays the loss as the average for the whole epoch so far, but your Metric only displays the loss for the most recent batch. This is because you overwrite it with each batch. A better approach might be to store `self.total_loss` and `self.num_batches` and then in `result()` you could return `tf.math.exp(self.total_loss / self.num_batches)`.\r\n\r\nI'm going to close this issue for now, but if you believe you've identified a bug in `transformers`, feel free to re-open it!",
"Thank you for the quick response and your suggestion. I understood the difference in losses.\r\nIf anyone interested later on, adding the updated metric code below:\r\n\r\n```\r\nclass PerplexityMetric(tf.keras.metrics.Metric):\r\n def __init__(self, name=\"perplexity\", **kwargs):\r\n super(PerplexityMetric, self).__init__(name=name, **kwargs)\r\n self.cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(\r\n from_logits=True, reduction=tf.keras.losses.Reduction.NONE\r\n )\r\n self.total_loss = self.add_weight(name=\"pl\", shape=(), initializer=\"zeros\")\r\n self.num_batches = self.add_weight(\r\n name=\"nb\", shape=(), initializer=\"zeros\"\r\n )\r\n\r\n def _calculate_loss(self, real, pred):\r\n unmasked_loss = self.cross_entropy(tf.nn.relu(real), pred)\r\n # make sure only labels that are not equal to -100 affect the loss\r\n loss_mask = tf.cast(real != -100, dtype=unmasked_loss.dtype)\r\n masked_loss = unmasked_loss * loss_mask\r\n reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(loss_mask)\r\n loss_ = tf.reshape(reduced_masked_loss, (1,))\r\n return loss_[-1]\r\n\r\n def update_state(self, y_true, y_pred, sample_weight=None):\r\n loss_ = self._calculate_loss(y_true, y_pred)\r\n # update state variables\r\n self.num_batches.assign_add(1)\r\n self.total_loss.assign_add(loss_)\r\n\r\n def result(self):\r\n # perplexity: subword-level \r\n return tf.math.exp(self.total_loss / self.num_batches)\r\n\r\n def reset_state(self):\r\n # reset at the start of each epoch.\r\n self.total_loss.assign(0.0)\r\n self.num_batches.assign(0)\r\n```"
] | 1,664
| 1,664
| 1,664
|
NONE
| null |
### System Info
- `transformers` version: 4.22.1
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.5
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using tf.keras.Model version of gpt2, and training as casual language modelling.
```
gpt2_model.compile(..., metrics=[PerplexityMetric()]) #using inner loss (hf_compute_loss)
gpt2_model.fit(...)
```
The problem occurs when the metric is calculated. I could not find a way to reuse the inner loss of transformers inside a tf.keras.metrics.Metric (if anyone knows how, it would be good to get rid of the duplicate calculation). So I am calculating the loss as written in def hf_compute_loss in TFCausalLanguageModelingLoss. The PerplexityMetric is:
```
class PerplexityMetric(tf.keras.metrics.Metric):
def __init__(self, name='perplexity', **kwargs):
super(PerplexityMetric, self).__init__(name=name, **kwargs)
self.cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)
self.perplexity = self.add_weight(name='tp', initializer='zeros')
def _calculate_perplexity(self, real, pred):
unmasked_loss = self.cross_entropy(tf.nn.relu(real), pred)
# make sure only labels that are not equal to -100 affect the loss
loss_mask = tf.cast(real != -100, dtype=unmasked_loss.dtype)
masked_loss = unmasked_loss * loss_mask
reduced_masked_loss = tf.reduce_sum(masked_loss) / tf.reduce_sum(
loss_mask
)
loss_ = tf.reshape(reduced_masked_loss, (1,))
perplexity = tf.math.exp(loss_[-1])
return perplexity
def update_state(self, y_true, y_pred, sample_weight=None):
perplexity = self._calculate_perplexity(y_true, y_pred)
self.perplexity.assign(perplexity)
def result(self):
return self.perplexity
def reset_state(self):
# reset at the start of each epoch.
self.perplexity.assign(0.0)
```
### Expected behavior
Both the inner loss calculation and the loss calculation in the metric are the same, so I expect the calculated losses to match in the training output. The last float within brackets is the loss calculated in PerplexityMetric() (_loss__).
```
1/7 [===>..........................] - ETA: 2s - loss: 8.0234 - perplexity: 1708.3318[7.44036245]
2/7 [=======>......................] - ETA: 2s - loss: 8.1277 - perplexity: 1703.3676[7.78889942]
3/7 [===========>..................] - ETA: 1s - loss: 8.2354 - perplexity: 2413.6597[7.75492764]
4/7 [================>.............] - ETA: 1s - loss: 8.3068 - perplexity: 2333.0405[7.91714859]
5/7 [====================>.........] - ETA: 0s - loss: 8.3837 - perplexity: 2743.9358[7.48942709]
6/7 [========================>.....] - ETA: 0s - loss: 8.3680 - perplexity: 1789.0269[7.8069911]
```
I did not understand the difference between the loss values. Is there a wrong calculation in PerplexityMetric, or is this buggy behaviour?
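The gap is consistent with how Keras reports metrics: a minimal pure-Python sketch (with made-up batch losses) showing how an epoch-average perplexity diverges from a perplexity computed on the most recent batch only:

```python
import math

# Hypothetical per-batch cross-entropy losses over one epoch (illustrative values)
batch_losses = [8.02, 8.23, 8.31]

# What Keras-style running metrics report: exp of the epoch-average loss so far
running_perplexity = math.exp(sum(batch_losses) / len(batch_losses))

# What a metric that overwrites its state each batch reports: exp of the last loss only
last_batch_perplexity = math.exp(batch_losses[-1])

print(running_perplexity, last_batch_perplexity)
```

The two values agree only when every batch has the same loss, which is why the displayed `loss` and the bracketed metric value drift apart during training.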
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19299/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19298
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19298/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19298/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19298/events
|
https://github.com/huggingface/transformers/issues/19298
| 1,394,229,331
|
I_kwDOCUB6oc5TGkBT
| 19,298
|
Bug with Training T5 Tokenizers on New Data
|
{
"login": "yangky11",
"id": 5431913,
"node_id": "MDQ6VXNlcjU0MzE5MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5431913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangky11",
"html_url": "https://github.com/yangky11",
"followers_url": "https://api.github.com/users/yangky11/followers",
"following_url": "https://api.github.com/users/yangky11/following{/other_user}",
"gists_url": "https://api.github.com/users/yangky11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangky11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangky11/subscriptions",
"organizations_url": "https://api.github.com/users/yangky11/orgs",
"repos_url": "https://api.github.com/users/yangky11/repos",
"events_url": "https://api.github.com/users/yangky11/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangky11/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @yangky11 ,\r\n\r\nThank you very much for pointing this out! It is indeed a problem!\r\n\r\nTo fix it, we'll have to look into whether we need to pay special attention to the `train_new_from_iterator` method of the `T5TokenizerFast` tokenizer or (even better) whether we really need the sentinel ids to be at the end.",
"@SaulLu Thanks for your response! \r\n\r\nI'm not familiar with every detail of T5. But to my knowledge, the only reason we need them to be at the end is that user code sometimes relies on this feature (as in the example I mentioned). It's also possible that you add APIs to `T5TokenizerFast` to allow the user to query the ids of sentinel tokens. Then the user code can handle them on their side.",
"Also cc @ArthurZucker here FYI",
"@patrickvonplaten @SaulLu I can pick this issue. What should be the approach we need to take ? \r\n",
"Also cc @LysandreJik @sgugger here ",
"Any luck on this?",
"@patrickvonplaten @sgugger Can we decide what approach to take to fix this ? ",
"I think choosing how to resolve this issue requires some discussion.\r\n\r\nI think we should change the example scripts that look for sentinel tokens based on the fact that they are the last tokens in the vocabulary. Rather, it should be based on the fact that they are the tokens in the `additional_special_tokens` attribute (respectively `additional_special_tokens_ids` for their ids).\r\n\r\nHowever, an important thing to know with this suggestion is that even if for T5 tokenizers there is no reason to have any additional special tokens other than sentinel ids, there is still a risk that a user adds a new additional special token that is not a sentinel token and this would lead to a (silent) error. \r\n\r\nWhich leads me to a second opinion, I think this issue shows here that a `breaking change `would be very beneficial to the user experience. Indeed, for T5, sentinel ids have a very important meaning for the model (as much as bos, eos, or sep tokens for bert type models for example) and I think it would be justified to have a dedicated attribute for them (such as `sentinel_tokens`) rather than being in the additional tokens list of `SpecialTokensMixin`. Or, another possibility would be to be able to name the tokens in the list of additional tokens so that we can rely on this naming to retrieve them from the list rather than their position in the list. ",
"@SaulLu I was also thinking along same lines, as a simpler fix remove the pointers of sentinel tokens as the last tokens and then use an additional attribute to get those token. \r\n\r\n@patrickvonplaten @sgugger what do you folks think of this approach ? ",
"As long as it's done in examples that are focused on T5-only (e.g. not the generic `run_summarization`), no problem with me!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nHas this issue been resolved? I see at least the `run_t5_mlm_flax.py` example is still relying on the old behavior."
] | 1,664
| 1,672
| 1,671
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.135-122.509.amzn2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
# Train T5's tokenizer on some new data.
training_corpus = ["12rdpo2rkfp", "$##@sdfag", "ja23m d@#"]
tokenizer = AutoTokenizer.from_pretrained("t5-large")
new_tokenizer = tokenizer.train_new_from_iterator(training_corpus, vocab_size=110)
# Print the vocabulary sequentially.
for i in range(110):
print(new_tokenizer.convert_ids_to_tokens([i])[0])
# You'll see sentinel tokens such as `<extra_id_1>` are NOT at the end of the vocabulary.
```
### Expected behavior
The sentinel tokens in T5 must be at the end of the vocabulary. This constraint is stated in the documentation (e.g., [here](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer.extra_ids)), and official examples are relying on it. The code below is trying to find sentinel tokens from the back of the vocabulary (`len(self.tokenizer) - sentinel_ids`).
https://github.com/huggingface/transformers/blob/5cd16f01db3b5499d4665e8624801ed30ba87bdd/examples/flax/language-modeling/run_t5_mlm_flax.py#L378
However, when I follow the [Hugging Face Course](https://huggingface.co/course/en/chapter6/2?fw=pt) to train T5's tokenizer on new data, the new tokenizer does not conform to this constraint.
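A position-independent way to recover sentinel ids, sketched against a hypothetical toy vocabulary (the robust fix discussed in the comments would rely on the token names or `additional_special_tokens_ids` rather than on the last vocabulary positions):

```python
# Toy vocabulary where sentinel tokens are NOT at the end, as can happen
# after retraining a tokenizer on new data (illustrative, not a real T5 vocab)
vocab = {"a": 0, "<extra_id_1>": 1, "b": 2, "<extra_id_0>": 3, "c": 4}

# Look sentinel tokens up by name instead of assuming they occupy the last ids
sentinel_ids = sorted(i for tok, i in vocab.items() if tok.startswith("<extra_id_"))
print(sentinel_ids)  # [1, 3]
```

Example scripts that compute sentinel ids as `len(tokenizer) - sentinel_ids` would silently pick the wrong tokens on such a vocabulary, which is the failure mode this issue describes.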
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19298/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19297
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19297/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19297/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19297/events
|
https://github.com/huggingface/transformers/issues/19297
| 1,394,209,432
|
I_kwDOCUB6oc5TGfKY
| 19,297
|
Floating point exception (core dumped) when using `transformers.onnx`
|
{
"login": "blakechi",
"id": 56323787,
"node_id": "MDQ6VXNlcjU2MzIzNzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56323787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blakechi",
"html_url": "https://github.com/blakechi",
"followers_url": "https://api.github.com/users/blakechi/followers",
"following_url": "https://api.github.com/users/blakechi/following{/other_user}",
"gists_url": "https://api.github.com/users/blakechi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blakechi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blakechi/subscriptions",
"organizations_url": "https://api.github.com/users/blakechi/orgs",
"repos_url": "https://api.github.com/users/blakechi/repos",
"events_url": "https://api.github.com/users/blakechi/events{/privacy}",
"received_events_url": "https://api.github.com/users/blakechi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @lewtun ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,664
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
#### Code
```bash
python -m transformers.onnx --model=google/long-t5-tglobal-base --feature=seq2seq-lm onnx
```
#### Error Message
```bash
Validating ONNX model...
Floating point exception (core dumped)
```
#### Another Trial
Although the model isn't an official one, it appears in the [document](https://huggingface.co/docs/transformers/model_doc/longt5#transformers.LongT5ForConditionalGeneration.forward.example). It showed the same error message as above.
```bash
python -m transformers.onnx --model=Stancld/longt5-tglobal-large-16384-pubmed-3k_steps --feature=seq2seq-lm onnx
```
### Expected behavior
It shouldn't raise `Floating point exception (core dumped)`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19297/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19296
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19296/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19296/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19296/events
|
https://github.com/huggingface/transformers/issues/19296
| 1,394,109,223
|
I_kwDOCUB6oc5TGGsn
| 19,296
|
AssertionError: Padding_idx must be within num_embeddings MarianModel
|
{
"login": "talhaanwarch",
"id": 37379131,
"node_id": "MDQ6VXNlcjM3Mzc5MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/37379131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talhaanwarch",
"html_url": "https://github.com/talhaanwarch",
"followers_url": "https://api.github.com/users/talhaanwarch/followers",
"following_url": "https://api.github.com/users/talhaanwarch/following{/other_user}",
"gists_url": "https://api.github.com/users/talhaanwarch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talhaanwarch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talhaanwarch/subscriptions",
"organizations_url": "https://api.github.com/users/talhaanwarch/orgs",
"repos_url": "https://api.github.com/users/talhaanwarch/repos",
"events_url": "https://api.github.com/users/talhaanwarch/events{/privacy}",
"received_events_url": "https://api.github.com/users/talhaanwarch/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @talhaanwarch 👋 This seems to be an error in our default value for the size of the embeddings. Thank you for raising it!\r\n\r\nMeanwhile, you should be able to run\r\n```python\r\nfrom transformers import MarianModel, MarianConfig\r\n\r\n# Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration\r\nconfiguration = MarianConfig()\r\nconfiguration.vocab_size = configuration.pad_token_id + 1\r\n\r\n# Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration\r\nmodel = MarianModel(configuration)\r\n```"
] | 1,664
| 1,665
| 1,665
|
NONE
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The code is taken from the official Hugging Face documentation [here](https://huggingface.co/docs/transformers/model_doc/marian#transformers.MarianConfig.example)
```python
from transformers import MarianModel, MarianConfig

# Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration
configuration = MarianConfig()

# Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration
model = MarianModel(configuration)

# Accessing the model configuration
configuration = model.config
```
### Expected behavior
I am trying to initialize MarianModel without using pretrained weights. So instead of this code
```
model_name = "penpen/novel-zh-en"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
I want to use it without a pretrained model. Instead, I want it to be trained from scratch. So I used MarianConfig, but it throws the error:
```
AssertionError: Padding_idx must be within num_embeddings
```
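The assertion comes from the embedding layer: `padding_idx` must be a valid row of the embedding matrix, i.e. `pad_token_id < vocab_size`. A minimal pure-Python restatement of that constraint (the actual check lives in `torch.nn.Embedding`; the numbers below are illustrative):

```python
def check_padding_idx(vocab_size: int, pad_token_id: int) -> bool:
    # Mirrors the torch.nn.Embedding assertion "Padding_idx must be within num_embeddings"
    return 0 <= pad_token_id < vocab_size

print(check_padding_idx(vocab_size=58101, pad_token_id=58100))  # True: last valid row
print(check_padding_idx(vocab_size=50265, pad_token_id=58100))  # False: triggers the error
```

This is why the workaround in the comments, setting `configuration.vocab_size = configuration.pad_token_id + 1`, makes the default configuration instantiable.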
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19296/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19295
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19295/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19295/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19295/events
|
https://github.com/huggingface/transformers/issues/19295
| 1,394,105,275
|
I_kwDOCUB6oc5TGFu7
| 19,295
|
I think the group_by_length feature can be faster using a precomputed length list
|
{
"login": "comchobo",
"id": 65698076,
"node_id": "MDQ6VXNlcjY1Njk4MDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/65698076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/comchobo",
"html_url": "https://github.com/comchobo",
"followers_url": "https://api.github.com/users/comchobo/followers",
"following_url": "https://api.github.com/users/comchobo/following{/other_user}",
"gists_url": "https://api.github.com/users/comchobo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/comchobo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/comchobo/subscriptions",
"organizations_url": "https://api.github.com/users/comchobo/orgs",
"repos_url": "https://api.github.com/users/comchobo/repos",
"events_url": "https://api.github.com/users/comchobo/events{/privacy}",
"received_events_url": "https://api.github.com/users/comchobo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"As explained in the documentation, you should provide the `lengths` column in the dataset. If you use the `Dataset.map` method to build it, you won't need to implement any multiprocessing since it will do it for you. It will also cache the result so you only need to the computation once."
] | 1,664
| 1,665
| 1,665
|
NONE
| null |
### Feature request
The current group_by_length requires a ```lengths``` list that is fully loaded from the dataset:
```
class Trainer:
...
def _get_train_sampler(self) -> Optional[torch.utils.data.Sampler]:
...
if self.args.group_by_length:
if is_datasets_available() and isinstance(self.train_dataset, datasets.Dataset):
lengths = (
self.train_dataset[self.args.length_column_name]
if self.args.length_column_name in self.train_dataset.column_names
else None
)
```
For example, if I want to make the lengths list from the 'attention_mask' column, the ```lengths``` list would look like:
```
[[1,1,1,1,1],[1,1,1,1,1,1,1,1,1],[1,1,1] ...]
```
But if I want to use the length information only, why can't I make this list like:
```
[5,9,3...] # length of each row
```
If I make this 'length file' before training with the group_by_length feature, then the ```lengths``` list can easily be made by:
```
def appender(length):
temp = []
temp.append([1 for _ in range(length)])
return temp
class customized_trainer(Trainer):
def handle_group_by_length(self, filepath=''):
with open(filepath,'r') as f:
lengthsfile=json.load(f) # json or whatsoever
import multiprocessing as mp
pool = mp.Pool(6)
lengths = pool.map(appender, lengthsfile)
lengths = [ent for sublist in lengths for ent in sublist]
self.lengths = lengths
```
```
trainer.handle_group_by_length('file.json') # this would load the length file
```
This reduced the loading time to about one tenth, from 30 minutes to 3 minutes for me.
### Motivation
While trying to train with a ~25GB text dataset, this code required about 30 minutes to load the ```lengths``` list.
I found that loading the full column in the group_by_length feature is the problem.
So I changed the code to load a much smaller dataset (a column-length list) and it worked.
### Your contribution
This would require users to make a 'row length list' file before training, so I'm not entirely sure about this proposal.
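The `Dataset.map` route suggested in the comments amounts to computing one integer per row; a minimal pure-Python sketch of deriving such a lengths list from attention masks (the rows and column name are illustrative):

```python
# Hypothetical tokenized rows, as they would appear in a datasets.Dataset
rows = [
    {"attention_mask": [1, 1, 1, 1, 1]},
    {"attention_mask": [1] * 9},
    {"attention_mask": [1, 1, 1]},
]

# One integer per row -- far smaller to store and load than the full masks
lengths = [sum(row["attention_mask"]) for row in rows]
print(lengths)  # [5, 9, 3]
```

With `datasets`, the same computation could be done once via `Dataset.map` and stored in the column named by `length_column_name`, so the trainer never has to materialize the full attention-mask column.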
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19295/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19294
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19294/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19294/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19294/events
|
https://github.com/huggingface/transformers/issues/19294
| 1,394,086,521
|
I_kwDOCUB6oc5TGBJ5
| 19,294
|
Can't run the same int8 example on a local machine that works in Colab
|
{
"login": "auwsom",
"id": 25093612,
"node_id": "MDQ6VXNlcjI1MDkzNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/25093612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/auwsom",
"html_url": "https://github.com/auwsom",
"followers_url": "https://api.github.com/users/auwsom/followers",
"following_url": "https://api.github.com/users/auwsom/following{/other_user}",
"gists_url": "https://api.github.com/users/auwsom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/auwsom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/auwsom/subscriptions",
"organizations_url": "https://api.github.com/users/auwsom/orgs",
"repos_url": "https://api.github.com/users/auwsom/repos",
"events_url": "https://api.github.com/users/auwsom/events{/privacy}",
"received_events_url": "https://api.github.com/users/auwsom/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I solved this by using \"pip install git+https://github.com/huggingface/transformers.git\" to get the latest transformers package. I installed mine locally in the last month, but things must change that fast in this space. I'm leaving the issue here for others, but closing.",
"I'm getting `\"AttributeError: /home/user/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats\"` now, and don't see any search results for this error.",
"If someone knows about this error above, that would be great. ",
"This error turned out to be from not having CUDA installed properly. PyTorch often comes with its own, but bitsandbytes doesn't use it.",
"Hello,\r\nI stumbled over the same error: \"undefined symbol: cget_col_row_stats\", but I have cuda installed (Stable Diffusion works) \r\nCan you tell me what you mean by \"installing Cuda properly\"?",
"@OWKenobi Well, on my Ubuntu 2204 system, with RTX3060, the highest Cuda available was 11.7, however the \"compatible\" Cuda version is 11.6 (look for cuda116 in the Torch package name). 11.6 however requires a slightly older driver 515.65.01. So I had to remove the one from `ubuntu-drivers autoinstall` and manually install 515. Then had to install Cuda 11.6. Hope that helps.\r\n\r\nThere might be one other AskUbuntu post about this:\r\nhttps://askubuntu.com/questions/1392998/cuda-installation-uncomprehensible-conflicts"
] | 1,664
| 1,672
| 1,665
|
NONE
| null |
### System Info
This also looks like a bug (on consumer hardware: a Dell workstation with 64GB RAM and an RTX 3060 with 12GB of VRAM):
```
Traceback (most recent call last):
File "/home/user/.local/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/user/.local/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
```
### Who can help?
I don't see the Bloom model above, but I'm trying to follow this example:
https://huggingface.co/docs/transformers/perf_infer_gpu_one
using the colab for "BLOOM-3B"
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I'm trying to run the colab: https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing
Which runs fine using the "load_in_8bit=True" arg, but gives this error locally with any of the models. I've tried downloading with huggingface_hub, git lfs clone, and using the normal cache (with the smaller model).
"TypeError: BloomForCausalLM.__init__() got an unexpected keyword argument 'load_in_8bit'"
Somehow AutoModelForCausalLM is passing off to BloomForCausalLM, which is not finding load_in_8bit. I'm going to try installing the latest transformers package.
```
#name = "/home/user/bloom-mnt/huggingface2/models--bigscience--bloom-7b1/snapshots/fdd9eac0805a9fa2d0641982eceda25885251975"
#name = "/home/user/bloom-mnt/huggingface2/models--bigscience--bloom-3b/snapshots/515ae965cc83b9ebbf0054de106c434bd4ec35dc"
name = "/home/user/bloom-mnt/huggingface2/bloom-560m"
#name = "bigscience/bloom-1b7"
text = "Hello my name is"
max_new_tokens = 20
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
#device = ("cpu")
model_8bit = AutoModelForCausalLM.from_pretrained(name, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model_8bit = model_8bit.to(device)
def generate_from_model(model, tokenizer):
encoded_input = tokenizer(text, return_tensors='pt')
output_sequences = model.generate(input_ids=encoded_input['input_ids'].to(device))
return tokenizer.decode(output_sequences[0], skip_special_tokens=True)
print(generate_from_model(model_8bit, tokenizer))
```
### Expected behavior
success
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19294/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19293
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19293/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19293/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19293/events
|
https://github.com/huggingface/transformers/issues/19293
| 1,394,082,729
|
I_kwDOCUB6oc5TGAOp
| 19,293
|
T5 Tokenizer Prepends Space after Each Added (Extra) Token
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"In case it helps with debugging, it returns a different result with `use_fast=False`:\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained('t5-base', use_fast=False)\r\ntokenizer.add_tokens(['<'])\r\ntokenizer.decode(tokenizer('a<=5').input_ids)\r\n# 'a < =5</s>' (notice the space both before and after the `<`)",
"Maybe also of interest to @ArthurZucker ",
"I did some digging, and I believe the culprit (in the non-fast, but potentially fast, tokenizer) is this:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L433\r\n\r\n```\r\n# Make sure we don't split on any special tokens (even they were already in the vocab before e.g. for Albert)\r\nif special_tokens:\r\n if len(new_tokens) == 1:\r\n _insert_one_token_to_ordered_list(self.unique_no_split_tokens, new_tokens[0])\r\n else:\r\n self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(new_tokens)))\r\nelse:\r\n # Or on the newly added tokens\r\n if len(tokens_to_add) == 1:\r\n _insert_one_token_to_ordered_list(self.unique_no_split_tokens, tokens_to_add[0])\r\n else:\r\n self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(tokens_to_add)))\r\nself._create_trie(self.unique_no_split_tokens)\r\n```\r\n\r\nDo you know if that's a fundamental restriction with respect to the models? Or if not, could we potentially expose a flag that disables this behavior?",
"@ankrgyl What is the expected behaviour of decode should be ? ",
"My desired behavior would be that `a<=5` round trips (i.e. encodes and then decodes) to `a<=5`, not `a < =5`",
"@LysandreJik @ArthurZucker I debugged this issue. \r\n\r\nIncase of of input is a a>=5\r\ncalling self.tokens_trie.split() returns ['a>=5']\r\n\r\nIncase of of input is a a<=5\r\ncalling self.tokens_trie.split() returns ['a', '<', '=5']\r\n\r\nHere '<' is not part of the original vocab is added. ",
"Hi @raghavanone just to clarify, that has been known the whole time. The specific issue here is that added tokens (in this case `<`) _cannot_ split words (see the code snippet I pasted above).",
"@ankrgyl Did some more digging, this is being done intentionally when we add new token to tokenizer. Look at _decode function in tokenization_utils.py . I do not think this a bug. \r\n\r\n@ArthurZucker Please validate this understanding.",
"Hey! @raghavanone you are right, this is not a bug! \r\n@ankrgyl you can use the following snippet : \r\n```python \r\n>>> tokenizer.decode(tokenizer('a<=5').input_ids, spaces_between_special_tokens = False)\r\n>>> 'a<=5</s>'\r\n``` \r\n(this works for the slow tokenizer, not for the fast, will have a look) tell me if that fixes your issue! 😄 ",
"@ArthurZucker Just pondering why is slow and fast tokenizer functionally not equal ?",
"Well.. this is not really intended ^^ But mostly the `fast` is an entire library mostly implemented in `rust`, so we must have forgotten to update this argument when adding it to the `transformers` tokenizers. cc @LysandreJik and @SaulLu FYI 🤗 ",
"@ArthurZucker confirmed with both the slow and fast tokenizers (`build_tokenizer()` below is a wrapper function in my code that simply adds `<` as a special token):\r\n\r\n```\r\nIn [2]: tokenizer = model.build_tokenizer()\r\n\r\nIn [3]: tokenizer.decode(tokenizer('a<=5').input_ids, spaces_between_special_tokens = False)\r\nOut[3]: 'a< =5</s>'\r\n\r\nIn [4]: tokenizer = model.build_tokenizer(use_fast=False)\r\n\r\nIn [5]: tokenizer.decode(tokenizer('a<=5').input_ids, spaces_between_special_tokens = False)\r\nOut[5]: 'a<=5</s>'\r\n```",
"Awesome, closing this issue, will open a PR in tokenizers when I have bandwith to try to match the outputs. ",
"@ArthurZucker happy to help with this"
] | 1,664
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
### System Info
```
$ transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.22.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
- Python version: 3.10.7
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('t5-base')
tokenizer.add_tokens(['<']) # '>' is already in the vocab
tokenizer.decode(tokenizer('a>=5').input_ids)
# prints 'a>=5</s>' as expected (no space after >)
tokenizer.decode(tokenizer('a<=5').input_ids)
# prints 'a< =5</s>'
### Expected behavior
There shouldn't be a space after the `<` character.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19293/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19291
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19291/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19291/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19291/events
|
https://github.com/huggingface/transformers/pull/19291
| 1,393,849,513
|
PR_kwDOCUB6oc5AAoeX
| 19,291
|
make more clear fail on numpy tensor in marian
|
{
"login": "kventinel",
"id": 14203222,
"node_id": "MDQ6VXNlcjE0MjAzMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kventinel",
"html_url": "https://github.com/kventinel",
"followers_url": "https://api.github.com/users/kventinel/followers",
"following_url": "https://api.github.com/users/kventinel/following{/other_user}",
"gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kventinel/subscriptions",
"organizations_url": "https://api.github.com/users/kventinel/orgs",
"repos_url": "https://api.github.com/users/kventinel/repos",
"events_url": "https://api.github.com/users/kventinel/events{/privacy}",
"received_events_url": "https://api.github.com/users/kventinel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19291). All of your documentation changes will be reflected on that endpoint."
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19291/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19291",
"html_url": "https://github.com/huggingface/transformers/pull/19291",
"diff_url": "https://github.com/huggingface/transformers/pull/19291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19291.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19290
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19290/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19290/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19290/events
|
https://github.com/huggingface/transformers/issues/19290
| 1,393,806,212
|
I_kwDOCUB6oc5TE8uE
| 19,290
|
ValueError: The following `model_kwargs` are not used by the model: ['length']
|
{
"login": "CrackerHax",
"id": 6037535,
"node_id": "MDQ6VXNlcjYwMzc1MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6037535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrackerHax",
"html_url": "https://github.com/CrackerHax",
"followers_url": "https://api.github.com/users/CrackerHax/followers",
"following_url": "https://api.github.com/users/CrackerHax/following{/other_user}",
"gists_url": "https://api.github.com/users/CrackerHax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CrackerHax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CrackerHax/subscriptions",
"organizations_url": "https://api.github.com/users/CrackerHax/orgs",
"repos_url": "https://api.github.com/users/CrackerHax/repos",
"events_url": "https://api.github.com/users/CrackerHax/events{/privacy}",
"received_events_url": "https://api.github.com/users/CrackerHax/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @CrackerHax,\r\n\r\nThanks for the issue. \r\n\r\nCould I ask you to share also the lines of code that you used to initialize the model, tokenizer and user_input which work with the `4.21.0` version of transformers and not the newer versions?",
"FYI, I'm also triggering this error when I use the latest transformers, when running the [BLIP captioning model example](https://github.com/salesforce/LAVIS) on the saleforce lavis library which uses transformers:\r\n\r\n``` py\r\nimport torch\r\nfrom lavis.models import load_model_and_preprocess\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n# loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.\r\n# this also loads the associated image processors\r\nmodel, vis_processors, _ = load_model_and_preprocess(name=\"blip_caption\", model_type=\"base_coco\", is_eval=True, device=device)\r\n# preprocess the image\r\n# vis_processors stores image transforms for \"train\" and \"eval\" (validation / testing / inference)\r\nimage = vis_processors[\"eval\"](raw_image).unsqueeze(0).to(device)\r\n# generate caption\r\nmodel.generate({\"image\": image})\r\n# ['a large fountain spewing water into the air']\r\n```\r\n\r\nIn my case I was getting the following error:\r\n\r\n```\r\n The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)\r\n```",
"> Hi @CrackerHax,\r\n> \r\n> Thanks for the issue.\r\n> \r\n> Could I ask you to share also the lines of code that you used to initialize the model, tokenizer and user_input which work with the `4.21.0` version of transformers and not the newer versions?\r\n\r\nconfig = transformers.GPTJConfig.from_pretrained(\"./gpt-j-6B-8bit)\r\ntokenizer = AutoTokenizer.from_pretrained(\"./gpt-j-6B-8bit\",add_prefix_space=True)\r\nvocab = tokenizer.get_vocab()\r\ngpt = GPTJForCausalLM.from_pretrained(\"./gpt-j-6B-8bit\",low_cpu_mem_usage=True)\r\n\r\nprompt = tokenizer(user_input, return_tensors='pt', return_length=True)\r\nprompt = {key: value.to(device) for key, value in prompt.items()}\r\nout = gpt.generate(**prompt, max_length=max_length, do_sample=True, temperature=temperature, repetition_penalty=2.0)\r\n out = tokenizer.decode(out[0], skip_special_tokens=True)",
"> FYI, I'm also triggering this error when I use the latest transformers, when running the [BLIP captioning model example](https://github.com/salesforce/LAVIS) on the saleforce lavis library which uses transformers:\r\n> \r\n> ```python\r\n> import torch\r\n> from lavis.models import load_model_and_preprocess\r\n> device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n> # loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.\r\n> # this also loads the associated image processors\r\n> model, vis_processors, _ = load_model_and_preprocess(name=\"blip_caption\", model_type=\"base_coco\", is_eval=True, device=device)\r\n> # preprocess the image\r\n> # vis_processors stores image transforms for \"train\" and \"eval\" (validation / testing / inference)\r\n> image = vis_processors[\"eval\"](raw_image).unsqueeze(0).to(device)\r\n> # generate caption\r\n> model.generate({\"image\": image})\r\n> # ['a large fountain spewing water into the air']\r\n> ```\r\n> \r\n> In my case I was getting the following error:\r\n> \r\n> ```\r\n> The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)\r\n> ```\r\n\r\nSame here. Can anyone help fix this?",
"@zzxslp \r\n\r\nChange these line at https://github.com/salesforce/BLIP/blob/main/models/med.py#L932 as following:\r\n\r\nfrom\r\n```python\r\n def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs):\r\n input_shape = input_ids.shape\r\n # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\r\n if attention_mask is None:\r\n attention_mask = input_ids.new_ones(input_shape)\r\n\r\n # cut decoder_input_ids if past is used\r\n if past is not None:\r\n input_ids = input_ids[:, -1:]\r\n\r\n return {\r\n \"input_ids\": input_ids, \r\n \"attention_mask\": attention_mask, \r\n \"past_key_values\": past,\r\n \"encoder_hidden_states\": model_kwargs.get(\"encoder_hidden_states\", None),\r\n \"encoder_attention_mask\": model_kwargs.get(\"encoder_attention_mask\", None),\r\n \"is_decoder\": True,\r\n }\r\n```\r\n\r\nto\r\n```python\r\n def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, **model_kwargs):\r\n input_shape = input_ids.shape\r\n # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\r\n if attention_mask is None:\r\n attention_mask = input_ids.new_ones(input_shape)\r\n\r\n # cut decoder_input_ids if past is used\r\n if past is not None:\r\n input_ids = input_ids[:, -1:]\r\n\r\n return {\r\n \"input_ids\": input_ids, \r\n \"attention_mask\": attention_mask, \r\n \"past_key_values\": past,\r\n \"encoder_hidden_states\": encoder_hidden_states,\r\n \"encoder_attention_mask\": encoder_attention_mask,\r\n \"is_decoder\": True,\r\n }\r\n```\r\n\r\nWhy: https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/generation_utils.py#L899",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still needs to be addressed",
"Would you like to take a look at this @ArthurZucker?",
"Sure! \r\n",
"@ArthurZucker I can look at this issue, But I am setup for reproduction due to size of model, how do you folks setup for reproducing bug for larger models ? ",
"Well, in that case I just downloaded the model. You can usually initialize a random tiny model using a different configuration. \r\nCould you tell me why you need to set `return_length=True`. This has the effect of adding `length` to the list of inputs, and is not useful when generating. This argument is mostly used when training a fast tokenizer.",
"> Well, in that case I just downloaded the model. You can usually initialize a random tiny model using a different configuration. Could you tell me why you need to set `return_length=True`. This has the effect of adding `length` to the list of inputs, and is not useful when generating. This argument is mostly used when training a fast tokenizer.\r\n\r\nYes, in my use case I need the length when generating. Unless you have a really good reason, it shouldn't be removed but fixed.",
"The thing is that the `generate` function does not take the `length` argument, and there is no reason to add it as it is not used. Which is why I don't understand why you would use it? 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> The thing is that the `generate` function does not take the `length` argument, and there is no reason to add it as it is not used. Which is why I don't understand why you would use it? 😉\r\n\r\nYes it does, it takes min and max length as arguments. The only other option is to use an older version of huggingface. Why would it work in earlier versions and not now? This is useful for adjusting max_length dynamically such as:\r\n\r\n```\r\n max_t=max_length+prompt['length'][0]\r\n if(max_length<min_length):\r\n max_length=min_length\r\n prompt = {key: value.to(device) for key, value in prompt.items()}\r\n out = gpt.generate(**prompt, min_length=min_length, max_length=max_t, do_sample=do_sample)\r\n```",
"> > FYI, I'm also triggering this error when I use the latest transformers, when running the [BLIP captioning model example](https://github.com/salesforce/LAVIS) on the saleforce lavis library which uses transformers:\r\n> > ```python\r\n> > import torch\r\n> > from lavis.models import load_model_and_preprocess\r\n> > device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n> > # loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.\r\n> > # this also loads the associated image processors\r\n> > model, vis_processors, _ = load_model_and_preprocess(name=\"blip_caption\", model_type=\"base_coco\", is_eval=True, device=device)\r\n> > # preprocess the image\r\n> > # vis_processors stores image transforms for \"train\" and \"eval\" (validation / testing / inference)\r\n> > image = vis_processors[\"eval\"](raw_image).unsqueeze(0).to(device)\r\n> > # generate caption\r\n> > model.generate({\"image\": image})\r\n> > # ['a large fountain spewing water into the air']\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > In my case I was getting the following error:\r\n> > ```\r\n> > The following `model_kwargs` are not used by the model: ['encoder_hidden_states', 'encoder_attention_mask'] (note: typos in the generate arguments will also show up in this list)\r\n> > ```\r\n> \r\n> Same here. Can anyone help fix this?\r\n\r\ntransformers==4.25.1 will be ok?",
"No, the `generate` function take a `max_length` argument, and almost always has. \r\nWe don't adapt the library to external ones like `lavis` so this will not be adressed",
"I feel like you did not read my reply. I KNOW the generate function uses max_length.",
"Oh sorry, I was just responding to the question of whether we will change this or not! \r\nTHe `max_new_token` should do what you want, it takes into account the length of the input"
] | 1,664
| 1,687
| 1,674
|
NONE
| null |
### System Info
4.22.2
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
prompt = tokenizer(user_input, return_tensors='pt', return_length=True)
prompt = {key: value.to(device) for key, value in prompt.items()}
out = gpt.generate(**prompt, ...)
```
When using "return_length=True" with the tokenizer, the error is given. This is from a change in a recent version and did not happen in older versions.
`ValueError: The following `model_kwargs` are not used by the model: ['length'] (note: typos in the generate arguments will also show up in this list)`
### Expected behavior
Model should not produce an error when "return_length" is set to True
Downgrade to 4.21.0 fixes the problem and according to my googling this is what people are doing
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19290/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19290/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19289
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19289/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19289/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19289/events
|
https://github.com/huggingface/transformers/issues/19289
| 1,393,797,328
|
I_kwDOCUB6oc5TE6jQ
| 19,289
|
Call to pipeline.predict() fails
|
{
"login": "s-udhaya",
"id": 2215597,
"node_id": "MDQ6VXNlcjIyMTU1OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2215597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s-udhaya",
"html_url": "https://github.com/s-udhaya",
"followers_url": "https://api.github.com/users/s-udhaya/followers",
"following_url": "https://api.github.com/users/s-udhaya/following{/other_user}",
"gists_url": "https://api.github.com/users/s-udhaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s-udhaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-udhaya/subscriptions",
"organizations_url": "https://api.github.com/users/s-udhaya/orgs",
"repos_url": "https://api.github.com/users/s-udhaya/repos",
"events_url": "https://api.github.com/users/s-udhaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/s-udhaya/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @s-udhaya ,\r\n\r\nThe code you're referring to is very old in the codebase and was created for compat with stuff like scikit-learn, over which I have very little knowledge.\r\n\r\nThe recommended way to call the pipeline is to do;\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"text-classification\")\r\nprint(pipe(\"This restaurant is awesome\"))\r\n```\r\n\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\ndef dataset():\r\n for i in range(1000):\r\n # Load from somewhere, a dataset, some file etc..\r\n yield \"This restaurant is awesome\"\r\n \r\npipe = pipeline(\"text-classification\")\r\nfor out in pipe(dataset()):\r\n print(out)\r\n```\r\n\r\nOfc you can send a list if you want, but it's not going to be used for batching any way (batching is an orthogonal concept in pipelines which you activate by using the parameter `pipeline(..., batch_size=n)`)",
"Hi @Narsil ,\r\n\r\nMany thanks for the detailed response. Actually I am in the process of building a dedicated [Mlflow flavor for transformers](https://github.com/s-udhaya/mlflow/tree/add-transformers-flavor/mlflow/transformers) and I am heavily using this awesome pipeline abstraction as this drastically reduces the complexity of my implementation. \r\n\r\nAs you could see [here](https://github.com/s-udhaya/mlflow/blob/add-transformers-flavor/examples/transformers/finetune_trainer_xlm_autolog.py), I use the recommended way to call the pipeline so far. However in order to have a unified interface across all mlflow model flavours, It is essential that the predict function to work. As far as I understand, this [fix](https://github.com/huggingface/transformers/compare/main...s-udhaya:transformers:fix_pipeline_predict#diff-441f558737166b045444da9c4be81f566b3d69054e8f20e288aed746a691fa61R845) should resolve the issue. Would you be kind enough to have a look at it, if you have time or redirect me to someone who can assist me in here?\r\n\r\nThanks again.",
"> As far as I understand, this [fix](https://github.com/huggingface/transformers/compare/main...s-udhaya:transformers:fix_pipeline_predict#diff-441f558737166b045444da9c4be81f566b3d69054e8f20e288aed746a691fa61R845) should resolve the issue.\r\n\r\nThe problem is that because the purpose of this code as since been lost, we would be very hesitant to change it because of our commitment to never make breaking changes.\r\n\r\n> However in order to have a unified interface across all mlflow model flavours, It is essential that the predict function to work. \r\n\r\nIf that is the core of the issue (which sounds odd) can't you just wrap the pipeline in another dummy object that just uses `predict` to use `__call__` ?\r\n\r\nYou can ofc open a PR with your proposed change and motivations. We will study it, and let others chime in if there is a good reason for the current code.",
"Hi @Narsil ,\r\n\r\n\r\n\r\n> If that is the core of the issue (which sounds odd) can't you just wrap the pipeline in another dummy object that just uses `predict` to use `__call__` ?\r\n\r\nThis is exactly what happens in Pipeline's predict method. I would rather prefer to fix the issue at the source than creating a workaround.\r\n\r\n\r\n\r\n> The problem is that because the purpose of this code as since been lost, we would be very hesitant to change it because of our commitment to never make breaking changes.\r\n\r\nI understand your concern regarding breaking changes. however I believe, this change is not going to be a breaking change, and the predict function is not usable in its current state.\r\n\r\nI will go ahead and create a PR. Prior to that I will run all the tests to make sure, the fix does not introduce any breaking changes. Let us see how it rolls :)\r\n\r\nThanks for the response.",
"> I will go ahead and create a PR. Prior to that I will run all the tests to make sure, the fix does not introduce any breaking changes. Let us see how it rolls :)\r\n\r\nThanks ! Again as I said, this is very old code maybe it broke a long time ago already and your PR is good (and we probably need to add a test to make sure we don't break again).",
"@Narsil Thanks for your input. I have created a PR with the fix and added few basic tests to make sure we don't break again. Please have a look at it whenever you have time and feel free to propose changes if necessary."
] | 1,664
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Execute the following piece of code resulted in an exception that is pasted below.
```python
from transformers import pipeline
pipe = pipeline("text-classification")
print(pipe.predict(["This restaurant is awesome"]))
```
Exception:
```
Traceback (most recent call last):
File "pipeline_test.py", line 5, in <module>
print(pipe.predict(["This restaurant is awesome"]))
File "miniconda3/envs/mlflow-py3.9/lib/python3.9/site-packages/transformers/pipelines/base.py", line 840, in predict
return self(X=X)
File "miniconda3/envs/mlflow-py3.9/lib/python3.9/site-packages/transformers/pipelines/text_classification.py", line 138, in __call__
result = super().__call__(*args, **kwargs)
TypeError: __call__() missing 1 required positional argument: 'inputs'
```
### Expected behavior
Successful predictions as shown below
```
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
### Proposed fix
I dig a bit deeper into the implementation based on the exception and found out that this [change](https://github.com/huggingface/transformers/compare/main...s-udhaya:transformers:fix_pipeline_predict#diff-441f558737166b045444da9c4be81f566b3d69054e8f20e288aed746a691fa61R845) fixes the issue. If this indeed a fix, I am happy to create a PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19289/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19288
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19288/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19288/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19288/events
|
https://github.com/huggingface/transformers/pull/19288
| 1,393,736,431
|
PR_kwDOCUB6oc5AASyZ
| 19,288
|
docker-build: Update actions/checkout to v3
|
{
"login": "Sushrut1101",
"id": 75667593,
"node_id": "MDQ6VXNlcjc1NjY3NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/75667593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sushrut1101",
"html_url": "https://github.com/Sushrut1101",
"followers_url": "https://api.github.com/users/Sushrut1101/followers",
"following_url": "https://api.github.com/users/Sushrut1101/following{/other_user}",
"gists_url": "https://api.github.com/users/Sushrut1101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sushrut1101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sushrut1101/subscriptions",
"organizations_url": "https://api.github.com/users/Sushrut1101/orgs",
"repos_url": "https://api.github.com/users/Sushrut1101/repos",
"events_url": "https://api.github.com/users/Sushrut1101/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sushrut1101/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR @Sushrut1101! I'll ping Yih-Dar @ydshieh for review but he's off for the week; he'll review when he's back! Thanks :)",
"> Thanks for your PR @Sushrut1101! I'll ping Yih-Dar @ydshieh for review but he's off for the week; he'll review when he's back! Thanks :)\r\n\r\n\r\n\r\n> Actually checked and it's fine for me; thanks for your PR @Sushrut1101!\r\n\r\nYou're welcome! 👍😃"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
For hacktoberfest 2022 :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19288/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19288",
"html_url": "https://github.com/huggingface/transformers/pull/19288",
"diff_url": "https://github.com/huggingface/transformers/pull/19288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19288.patch",
"merged_at": 1664893613000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19287
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19287/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19287/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19287/events
|
https://github.com/huggingface/transformers/pull/19287
| 1,393,728,640
|
PR_kwDOCUB6oc5AARRA
| 19,287
|
fix MarianMT conversion to ONNX
|
{
"login": "kventinel",
"id": 14203222,
"node_id": "MDQ6VXNlcjE0MjAzMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kventinel",
"html_url": "https://github.com/kventinel",
"followers_url": "https://api.github.com/users/kventinel/followers",
"following_url": "https://api.github.com/users/kventinel/following{/other_user}",
"gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kventinel/subscriptions",
"organizations_url": "https://api.github.com/users/kventinel/orgs",
"repos_url": "https://api.github.com/users/kventinel/repos",
"events_url": "https://api.github.com/users/kventinel/events{/privacy}",
"received_events_url": "https://api.github.com/users/kventinel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Can you please confirm that the slow tests pass with your change on convert.py, ie run this\r\n\r\n6 passed, 426 deselected, 35 warnings"
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19283
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19287/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19287",
"html_url": "https://github.com/huggingface/transformers/pull/19287",
"diff_url": "https://github.com/huggingface/transformers/pull/19287.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19287.patch",
"merged_at": 1665407489000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19286
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19286/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19286/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19286/events
|
https://github.com/huggingface/transformers/pull/19286
| 1,393,712,463
|
PR_kwDOCUB6oc5AAOJD
| 19,286
|
make small fixes in logging
|
{
"login": "kventinel",
"id": 14203222,
"node_id": "MDQ6VXNlcjE0MjAzMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kventinel",
"html_url": "https://github.com/kventinel",
"followers_url": "https://api.github.com/users/kventinel/followers",
"following_url": "https://api.github.com/users/kventinel/following{/other_user}",
"gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kventinel/subscriptions",
"organizations_url": "https://api.github.com/users/kventinel/orgs",
"repos_url": "https://api.github.com/users/kventinel/repos",
"events_url": "https://api.github.com/users/kventinel/events{/privacy}",
"received_events_url": "https://api.github.com/users/kventinel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> Hello! What problem does that solve? :)\r\n\r\nFix style of imports and froze constant dict to save it unmodified and preserve from errors.",
"@LysandreJik, can you help me please with imports problems in tests?",
"I am not sure we want to accept this PR, unfortunately I do not understand what problem it fixes. It seems to me that this already works well, do you have an example of where something would fail to do its intended purpose? Thanks!",
"> I am not sure we want to accept this PR, unfortunately I do not understand what problem it fixes. It seems to me that this already works well, do you have an example of where something would fail to do its intended purpose? Thanks!\r\n\r\nFrozendict -- its some good for python coding constraint, that dissalow chanding of dict. So its can help to avoid bags in future.\r\n\r\nAnd some minor code style changes to rid from NOQA",
"@LysandreJik, ping",
"Thanks for your PR @kventinel, but I don't think this fixes any real issue so we will not be merging this PR as it is. Thanks."
] | 1,664
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix some typing errors in logging file
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19286/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19286",
"html_url": "https://github.com/huggingface/transformers/pull/19286",
"diff_url": "https://github.com/huggingface/transformers/pull/19286.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19286.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19285
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19285/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19285/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19285/events
|
https://github.com/huggingface/transformers/issues/19285
| 1,393,676,703
|
I_kwDOCUB6oc5TEdGf
| 19,285
|
A developer environment, for faster and efficient contributions.
|
{
"login": "AnirudhDaya",
"id": 94374523,
"node_id": "U_kgDOBaAKew",
"avatar_url": "https://avatars.githubusercontent.com/u/94374523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnirudhDaya",
"html_url": "https://github.com/AnirudhDaya",
"followers_url": "https://api.github.com/users/AnirudhDaya/followers",
"following_url": "https://api.github.com/users/AnirudhDaya/following{/other_user}",
"gists_url": "https://api.github.com/users/AnirudhDaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnirudhDaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnirudhDaya/subscriptions",
"organizations_url": "https://api.github.com/users/AnirudhDaya/orgs",
"repos_url": "https://api.github.com/users/AnirudhDaya/repos",
"events_url": "https://api.github.com/users/AnirudhDaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnirudhDaya/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @AnirudhDaya, what difference is there with https://github.dev/huggingface/transformers ?\r\n\r\nThank you!",
"Sorry my bad. It didn't catch my eye.",
"No worries :)"
] | 1,664
| 1,664
| 1,664
|
NONE
| null |
### Feature request
Hello maintainers.
I would like to add gitpod to your repo to help beginners contribute.
- Gitpod is an online IDE which can be launched from any GitHub page. Within seconds, Gitpod provides a fully working development environment, including a VS Code-powered IDE and a cloud-based Linux container explicitly configured for the project
- Gitpod is highly contextual, such that it opens the IDE in the correct mode depending on the context:
- If you are looking at a particular file of a certain commit on GitHub, starting a Gitpod workspace will check out the right version and open the file you’ve been looking at in the IDE.
- Starting a Gitpod workspace from an issue will automatically create a branch and preconfigure the commit message.
- Once you are in the IDE, you can interact with GitHub in various ways. Besides the obvious Git integration, you can do things like commenting inline in editors, approving and even merging PRs.
### Motivation
I have seen many repos adopting gitpod to help new contributors in their open-source journey. It's hassle-free. You don't need to worry about dependencies not being present, since gitpod does all the heavy lifting.
### Your contribution
I can add a badge to your repo in the [CONTRIBUTING.md](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) file similar to this one:
<a href="https://gitpod.io/#<your-repository-url>">
<img
src="https://img.shields.io/badge/Contribute%20with-Gitpod-908a85?logo=gitpod"
alt="Contribute with Gitpod"
/>
</a>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19285/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19284
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19284/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19284/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19284/events
|
https://github.com/huggingface/transformers/pull/19284
| 1,393,636,509
|
PR_kwDOCUB6oc4___xv
| 19,284
|
Added type hints for TF: rag model
|
{
"login": "debjit-bw",
"id": 68442560,
"node_id": "MDQ6VXNlcjY4NDQyNTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/68442560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/debjit-bw",
"html_url": "https://github.com/debjit-bw",
"followers_url": "https://api.github.com/users/debjit-bw/followers",
"following_url": "https://api.github.com/users/debjit-bw/following{/other_user}",
"gists_url": "https://api.github.com/users/debjit-bw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/debjit-bw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debjit-bw/subscriptions",
"organizations_url": "https://api.github.com/users/debjit-bw/orgs",
"repos_url": "https://api.github.com/users/debjit-bw/repos",
"events_url": "https://api.github.com/users/debjit-bw/events{/privacy}",
"received_events_url": "https://api.github.com/users/debjit-bw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 can you take a look at this? I just added type hints and fixed a syntax error in one previous type. Still it shows it failed these 2 tests. I have no idea what these tests do, can you shed some light on this issue?",
"Thanks a lot @Rocketknight1, I have made the suggested changes!",
"Looks perfect now, thank you!"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
Based on Issue https://github.com/huggingface/transformers/issues/16059
Type hints for the [TFRagModel](https://huggingface.co/docs/transformers/model_doc/rag) have been added.
@Rocketknight1 Could you please check the changes and merge if it's fine?
Thanks a lot.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19284/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19284",
"html_url": "https://github.com/huggingface/transformers/pull/19284",
"diff_url": "https://github.com/huggingface/transformers/pull/19284.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19284.patch",
"merged_at": 1664891796000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19283
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19283/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19283/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19283/events
|
https://github.com/huggingface/transformers/issues/19283
| 1,393,630,331
|
I_kwDOCUB6oc5TERx7
| 19,283
|
python -m transformers.onnx --model=Helsinki-NLP/opus-mt-en-zh onnx/
|
{
"login": "chaodreaming",
"id": 49591435,
"node_id": "MDQ6VXNlcjQ5NTkxNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/49591435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaodreaming",
"html_url": "https://github.com/chaodreaming",
"followers_url": "https://api.github.com/users/chaodreaming/followers",
"following_url": "https://api.github.com/users/chaodreaming/following{/other_user}",
"gists_url": "https://api.github.com/users/chaodreaming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chaodreaming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaodreaming/subscriptions",
"organizations_url": "https://api.github.com/users/chaodreaming/orgs",
"repos_url": "https://api.github.com/users/chaodreaming/repos",
"events_url": "https://api.github.com/users/chaodreaming/events{/privacy}",
"received_events_url": "https://api.github.com/users/chaodreaming/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Also reproduced on mac",
"Fast fix: run with `--atol 1e-4`.",
"The --atol 1e-4 method can be run, but the running result seems incorrect",
"I think it's not a big difference for such big NN. In ouputs you have values much greater than 1e-4, so problem probably in some other place.",
"After exporting, the data dimension is incorrect. Can you give me a code? Thank you very much",
"> Can you give me a code?\r\n\r\nAll code is here:)\r\n\r\nWhy do you think, that problem in dimensions? They realy checked [here](https://github.com/huggingface/transformers/blob/main/src/transformers/onnx/convert.py#L430) \r\n",
"from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom transformers.models.marian import MarianOnnxConfig\r\n\r\nmodel_ckpt = \"Helsinki-NLP/opus-mt-en-de\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_ckpt)\r\nref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)\r\n# Export model\r\nfeature = \"seq2seq-lm\"\r\nonnx_path = f\"onnx/{model_ckpt}-{feature}/\"\r\n# Run this from a Jupyter notebook\r\n!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}\r\n# Test export with inputs\r\nbatch_size = 4\r\nencoder_inputs = tokenizer(\r\n [\"Studies have been shown that owning a dog is good for you\"] * batch_size,\r\n return_tensors=\"np\",\r\n)\r\ndecoder_inputs = tokenizer(\r\n [\"Studien haben gezeigt dass es hilfreich ist einen Hund zu besitzen\"]\r\n * batch_size,\r\n return_tensors=\"np\",\r\n)\r\nall_inputs = {\r\n \"input_ids\": encoder_inputs[\"input_ids\"],\r\n \"attention_mask\": encoder_inputs[\"attention_mask\"],\r\n \"decoder_input_ids\": decoder_inputs[\"input_ids\"],\r\n \"decoder_attention_mask\": decoder_inputs[\"attention_mask\"],\r\n}\r\n# Generate ONNX outputs\r\nort_session = ort.InferenceSession(f\"{onnx_path}model.onnx\")\r\nonnx_config = MarianOnnxConfig(ref_model.config, task=feature)\r\nonnx_named_outputs = list(onnx_config.outputs.keys())\r\nonnx_outputs = ort_session.run(onnx_named_outputs, all_inputs)",
"How to get results",
"So... And what your problem here?",
"How to get text",
"The code is not exactly the same, but the problem is the same. The other is Marian",
"The result is a four-dimensional tensor, but no matter how it is processed, it cannot be decoded to get the correct translation, so I think the result is incorrect",
"4 dims = number_of_outputs x batch_size x n_words x embedding",
"The dimensions can still be solved, but the decoder input is incredible, it is obvious that this code knows the result in advance, it is impossible for me to predict the result in advance when I translate",
"from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom onnxruntime import InferenceSession\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(\"opus-mt-en-zh\")\r\nsession = InferenceSession(\"opus-mt-en-zh-onnx-301/model.onnx\")\r\ninputs = tokenizer(\"Using DistilBERT with ONNX Runtime!\", return_tensors=\"pt\")\r\noutputs = session.run(output_names=[\"last_hidden_state\"], input_feed=dict(inputs))",
"Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/xieyouxi/anaconda3/envs/HuggingFace-torch-gpu/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 196, in run\r\n raise ValueError(\"Model requires {} inputs. Input Feed contains {}\".format(num_required_inputs, num_inputs))\r\nValueError: Model requires 4 inputs. Input Feed contains 2",
"https://github.com/huggingface/transformers/issues/18518",
"@CatchDr You did not provide decoder inputs, hence the error message. Have you tried to do what is suggested in #18518?",
"> \r\n\r\nHe and I are actually a problem, he this also did not solve the",
"> > \r\n> \r\n> He and I are actually a problem, he this also did not solve the\r\n\r\nThe code snippet you shared and that fails does not do what is suggested in #18518. Could you try the following?\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom onnxruntime import InferenceSession\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(\"opus-mt-en-zh\")\r\nsession = InferenceSession(\"opus-mt-en-zh-onnx-301/model.onnx\")\r\ninputs = tokenizer(\"Using DistilBERT with ONNX Runtime!\", return_tensors=\"pt\")\r\ninputs[\"decoder_input_ids\"] = torch.tensor([0], dtype=torch.long)\r\ninputs[\"decoder_attention_mask\"] = torch.tensor([1], dtype=torch.long)\r\noutputs = session.run(output_names=[\"last_hidden_state\"], input_feed=dict(inputs))\r\n```",
"Please wait, about 10 minutes.",
"from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom transformers.models.marian import MarianOnnxConfig\r\nimport onnxruntime as ort \r\nmodel_ckpt = \"Helsinki-NLP/opus-mt-en-zh\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_ckpt)\r\nref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)\r\n# Export model\r\nfeature = \"seq2seq-lm\"\r\nonnx_path = f\"onnx/{model_ckpt}-{feature}/\"\r\n# Run this from a Jupyter notebook\r\n!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}\r\n\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom onnxruntime import InferenceSession\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\")\r\nsession = InferenceSession(\"onnx/Helsinki-NLP/opus-mt-en-zh/model.onnx\")\r\ninputs = tokenizer(\"Using DistilBERT with ONNX Runtime!\", return_tensors=\"pt\")\r\ninputs[\"decoder_input_ids\"] = torch.tensor([0], dtype=torch.long)\r\ninputs[\"decoder_attention_mask\"] = torch.tensor([1], dtype=torch.long)\r\noutputs = session.run(output_names=[\"last_hidden_state\"], input_feed=dict(inputs))\r\noutputs\r\n\r\nVery sorry, as a rookie, said many times to describe clearly, here is all the code, run it",
"\r\nI tried to make some changes, but the dimensions seem to be incorrect again",
"@CatchDr The result you get is correct. Some post-processing is necessary to generate the whole sentence. If you just want to convert your model to the ONNX format and translate sentences, I suggest you to use Optimum. It will do all the generation work for you. For example:\r\n```python\r\nfrom transformers import AutoTokenizer, pipeline\r\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n\r\nmodel = ORTModelForSeq2SeqLM.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\", from_transformers=True)\r\ntokenizer = AutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\")\r\n\r\nonnx_translation = pipeline(\"translation_en_to_zh\", model=model, tokenizer=tokenizer)\r\n\r\nresult = onnx_translation(\"Using DistilBERT with ONNX Runtime!\")\r\n```",
"I would like to know how to post-process to get the right result, because I have tried this solution you mentioned and the result is very poor",
"text=\"Vehicle detection technology is of great significance for realizing automatic monitoring and AI-assisted driving systems. The state-of-the-art object detection method, namely, a class of YOLOv5, has often been used to detect vehicles. However, it suffers some challenges, such as a high computational load and undesirable detection rate. To address these issues, an improved lightweight YOLOv5 method is proposed for vehicle detection in this paper. In the presented method, C3Ghost and Ghost modules are introduced into the YOLOv5 neck network to reduce the floating-point operations (FLOPs) in the feature channel fusion process and enhance the feature expression performance. A convolutional block attention module (CBAM) is introduced to the YOLOv5 backbone network to select the information critical to the vehicle detection task and suppress uncritical information, thus improving the detection accuracy of the algorithm. Furthermore, CIoU_Loss is considered the bounding box regression loss function to accelerate the bounding box regression rate and improve the localization accuracy of the algorithm. To verify the performance of the proposed approach, we tested our model via two case studies, i.e., the PASCAL VOC dataset and MS COCO dataset. The results show that the detection precision of the proposed model increased 3.2%, the FLOPs decreased 15.24%, and the number of model parameters decreased 19.37% compared with those of the existing YOLOv5. Through case studies and comparisons, the effectiveness and superiority of the presented approach are demonstrated.\"\r\nYou can try to translate this text for comparison, the result is very poor",
"@CatchDr You can take a look at [this example](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForCausalLM.forward.example) and change the arguments of the `generate` method if you want to decode your outputs in a different way (see [here](https://huggingface.co/blog/how-to-generate) for the possible decoding strategies). But maybe this model is simply not good enough for what you are trying to achieve. ",
"outputs = session.run(output_names=[\"logits\"], input_feed=dict(inputs))\r\nThere should be some errors here",
"I've tried every decoding method I can think of and can't get the results I want, expecting something completely different, so I'm asking for help here",
"> @CatchDr The result you get is correct. Some post-processing is necessary to generate the whole sentence. If you just want to convert your model to the ONNX format and translate sentences, I suggest you to use Optimum. It will do all the generation work for you. For example:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, pipeline\r\n> from optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n> \r\n> model = ORTModelForSeq2SeqLM.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\", from_transformers=True)\r\n> tokenizer = AutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-zh\")\r\n> \r\n> onnx_translation = pipeline(\"translation_en_to_zh\", model=model, tokenizer=tokenizer)\r\n> \r\n> result = onnx_translation(\"Using DistilBERT with ONNX Runtime!\")\r\n> ```\r\n\r\nHi I tried this code and it seems not working ?\r\n\r\nEdit : It indeed works, I forgot to print the result"
] | 1,664
| 1,703
| 1,665
|
NONE
| null |
### System Info
transformers:4.22.2
python3.8.4
win10
raise ValueError(
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of:
2.8133392333984375e-05
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m transformers.onnx --model=Helsinki-NLP/opus-mt-en-zh onnx/
### Expected behavior
Export the model to ONNX and translate through the ONNX model
https://www.kaggle.com/code/catchlife/translate-opt
The custom export produces incorrect translation results
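The comments above note that `session.run` only returns raw logits and that post-processing is needed to turn them into a sentence. A toy, self-contained sketch of the greedy step of that post-processing (the logits and vocabulary here are made up for illustration; a real loop would also feed the chosen token back into the decoder until EOS):

```python
import numpy as np

def greedy_next_token(logits: np.ndarray) -> int:
    """Pick the highest-scoring token id at the last decoder position.

    `logits` has shape (batch, seq_len, vocab_size), standing in for the
    "logits" output an ONNX Runtime session returns for a seq2seq decoder.
    """
    return int(np.argmax(logits[0, -1]))

# Hypothetical 3-token vocabulary and 3 decoder positions, for illustration only.
logits = np.array([[[0.1, 2.0, -1.0],
                    [0.0, 0.5, 3.0],
                    [1.5, -0.2, 0.7]]])
print(greedy_next_token(logits))  # 0 (index of 1.5 at the last position)
```

A full generation loop repeats this step, appending each chosen id to the decoder input until the model's EOS token appears; the Optimum `pipeline` shown in the comment above handles that loop automatically.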
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19283/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19282
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19282/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19282/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19282/events
|
https://github.com/huggingface/transformers/pull/19282
| 1,393,625,434
|
PR_kwDOCUB6oc4__9rG
| 19,282
|
Fix the error message in run_t5_mlm_flax.py
|
{
"login": "yangky11",
"id": 5431913,
"node_id": "MDQ6VXNlcjU0MzE5MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5431913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangky11",
"html_url": "https://github.com/yangky11",
"followers_url": "https://api.github.com/users/yangky11/followers",
"following_url": "https://api.github.com/users/yangky11/following{/other_user}",
"gists_url": "https://api.github.com/users/yangky11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangky11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangky11/subscriptions",
"organizations_url": "https://api.github.com/users/yangky11/orgs",
"repos_url": "https://api.github.com/users/yangky11/repos",
"events_url": "https://api.github.com/users/yangky11/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangky11/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19282/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19282",
"html_url": "https://github.com/huggingface/transformers/pull/19282",
"diff_url": "https://github.com/huggingface/transformers/pull/19282.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19282.patch",
"merged_at": 1665409871000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19281
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19281/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19281/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19281/events
|
https://github.com/huggingface/transformers/pull/19281
| 1,393,480,457
|
PR_kwDOCUB6oc4__hdK
| 19,281
|
ci(stale.yml): upgrade actions/setup-python to v4
|
{
"login": "oscard0m",
"id": 2574275,
"node_id": "MDQ6VXNlcjI1NzQyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2574275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oscard0m",
"html_url": "https://github.com/oscard0m",
"followers_url": "https://api.github.com/users/oscard0m/followers",
"following_url": "https://api.github.com/users/oscard0m/following{/other_user}",
"gists_url": "https://api.github.com/users/oscard0m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oscard0m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oscard0m/subscriptions",
"organizations_url": "https://api.github.com/users/oscard0m/orgs",
"repos_url": "https://api.github.com/users/oscard0m/repos",
"events_url": "https://api.github.com/users/oscard0m/events{/privacy}",
"received_events_url": "https://api.github.com/users/oscard0m/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Update actions/setup-python to v4 in stale.yml
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
**No but I can create one and reference it here if it's necessary.**
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
**I understand no documentation changes are necessary for this change.**
- [x] Did you write any new necessary tests?
**I understand no tests are necessary for this change.**
## Who can review?
🤷🏽, no human committer checking Git Blame
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19281/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19281",
"html_url": "https://github.com/huggingface/transformers/pull/19281",
"diff_url": "https://github.com/huggingface/transformers/pull/19281.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19281.patch",
"merged_at": 1664892454000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19280
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19280/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19280/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19280/events
|
https://github.com/huggingface/transformers/pull/19280
| 1,393,479,346
|
PR_kwDOCUB6oc4__hPr
| 19,280
|
ci(stale.yml): update actions/checkout to v3
|
{
"login": "oscard0m",
"id": 2574275,
"node_id": "MDQ6VXNlcjI1NzQyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2574275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oscard0m",
"html_url": "https://github.com/oscard0m",
"followers_url": "https://api.github.com/users/oscard0m/followers",
"following_url": "https://api.github.com/users/oscard0m/following{/other_user}",
"gists_url": "https://api.github.com/users/oscard0m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oscard0m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oscard0m/subscriptions",
"organizations_url": "https://api.github.com/users/oscard0m/orgs",
"repos_url": "https://api.github.com/users/oscard0m/repos",
"events_url": "https://api.github.com/users/oscard0m/events{/privacy}",
"received_events_url": "https://api.github.com/users/oscard0m/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
Update actions/checkout to v3 in stale.yml
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
**No but I can create one and reference it here if it's necessary.**
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
**I understand no documentation changes are necessary for this change.**
- [x] Did you write any new necessary tests?
**I understand no tests are necessary for this change.**
## Who can review?
🤷🏽, no human committer checking Git Blame
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19280/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19280",
"html_url": "https://github.com/huggingface/transformers/pull/19280",
"diff_url": "https://github.com/huggingface/transformers/pull/19280.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19280.patch",
"merged_at": 1664892473000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19279
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19279/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19279/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19279/events
|
https://github.com/huggingface/transformers/pull/19279
| 1,393,432,906
|
PR_kwDOCUB6oc4__X7K
| 19,279
|
Wrap Deit integration test forward passes with torch.no_grad()
|
{
"login": "daspartho",
"id": 59410571,
"node_id": "MDQ6VXNlcjU5NDEwNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daspartho",
"html_url": "https://github.com/daspartho",
"followers_url": "https://api.github.com/users/daspartho/followers",
"following_url": "https://api.github.com/users/daspartho/following{/other_user}",
"gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daspartho/subscriptions",
"organizations_url": "https://api.github.com/users/daspartho/orgs",
"repos_url": "https://api.github.com/users/daspartho/repos",
"events_url": "https://api.github.com/users/daspartho/events{/privacy}",
"received_events_url": "https://api.github.com/users/daspartho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,664
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
As proposed in issue #14642, this PR wraps forward passes in Deit integration tests with torch.no_grad(). This way, no unnecessary gradients are computed during inference.
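The effect the paragraph describes can be sketched with a minimal stand-in model (hypothetical, not the actual DeiT test model): inside `torch.no_grad()`, the forward pass records no autograd graph, so the output tensor does not track gradients.

```python
import torch

# Minimal stand-in model to illustrate wrapping an inference
# forward pass in torch.no_grad().
model = torch.nn.Linear(4, 2)
model.eval()
x = torch.randn(1, 4)

with torch.no_grad():
    out = model(x)  # no autograd graph is recorded here

print(out.requires_grad)  # False
```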
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik could you please take a look at it?
Thanks :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19279/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19279",
"html_url": "https://github.com/huggingface/transformers/pull/19279",
"diff_url": "https://github.com/huggingface/transformers/pull/19279.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19279.patch",
"merged_at": 1664892509000
}
|