| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/18474
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18474/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18474/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18474/events
|
https://github.com/huggingface/transformers/pull/18474
| 1,328,942,262
|
PR_kwDOCUB6oc48qqM3
| 18,474
|
Update no trainer examples for QA and Semantic Segmentation
|
{
"login": "kiansierra",
"id": 47116198,
"node_id": "MDQ6VXNlcjQ3MTE2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiansierra",
"html_url": "https://github.com/kiansierra",
"followers_url": "https://api.github.com/users/kiansierra/followers",
"following_url": "https://api.github.com/users/kiansierra/following{/other_user}",
"gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions",
"organizations_url": "https://api.github.com/users/kiansierra/orgs",
"repos_url": "https://api.github.com/users/kiansierra/repos",
"events_url": "https://api.github.com/users/kiansierra/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiansierra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Updates the `run_qa_no_trainer.py`, `run_qa_beam_search_no_trainer.py`, and `run_semantic_segmentation_no_trainer.py` examples to use `accelerator.gather_for_metrics`.
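For context, a hedged toy illustration (no GPUs or Accelerate needed; all names below are illustrative, not the actual library code) of why the metric-gathering helper exists: a distributed sampler pads the dataset so every process sees equally sized batches, so a plain gather returns duplicated samples that skew metrics unless truncated.

```python
# With 2 processes and 5 samples, the sampler pads to 6, so a plain
# gather yields a duplicate; the helper truncates back to dataset size.
num_processes = 2
samples = [0, 1, 2, 3, 4]
padded = samples + samples[: (-len(samples)) % num_processes]   # [0,1,2,3,4,0]
shards = [padded[i::num_processes] for i in range(num_processes)]
gathered = [x for pair in zip(*shards) for x in pair]  # plain gather: 6 items
truncated = gathered[: len(samples)]                   # what the helper returns
assert gathered != samples and truncated == samples
```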
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/18437
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I ran the scripts locally with the following arguments
```
"program": "examples/pytorch/question-answering/run_qa_no_trainer.py",
"args": [
"--dataset_name",
"squad",
"--dataset_config_name",
"plain_text",
"--model_type",
"bert",
"--tokenizer_name",
"bert-base-uncased",
"--max_train_steps",
"50"
]
```
```
"program": "examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py",
"args": [
"--dataset_name",
"squad",
"--model_name_or_path",
"xlnet-base-cased",
"--max_train_steps",
"50"
]
```
```
"program": "examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py",
"args": [
"--max_train_steps",
"50"
]
```
## Who can review?
@muellerzr , @sgugger , @pacman100
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18474/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18474",
"html_url": "https://github.com/huggingface/transformers/pull/18474",
"diff_url": "https://github.com/huggingface/transformers/pull/18474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18474.patch",
"merged_at": 1659633739000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18473
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18473/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18473/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18473/events
|
https://github.com/huggingface/transformers/pull/18473
| 1,328,695,784
|
PR_kwDOCUB6oc48p1Vq
| 18,473
|
Update no_trainer.py scripts to include accelerate gradient accumulation wrapper
|
{
"login": "Rasmusafj",
"id": 8708133,
"node_id": "MDQ6VXNlcjg3MDgxMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8708133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rasmusafj",
"html_url": "https://github.com/Rasmusafj",
"followers_url": "https://api.github.com/users/Rasmusafj/followers",
"following_url": "https://api.github.com/users/Rasmusafj/following{/other_user}",
"gists_url": "https://api.github.com/users/Rasmusafj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rasmusafj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rasmusafj/subscriptions",
"organizations_url": "https://api.github.com/users/Rasmusafj/orgs",
"repos_url": "https://api.github.com/users/Rasmusafj/repos",
"events_url": "https://api.github.com/users/Rasmusafj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rasmusafj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Let us know when the PR is ready for review (it's in draft mode right now) so we can go ahead and merge.",
"@sgugger @muellerzr I changed PR to review. :)\r\n\r\n",
"@sgugger I removed changes to wav2vec script and fixed the using of wrong constant. \r\n\r\nFeel sure to merge if its fine. Its going to be squash merged right? or do i need to rebase and squash before merge?",
"We squash indeed. Thanks again for your contribution!"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Updates the `no_trainer.py` scripts to use the new gradient-accumulation wrapper feature from Accelerate, as proposed in #18436.
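As a sketch of what the wrapper automates, a toy in plain Python arithmetic (not the actual Accelerate code): stepping once on the mean gradient of several micro-batches matches a single step on the full batch, and the wrapper hides the "skip `optimizer.step` until the boundary" bookkeeping shown here.

```python
# Toy gradient accumulation for a 1-parameter least-squares model y = w * x.
def grad(w, batch):  # d/dw of the mean squared error over the batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
lr, w_full, w_accum = 0.01, 0.0, 0.0

w_full -= lr * grad(w_full, data)            # one step on the full batch

accum, steps = 0.0, 2                        # accumulate over 2 micro-batches
for i, micro in enumerate([data[:2], data[2:]], start=1):
    accum += grad(w_accum, micro) / steps    # scale each micro-batch gradient
    if i % steps == 0:                       # only step at the boundary
        w_accum -= lr * accum
        accum = 0.0

assert abs(w_full - w_accum) < 1e-12
```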
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18473/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18473",
"html_url": "https://github.com/huggingface/transformers/pull/18473",
"diff_url": "https://github.com/huggingface/transformers/pull/18473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18473.patch",
"merged_at": 1659988367000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18472
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18472/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18472/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18472/events
|
https://github.com/huggingface/transformers/issues/18472
| 1,328,575,297
|
I_kwDOCUB6oc5PMHNB
| 18,472
|
TFEncoderDecoderModel can not be trained with TF Keras fit() method
|
{
"login": "kmkarakaya",
"id": 41159849,
"node_id": "MDQ6VXNlcjQxMTU5ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/41159849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kmkarakaya",
"html_url": "https://github.com/kmkarakaya",
"followers_url": "https://api.github.com/users/kmkarakaya/followers",
"following_url": "https://api.github.com/users/kmkarakaya/following{/other_user}",
"gists_url": "https://api.github.com/users/kmkarakaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kmkarakaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmkarakaya/subscriptions",
"organizations_url": "https://api.github.com/users/kmkarakaya/orgs",
"repos_url": "https://api.github.com/users/kmkarakaya/repos",
"events_url": "https://api.github.com/users/kmkarakaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/kmkarakaya/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @kmkarakaya π Having a popular project like `transformers` means we get many support and feature requests β if we want to maximize how much we help the community, the community has to help us stay productive π\r\n\r\nTo that end, please share a *short* script where the issue is clearly reproducible on *any* computer. Thank you π€",
"Hi @gante, \r\nHere is the script ( https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example ) which **_I modified it to train the model as below_**:\r\n\r\nimport tensorflow as tf\r\nfrom transformers import TFEncoderDecoderModel, BertTokenizer\r\nmodel = TFEncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-cased\", \"gpt2\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\nmodel.compile(loss=None)\r\n**model.fit**(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)\r\n\r\n**The error message:**\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n",
"Hi @kmkarakaya -- technically I can't reproduce the script, since I don't have access to your `input_ids`.\r\n\r\nHowever, looking at the code, I can tell that `model.fit` is not being called correctly. Please check its [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit), especially its `x` and `y` arguments :)",
"@gante As I wrote in every message this code belongs to the HF repo https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example \r\n\r\nhere is the complete & full code from the HF link:\r\nI hope this time you can help to fix the problem:\r\n\r\nfrom transformers import TFEncoderDecoderModel, BertTokenizer\r\nmodel = TFEncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-cased\", \"gpt2\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\ninput_ids = tokenizer.encode(\r\n \"Hello, my dog is cute\", add_special_tokens=True, return_tensors=\"tf\"\r\n) \r\nmodel.compile(loss=None)\r\nmodel.fit(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)\r\n",
"@gante Please note that my question is related to **TFEncoderDecoderModel** therefore, model.fit(x,y) is not enough! We need to provide **encoder input, decoder input and decoder output** as **the HF suggests in its official documentation**: https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example\r\n\r\n\r\nThus, this bug's title is \"**_TFEncoderDecoderModel can not be trained with TF Keras fit() method_**\". If you know how to train **TFEncoderDecoderModel** with TF or Keras please share with me.\r\n\r\nBecause in the current **model.fit()** I am not able to do it.\r\nThank you for your attention.",
"Hi @kmkarakaya -- the [example you linked](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example) runs fine and, as I've written above, the issue with your example is in the arguments to `model.fit`.\r\n\r\nPlease see our [examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow) to learn how to prepare the data for training. For instance, see [here](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_mlm.py#L563) -- you need to prepare your data into a dataset in advance.\r\n\r\nFinally, as per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) π€",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-4.15.0-188-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.6.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the example here: https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/encoder-decoder#transformers.TFEncoderDecoderModel.call.example
1. try to fit the model: **model.fit**(input_ids=input_ids, decoder_input_ids=input_ids)
2. You will receive the error "TypeError: fit() got an unexpected keyword argument 'input_ids'" (screenshot attached in the original issue)
3. you can try this: **model.fit**(input_ids, input_ids)
4. but you receive many errors (screenshot attached in the original issue)
### Expected behavior
I should be able to train a TFEncoderDecoderModel with TF Keras fit() method
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18472/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18471
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18471/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18471/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18471/events
|
https://github.com/huggingface/transformers/pull/18471
| 1,328,542,256
|
PR_kwDOCUB6oc48pT4Y
| 18,471
|
Let's not cast them all
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks also for giving me the right pointer to the rootcause!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR is an alternative solution (and cleaner) to https://github.com/huggingface/transformers/pull/18467
An issue has been found when running this script:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", device_map="auto", torch_dtype=torch.float16)
text = "def quicksort(l):"
encoded_input = tokenizer(text, return_tensors='pt')
output_sequences = model.generate(input_ids=encoded_input['input_ids'], attention_mask=encoded_input['attention_mask'])
print(tokenizer.decode(output_sequences[0], skip_special_tokens=True))
```
Since `torch_dtype=torch.float16` casts all parameters of the model, including the buffers, this also affects the causal masks of some models.
In some niche cases those buffers are `uint` or `bool` instead of `int`. This PR addresses the issue by checking whether a parameter is a `uint`, `int`, or `bool` before casting it.
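A minimal sketch of the guard described above, using plain string stand-ins for tensor dtypes (the names and shape of this code are illustrative, not the actual `transformers` implementation):

```python
# When downcasting a state dict to half precision, leave integer and
# boolean buffers (e.g. causal masks) untouched.
FLOAT_KINDS = {"float32", "float64"}

def cast_state(state, target="float16"):
    return {
        name: (target if dtype in FLOAT_KINDS else dtype)
        for name, dtype in state.items()
    }

state = {"weight": "float32", "causal_mask": "bool", "position_ids": "int64"}
assert cast_state(state) == {
    "weight": "float16", "causal_mask": "bool", "position_ids": "int64"
}
```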
cc @sgugger
Ran codegen slow tests and the tests are passing, let me know if we need more checks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18471/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18471",
"html_url": "https://github.com/huggingface/transformers/pull/18471",
"diff_url": "https://github.com/huggingface/transformers/pull/18471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18471.patch",
"merged_at": 1659995329000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18470
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18470/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18470/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18470/events
|
https://github.com/huggingface/transformers/pull/18470
| 1,328,509,360
|
PR_kwDOCUB6oc48pMzd
| 18,470
|
Fix load of model checkpoints in the Trainer
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
#18221 broke the model reload when the contributor removed the `strict_load` variable (as requested in the review) without setting it to its proper value in the subsequent calls to `load_state_dict`. This PR addresses that.
Fixes #18373
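For context, a toy model of what `strict` controls in a state-dict load (illustrative only; PyTorch's real `load_state_dict` differs in signature and behavior):

```python
# strict=True must fail on any missing or unexpected key;
# strict=False reports them instead.
def load_state_dict(model_keys, state, strict=True):
    missing = [k for k in model_keys if k not in state]
    unexpected = [k for k in state if k not in model_keys]
    if strict and (missing or unexpected):
        raise RuntimeError(f"missing={missing}, unexpected={unexpected}")
    return missing, unexpected

keys = ["encoder.weight", "head.weight"]
assert load_state_dict(keys, {"encoder.weight": 1}, strict=False) == (["head.weight"], [])

try:
    load_state_dict(keys, {"encoder.weight": 1})  # strict by default
except RuntimeError:
    pass
else:
    raise AssertionError("strict load should have failed")
```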
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18470/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18470",
"html_url": "https://github.com/huggingface/transformers/pull/18470",
"diff_url": "https://github.com/huggingface/transformers/pull/18470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18470.patch",
"merged_at": 1659615745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18469
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18469/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18469/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18469/events
|
https://github.com/huggingface/transformers/pull/18469
| 1,328,333,783
|
PR_kwDOCUB6oc48omt6
| 18,469
|
Add `TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Test failures are `ValueError: Connection error` - irrelevant.",
"Thank you, @ydshieh for this. I appreciate the help. "
] | 1,659
| 1,659
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
The original goal was to fix `TFSegformerModelTest.test_keras_fit`, but it ended up covering the following:
- Add `TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING` to some `__init__` files.
- Add `training` arguments in a few layers for `TFSegformerModel`
- Update `_prepare_for_class` to deal with 2 more image tasks
- Fix the `TFData2VecVisionForSemanticSegmentation` loss: it needs to keep the batch dimension (without this, `test_dataset_conversion` fails - it was previously skipped due to the lack of labels)
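For readers unfamiliar with the auto-model mappings, a toy sketch of the lookup such a mapping enables (the class names match the models this PR touches, but the dict shown is illustrative, not the real registry, which maps config classes to model classes):

```python
# Minimal stand-in for an auto-model mapping: model type -> task class.
TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING = {
    "segformer": "TFSegformerForSemanticSegmentation",
    "data2vec-vision": "TFData2VecVisionForSemanticSegmentation",
}

def auto_class_for(model_type):
    # What an auto class does conceptually: resolve the right task head.
    return TF_MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING[model_type]

assert auto_class_for("segformer") == "TFSegformerForSemanticSegmentation"
```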
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18469/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18469",
"html_url": "https://github.com/huggingface/transformers/pull/18469",
"diff_url": "https://github.com/huggingface/transformers/pull/18469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18469.patch",
"merged_at": 1659638475000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18468
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18468/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18468/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18468/events
|
https://github.com/huggingface/transformers/pull/18468
| 1,328,331,767
|
PR_kwDOCUB6oc48omSK
| 18,468
|
Update no trainer scripts for multiple-choice
|
{
"login": "kiansierra",
"id": 47116198,
"node_id": "MDQ6VXNlcjQ3MTE2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiansierra",
"html_url": "https://github.com/kiansierra",
"followers_url": "https://api.github.com/users/kiansierra/followers",
"following_url": "https://api.github.com/users/kiansierra/following{/other_user}",
"gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions",
"organizations_url": "https://api.github.com/users/kiansierra/orgs",
"repos_url": "https://api.github.com/users/kiansierra/repos",
"events_url": "https://api.github.com/users/kiansierra/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiansierra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Updates the `run_swag_no_trainer` example to use `accelerator.gather_for_metrics`.
Related to #18437
I ran the script locally with the following arguments
```
"--dataset_name",
"swag",
"--dataset_config_name",
"regular",
"--model_type",
"bert",
"--tokenizer_name",
"bert-base-uncased"
```
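The gather-for-metrics pattern these no_trainer updates adopt can be illustrated without Accelerate. This is a hypothetical sketch, not the actual `accelerate` implementation; the helper name and shapes below are made up:

```python
# Hypothetical sketch of what gathering for metrics has to handle: when the
# dataloader pads the dataset so every process receives equally sized batches,
# the gathered predictions contain duplicated tail samples that must be
# dropped before computing metrics.
def gather_for_metrics_sketch(per_process_outputs, dataset_len):
    gathered = [x for shard in per_process_outputs for x in shard]
    return gathered[:dataset_len]

# two "processes", a 5-sample dataset padded to 6 (last sample repeated)
shards = [[0, 1, 2], [3, 4, 4]]
print(gather_for_metrics_sketch(shards, 5))  # -> [0, 1, 2, 3, 4]
```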
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr , @sgugger, @pacman100
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18468/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18468",
"html_url": "https://github.com/huggingface/transformers/pull/18468",
"diff_url": "https://github.com/huggingface/transformers/pull/18468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18468.patch",
"merged_at": 1659612572000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18467
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18467/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18467/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18467/events
|
https://github.com/huggingface/transformers/pull/18467
| 1,328,248,353
|
PR_kwDOCUB6oc48oUtQ
| 18,467
|
CodeGen Fix causal mask for half precision
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yeah let's move the discussion to: https://github.com/huggingface/transformers/pull/18471"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR forces the causal mask to stay in `torch.uint8`. An error occurs when loading a model in half precision, since `torch_dtype=torch.float16` also casts the buffers to fp16. Here is a minimal script to reproduce the error:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-mono", device_map="auto", torch_dtype=torch.float16)
text = "def quicksort(l):"
encoded_input = tokenizer(text, return_tensors='pt')
output_sequences = model.generate(input_ids=encoded_input['input_ids'], attention_mask=encoded_input['attention_mask'])
print(tokenizer.decode(output_sequences[0], skip_special_tokens=True))
```
In a future PR we could address not casting the buffers at all (i.e. keeping them in their native `dtype`).
Can also confirm the slow tests pass!
cc @ydshieh
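A minimal, illustrative sketch of the idea behind the fix (not the actual CodeGen code): keep the causal-mask buffer in an integer dtype so a half-precision load cannot corrupt it, and apply it by filling masked scores before the softmax:

```python
import torch

# the mask buffer stays torch.uint8 no matter what dtype the weights use
n = 4
causal_mask = torch.tril(torch.ones(n, n, dtype=torch.uint8))

scores = torch.randn(n, n, dtype=torch.float16)  # fp16 attention scores
masked = scores.masked_fill(causal_mask == 0, torch.finfo(scores.dtype).min)
probs = torch.softmax(masked.float(), dim=-1)  # upcast for a stable softmax

print(causal_mask.dtype)  # torch.uint8
```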
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18467/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18467",
"html_url": "https://github.com/huggingface/transformers/pull/18467",
"diff_url": "https://github.com/huggingface/transformers/pull/18467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18467.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18466
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18466/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18466/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18466/events
|
https://github.com/huggingface/transformers/issues/18466
| 1,328,006,337
|
I_kwDOCUB6oc5PJ8TB
| 18,466
|
Fused Softmax Kernels
|
{
"login": "Sanger2000",
"id": 17725268,
"node_id": "MDQ6VXNlcjE3NzI1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/17725268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanger2000",
"html_url": "https://github.com/Sanger2000",
"followers_url": "https://api.github.com/users/Sanger2000/followers",
"following_url": "https://api.github.com/users/Sanger2000/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanger2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanger2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanger2000/subscriptions",
"organizations_url": "https://api.github.com/users/Sanger2000/orgs",
"repos_url": "https://api.github.com/users/Sanger2000/repos",
"events_url": "https://api.github.com/users/Sanger2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanger2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Sanger2000 could you add a link to the kernels from Megatron-LM? I'm curious if it could also be easily combined with a fused kernel for attention-dot-product, like FLASH attention.",
"Raw Cuda and C++ code: https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/fused_kernels\r\n\r\nThis can then be easily added to a model like here: https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L203-L209",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@Sanger2000 Are you still interested in this?",
"Not anymore. But I very much agree with Abhi about using FlashAttention instead of the Megatron kernels, since it is a decent bit faster and consumes far less memory (esp for longer sequences).\r\n\r\nIt would massively speed up all transformer implementations if they had the option of using flash attention for their attention computation. The only downside is it won't be possible for all models since Flash Attention limits the head dimensions that can be used (I don't believe it supports anything larger than 128 last time I checked).",
"I see, thanks for the feedback. There is an integration of the nn.TransformerEncoderLayer fastpath in [Optimum](https://huggingface.co/docs/optimum/bettertransformer/overview), but it is only for inference for now - the training support + flash attention will come in a next pytorch release.\r\n\r\nI've been thinking about massively supporting xformers or HazyResearch/flash-attention for transformers since some people may be interested in already benefiting from memory efficient attention / flash attention for training, and don't want to wait 2 months. I'm not just not aware if other solutions as deepspeed or others already allow to use it or not, in which case I'd rather avoid doing double work.",
"Hi @fxmarty, just want to flag I am also very interested in this. I've been digging around and haven't seen anything active regarding deepspeed integration for training (only inference too).\r\n\r\nIt does look like there's been some activity over the past few days in pytorch on this though. "
] | 1,659
| 1,674
| 1,662
|
NONE
| null |
### Feature request
Optional Fused Softmax Cuda kernels for transformer implementations.
Megatron-LM has implemented these [here](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/fused_kernels), and they offer massive speedups for models under 10B params when training at 2048 sequence lengths. In my experience, this amounts to a 2x improvement in throughput. As you can see from [this example](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L203-L209), it's relatively straightforward to add fused kernels to a model.
### Motivation
From profiling the transformers models, it seems like they achieve at best 20% of peak hardware utilization on V100s and A100s for 2048 token contexts. With just the addition of fused kernels from the Megatron codebase, I see around 40% utilization. This is supported by the findings from [YaLM-100B](https://medium.com/yandex/yandex-publishes-yalm-100b-its-the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6).
For massive models, the performance improvements are less substantial (175B-500B params) but NVIDIA notes 10-20% speedups in section 5.8 of [this paper](https://cs.stanford.edu/~matei/papers/2021/sc_megatron_lm.pdf).
### Your contribution
I have my own GPT-2 implementation that uses Megatron's kernels and I would be happy to contribute. I don't have the time to implement the full feature request - which would be providing the ability to use these fused kernels for most of the hugging face models - but I think this would be very valuable for the ecosystem.
A 2x improvement in throughput at medium to large scales (around 100M-1B params) would be a substantial cost improvement for users.
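For reference, the sequence that the Megatron fused kernels collapse into a single pass looks like this in plain PyTorch (an unfused reference sketch, not the fused kernel itself):

```python
import torch

def scale_mask_softmax(scores, mask, scale):
    # three separate steps, each re-reading the score tensor from memory;
    # a fused kernel performs all of them in one pass, which is the speedup
    scores = scores * scale
    scores = scores.masked_fill(mask, torch.finfo(scores.dtype).min)
    return torch.softmax(scores, dim=-1)

seq = 8
scores = torch.randn(seq, seq)
causal = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
probs = scale_mask_softmax(scores, causal, scale=seq ** -0.5)
```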
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18466/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18466/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18465
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18465/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18465/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18465/events
|
https://github.com/huggingface/transformers/issues/18465
| 1,327,922,988
|
I_kwDOCUB6oc5PJn8s
| 18,465
|
How to embed relational information in a Transformer for NMT task?
|
{
"login": "smith-co",
"id": 102386930,
"node_id": "U_kgDOBhpM8g",
"avatar_url": "https://avatars.githubusercontent.com/u/102386930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smith-co",
"html_url": "https://github.com/smith-co",
"followers_url": "https://api.github.com/users/smith-co/followers",
"following_url": "https://api.github.com/users/smith-co/following{/other_user}",
"gists_url": "https://api.github.com/users/smith-co/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smith-co/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smith-co/subscriptions",
"organizations_url": "https://api.github.com/users/smith-co/orgs",
"repos_url": "https://api.github.com/users/smith-co/repos",
"events_url": "https://api.github.com/users/smith-co/events{/privacy}",
"received_events_url": "https://api.github.com/users/smith-co/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"What is the input to the transformer going to be? Is it more like:\r\n\r\n> He ended his meeting on Tuesday night.\r\n\r\nbut with the graph data encoded into the embeddings somehow? Or more like:\r\n\r\n> end-01 He meet-03 data-entity Tuesday night\r\n\r\nwith the graph data iteslf as input?",
"The graph could be though of like the following:\r\n\r\n```\r\n ________\r\n | |\r\n | \\|/\r\nHe ended his meeting on Tuesday night.\r\n/|\\ | | /|\\\r\n | | | | \r\n |__| |________________| \r\n\r\n```\r\n\r\nEssentially each token in the sentence is a `node` and there could be `edge` embedded between tokens.\r\n",
"In a normal transformer, the tokens are processed into token embeddings, then an encoding of each position is processed into an embedding and added to the token embeddings at the corresponding positions. The result is positional embeddings. This is how each position 'knows' where it is in the sequence.\n\nYou could do something similar with the edge information. You need some trainable network that takes the edge type and the positional encoding of the target node, combines this information, and outputs an embedding. The embeddings of all the edges can be added to the positional embeddings for the corresponding nodes. \n\nMy intuition is that the attention layers could use this encoded information to 'find' related nodes. I don't know how well it will work but that would be my approach. Good luck!",
"@sinking-point thanks for your response. So essentially I need to extend the `positional embedding` generation considering not position in the sentence and instead based on the `edge type`.\r\n\r\nBut there could be different types of edges as well. How could that be combined? I suppose there would be a need to use different weight for different types of edge?\r\n\r\nIs there any such model implementation with hugging face? I have already have a look but can't find anything.",
"You could combine them like this:\n\nEdge type as one hot vector -> nn.Embedding -> edge type embedding\n\nIndex of target node -> positional encoding -> whatever positional embedding method your chosen transformer uses -> target node embedding\n\nSum = edge type embedding + target node embedding\n\nIf we only have a maximum of one edge per node, we can just add this sum to the origin node embedding. However, we might have many edges and if we do this they'll interfere with eachother. We want different edge types to be able to partition themselves into different parts of the vector, so I'd try a multi layer perceptron kinda thing:\n\nSum (embedding width) -> nn.Linear -> hidden (bigger width) -> activation fn -> nn.Linear -> finished edge embedding\n\nAlternatively, you could take each edge, turn it into an embedding, add embeddings for both the origin and target nodes' positional encodings. Then just append these to the transformer input. There's less complexity in that you don't need the MLP I described, but might be more expensive because attention scales quadratically with length in both time and space. ",
"I don't know of any existing transformer that does what you want already.",
"@sinking-point thanks for your response. Can I apply this change in a modular fashion? \r\n\r\nI suppose I need to augment the following snippet?\r\n\r\n```\r\npositional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)\r\n```\r\n\r\nHaving said that, how could I pass the edge information?\r\n\r\nFor me it does not need to be optimized. Do you have any code snippet demonstrating something similar?\r\n",
"What transformer do you want to use? Take Bart for example, you can pass in inputs_embeds. ",
"I would like to use `Longformer`.",
"I would probably go with my first suggestion then. Putting all the edges at the end might not play well with longformer's local attention.\r\n\r\nLongformer also has inputs_embeds as an argument, so you could do something like:\r\n\r\n```python\r\nclass MyLongformer(nn.Module):\r\n def __init__(...):\r\n self.model = LongformerModel(...)\r\n self.edge_embed = MyEdgeEmbedding(...)\r\n\r\n def forward(...):\r\n inputs_embeddings = self.model.get_input_embeddings()(input_ids, ...)\r\n\r\n for batch, edge_type_id, origin_idx, target_idx in edges:\r\n input_embeddings[batch][origin_idx] += self.edge_embed(edge_type_id, target_idx)\r\n\r\n # might be best to normalise here\r\n\r\n return self.model(inputs_embeds=inputs_embeds, ...)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### Feature request
Embedding relational information for a transformer
### Motivation
I am using a Transformer model from Hugging Face for machine translation. However, my input data has relational information as shown below:

So I have semantic information, encoded as an Abstract Meaning Representation (AMR) graph, in the input.
Is there even a way to embed relationship like the above in a transformer model? Is there any model from Huggingface that I can use in this regard?
### Your contribution
If a model is developed, I could beta test the model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18465/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18464
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18464/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18464/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18464/events
|
https://github.com/huggingface/transformers/issues/18464
| 1,327,899,700
|
I_kwDOCUB6oc5PJiQ0
| 18,464
|
mypy typing not working for AutoModelForMaskedLM when used with Trainer
|
{
"login": "harshit-sethi09",
"id": 88991319,
"node_id": "MDQ6VXNlcjg4OTkxMzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/88991319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshit-sethi09",
"html_url": "https://github.com/harshit-sethi09",
"followers_url": "https://api.github.com/users/harshit-sethi09/followers",
"following_url": "https://api.github.com/users/harshit-sethi09/following{/other_user}",
"gists_url": "https://api.github.com/users/harshit-sethi09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harshit-sethi09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshit-sethi09/subscriptions",
"organizations_url": "https://api.github.com/users/harshit-sethi09/orgs",
"repos_url": "https://api.github.com/users/harshit-sethi09/repos",
"events_url": "https://api.github.com/users/harshit-sethi09/events{/privacy}",
"received_events_url": "https://api.github.com/users/harshit-sethi09/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi there @harshit-sethi09! Our code has type annotations, but they are mostly for documentation purposes (and not to be used with `mypy`)\r\n\r\ncc @sgugger ",
"I have very strong thoughts about trying to statically type-checking a dynamically typed language which would take too long to express here. But this is what you get as a result: an object will actually never be of type `AutoModel` has those can only be instantiated via classmethods which actually return other classes, a thing the static type-checker is of course incapable to see. \r\n\r\nWe could pollute the code with tons of useless annotations to please the almighty static typechecker but we have chosen not to :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,663
| 1,663
|
NONE
| null |
### System Info
Python version - 3.9
transformers version - 4.20.1
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Initialise `AutoModelForMaskedLM`
model = `AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")`
2. Pass this to the Trainer
```python
trainer = Trainer(
model= model,
args=training_config,
train_dataset=train_data,
eval_dataset=valid_data,
callbacks=None,
data_collator=masking_processor
)
```
3. If you check this against mypy, it produces an error stating
`error: Argument "model" to "Trainer" has incompatible type "AutoModelForMaskedLM"; expected "Union[PreTrainedModel, Module]"`
The model is defined as `model: Union[PreTrainedModel, nn.Module] = None` in the Trainer class.
### Expected behavior
Since it's a valid input to the Trainer class, it should expect the models from AutoModel classes as well.
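A common user-side workaround (not an official recommendation from the maintainers) is `typing.cast`, since the classmethod returns a concrete model class at runtime; the stand-in classes below are hypothetical and only mimic the shapes involved:

```python
from typing import cast

# hypothetical stand-ins mimicking the real transformers classes
class PreTrainedModel: ...
class XLMRobertaForMaskedLM(PreTrainedModel): ...

class AutoModelForMaskedLM:
    @classmethod
    def from_pretrained(cls, name: str) -> "AutoModelForMaskedLM":
        # at runtime a concrete subclass of PreTrainedModel comes back,
        # which is what the static checker cannot see
        return XLMRobertaForMaskedLM()  # type: ignore[return-value]

# cast() is a no-op at runtime; it only informs the type checker
model = cast(PreTrainedModel, AutoModelForMaskedLM.from_pretrained("xlm-roberta-base"))
print(isinstance(model, PreTrainedModel))  # True
```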
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18464/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18463
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18463/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18463/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18463/events
|
https://github.com/huggingface/transformers/issues/18463
| 1,327,847,192
|
I_kwDOCUB6oc5PJVcY
| 18,463
|
Multi-GPU setting: Expected to mark a variable ready only once (RuntimeError)
|
{
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @sajastu Could you share a (minimal) code snippet that could reproduce this issue? Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,663
| 1,663
|
NONE
| null |
### System Info
I'm trying to run a Hugging Face model in a multi-GPU environment. The problem is that when I'm processing multiple inputs which are bound to each other through a single class (shared-weights encoder), I'm getting `RuntimeError: Expected to mark a variable ready only once`, while if I use this encoder module only once, to process the first input, I won't get this error in the multi-GPU setting. I should add that I don't get this error when training on a single GPU.
To make it clearer, here is the structure (I have simplified the code):
```
class LEDModel():
def __init__(self, ...):
self.encoder = ...
def forward(self, input_ids, ...):
encoder_outputs = self.encoder(input_ids, ...)
# filter encoder_outputs and construct another tensor called 'input_ids_selected'
encoder_outputs = self.encoder(input_ids_selected, ...)
...
return LEDSeq2SeqModelOutput(
last_hidden_state=decoder_outputs.last_hidden_state,
past_key_values=decoder_outputs.past_key_values,
sent_scores=sent_scores,
sect_scores=sect_scores,
decoder_hidden_states=decoder_outputs.hidden_states,
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
)
```
When I remove this line: `encoder_outputs = self.encoder(input_ids_selected, ...)`, I don't run into this error. I should say that to filter `encoder_outputs` from the first pass of the encoder, I'm using other modules (linear layers) to find important `input_ids`, retaining those in `input_ids_selected`. You can perceive this as a two-step summarizer.
### Who can help?
@patrickvonplaten @ydshieh @patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Having a weight-shared module inside LEDModel() class will reproduce this error.
### Expected behavior
Currently, I'm getting `Expected to mark a variable ready only once (RuntimeError)` **in multi-GPU** configuration. I expect to run this model flawlessly in this setting.
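For reference, one workaround often reported for this exact DDP error with weight-shared modules is constructing `DistributedDataParallel` with `static_graph=True` (PyTorch >= 1.11). A minimal single-process sketch, illustrative only and not the reporter's model:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# single-process gloo group, just to let DDP wrap the model on CPU
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29612")
dist.init_process_group("gloo", rank=0, world_size=1)

class TwoPass(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Linear(4, 4)

    def forward(self, x):
        # same encoder applied twice per forward: this is what makes DDP
        # mark its parameters ready more than once per backward
        return self.encoder(self.encoder(x))

model = DDP(TwoPass(), static_graph=True)
out = model(torch.randn(2, 4))
out.sum().backward()
dist.destroy_process_group()
```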
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18463/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18462
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18462/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18462/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18462/events
|
https://github.com/huggingface/transformers/pull/18462
| 1,327,826,024
|
PR_kwDOCUB6oc48m8g8
| 18,462
|
[FLAX] Add dtype to embedding for gpt2 model
|
{
"login": "merrymercy",
"id": 15100009,
"node_id": "MDQ6VXNlcjE1MTAwMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/15100009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merrymercy",
"html_url": "https://github.com/merrymercy",
"followers_url": "https://api.github.com/users/merrymercy/followers",
"following_url": "https://api.github.com/users/merrymercy/following{/other_user}",
"gists_url": "https://api.github.com/users/merrymercy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merrymercy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merrymercy/subscriptions",
"organizations_url": "https://api.github.com/users/merrymercy/orgs",
"repos_url": "https://api.github.com/users/merrymercy/repos",
"events_url": "https://api.github.com/users/merrymercy/events{/privacy}",
"received_events_url": "https://api.github.com/users/merrymercy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @merrymercy, I've noticed that we omit the `dtype` arg from all Flax `nn.Embed` modules! @patil-suraj is there a reason why we do this?\r\n\r\nBART:\r\nhttps://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/bart/modeling_flax_bart.py#L841-L845\r\nBERT:\r\nhttps://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/bert/modeling_flax_bert.py#L186-L200\r\nT5:\r\nhttps://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/t5/modeling_flax_t5.py#L1259-L1263\r\n",
"I don't know the reasons, but this dtype is required for half-precision training. I can modify all other classes as well if needed.",
"Let's wait for @patil-suraj to weigh in on this!",
"Gentle ping @patil-suraj ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sanchit-gandhi could you maybe take a look here? ",
"Sorry @patrickvonplaten - as mentioned in my previous comment https://github.com/huggingface/transformers/pull/18462#issuecomment-1209435023 I'm not sure why we omit the `dtype` from all Flax `nn.Embed` modules, hence the request for @patil-suraj to weight in! Maybe you could shed some light on this? It seems like intrinsic design philosophy given that we do this for all models.",
"The change looks good to me. T5x also puts the embedding in half precision if necessary: https://github.com/google-research/t5x/blob/1f8cec78b1f28f1955d70741792d7b6e7dd76226/t5x/examples/t5/network.py#L287\r\n\r\n@patil-suraj what do you think?",
"Can we merge this?",
"It's interesting that we omit the `dtype` arg in the embedding layer for both PyTorch and Flax:\r\nhttps://github.com/huggingface/transformers/blob/cbb8a37929c3860210f95c9ec99b8b84b8cf57a1/src/transformers/models/gpt2/modeling_gpt2.py#L675-L676\r\n\r\nWondering if this was a deliberate design decision that we're violating in this PR? Otherwise am happy with the change for half-precision training!",
"Your conclusion aligns with the previous observations of embedding dtypes never being down-cast in any Transformer models, both for PyTorch and Flax!\r\n\r\nWondering if you could share the rationale behind _why_ one must not down-cast embedding weights to half-precision? This would be helpful in understanding why this should be avoided and help educate us all!",
"I think my modification does not conflict with t5x.\r\nMy PR only changes the dtype of computation and output tensor, not the parameter type (`param_dtype`).\r\nhttps://github.com/google/flax/blob/0be6f32582b9acafe1741e8641a748eb99501021/flax/linen/linear.py#L732-L733\r\n\r\nThis aligns with @patrickvonplaten 's finding of the code of t5x.\r\n\r\n@patil-suraj Please review. I am working extensively on the flax backend and am happy to contribute more code. ",
"Hey @merrymercy,\r\n\r\nI think `nn.Embed` is an exception in Flax where providing a `dtype` does exactly modify the embedding weights and not just the computation. @patil-suraj can maybe explain better here :-) ",
"By looking at the code, I don't know why `dtype` changes the type of parameters. You can check the code\r\nhttps://github.com/google/flax/blob/0be6f32582b9acafe1741e8641a748eb99501021/flax/linen/linear.py#L739-L742. The type of parameters is controlled by `param_dtype`.\r\n\r\nCould you explain how the \"exception\" happens?",
"The way I see it, `dtype` promotes the whole embedding matrix to `bf16` here: https://flax.readthedocs.io/en/latest/_modules/flax/linen/linear.html#Embed and then takes a bf16 vector from this tensor -> this is different from just doing the matrix computation in bf16 IMO",
"You are right @patrickvonplaten. This is how fp16 mixed-precision training with fp32 master weights works.\r\n\r\nMy point is, the current code in Hugging Face is wrong. The code in t5x is correct. My modification makes Hugging Face's code match t5x's code.\r\n\r\nReasons:\r\n1. Regardless of self.dtype, the weights are stored in fp32. This holds for both my PR and t5x.\r\n2. If dtype is fp16, the computation is in fp16. This holds for my PR and t5x (https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/layers.py#L501). But the original Hugging Face code is wrong.\r\n",
"@merrymercy But T5X exactly doesn't set `dtype=jnp.bfloat16` when instantiating the layer, see: https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/layers.py#L479 but instead wraps the embedding in `dtype=jnp.bfloat16` only during the forward: https://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/layers.py#L501 \r\n\r\nShouldn't we try to match this? ",
"Aha! I think we are talking at different levels. Could my comment below address your concerns?\r\n\r\n## First, I match the way we call `nn.Embed` with t5x\r\nThis PR doesn't modify `nn.Embed` at all. It modifies the way we call `nn.Embed`. What my PR tries to match is this line in t5x:\r\nhttps://github.com/google-research/t5x/blob/ca3d2e43c8db2e6769073ffa98b7689443e3b2b8/t5x/examples/t5/network.py#L287\r\nYou can see it passes dtype to `nn.Embed`.\r\n\r\n## Then, I match the implementation of `nn.Embed` with t5x\r\nThe code you refer to is `layers.Embed` in t5x; the equivalent in our code base is `flax.nn.Embed`. Both of them are implemented correctly.\r\n\r\nIn t5x, `nn.Embed` has one argument, dtype, which controls the type of computation, and hard-codes fp32 for the type of parameters.\r\nIn flax, `nn.Embed` has two arguments: one for the dtype of computation and one for the dtype of parameters. I never change `param_dtype`, so it uses the default value fp32. This makes flax.nn.Embed match t5x layers.Embed.\r\n\r\nIn summary, after my PR, the Hugging Face GPT code should match t5x. Before my PR, the dtype of computation in mixed-precision training was wrong.\r\n",
"Hey @merrymercy, thanks for clarifying and sorry for not making the connection before! The PR looks good to me then :-) \r\n\r\nJust one other thing - it seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"I fixed the circle CI issue, but I don't know how to fix the \"Build PR Documentation\" test"
] | 1,659
| 1,668
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
Add dtype to embedding for gpt2 models. This dtype is necessary for mixed precision training.
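The convention discussed in the review thread, master weights kept in full precision while the lookup/computation runs in half precision, can be sketched in NumPy. This is only an illustration of the idea; the `embed` helper and variable names are invented for this sketch and are not the actual Flax `nn.Embed` API:

```python
import numpy as np

# Master embedding weights stay in full precision (the analogue of param_dtype=float32).
vocab_size, hidden = 8, 4
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((vocab_size, hidden)).astype(np.float32)

def embed(token_ids, dtype=np.float16):
    # `dtype` controls only the computation/output precision; the stored
    # weights above are never down-cast in place.
    return weights_fp32.astype(dtype)[token_ids]

out = embed(np.array([1, 3]))
print(out.dtype, weights_fp32.dtype)  # float16 float32
```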
## Who can review?
@patrickvonplaten, @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18462/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18462",
"html_url": "https://github.com/huggingface/transformers/pull/18462",
"diff_url": "https://github.com/huggingface/transformers/pull/18462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18462.patch",
"merged_at": 1666282550000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18461
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18461/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18461/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18461/events
|
https://github.com/huggingface/transformers/issues/18461
| 1,327,550,976
|
I_kwDOCUB6oc5PINIA
| 18,461
|
BartForConditionalGeneration output is not dependent on input when trained from scratch
|
{
"login": "sinking-point",
"id": 17532243,
"node_id": "MDQ6VXNlcjE3NTMyMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/17532243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinking-point",
"html_url": "https://github.com/sinking-point",
"followers_url": "https://api.github.com/users/sinking-point/followers",
"following_url": "https://api.github.com/users/sinking-point/following{/other_user}",
"gists_url": "https://api.github.com/users/sinking-point/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinking-point/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinking-point/subscriptions",
"organizations_url": "https://api.github.com/users/sinking-point/orgs",
"repos_url": "https://api.github.com/users/sinking-point/repos",
"events_url": "https://api.github.com/users/sinking-point/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinking-point/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This has been resolved by using a smaller learning rate.",
"hello, \r\n\r\nI meet exactly the same problem. My bart model always generate the same content no matter what the input is. How you solve your problem?\r\n\r\nThanks \r\n\r\n",
"@xienian87 I retrained the model with a smaller learning rate and the problem went away. ",
"I have the same problem. Bart generated the same output, no matter what the model inputs.",
"@enze5088 have you tried a smaller learning rate?",
"> @enze5088 have you tried a smaller learning rate?\r\n\r\nThe problem seems to disappear, when using smaller learning rates."
] | 1,659
| 1,681
| 1,659
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've been trying to pretrain Bart to use as a baseline for comparison with other models I'd like to evaluate. However, I've found that when trained from scratch, its outputs have nothing to do with the inputs. You can set the input as anything you want, and the output will always be the same. It essentially acts like a causal language model. The funny thing is, if you start with the pretrained Bart instead of the randomly initialised Bart, it works fine. Could there be some problem with the way cross attention parameters are initialised? Or maybe there's an issue with Seq2SeqTrainer. Equally likely is that I've made a mistake somewhere. If anyone can help I'd greatly appreciate it. Thanks in advance.
The following code reproduces the issue. This attempts to train the model on the simplest conceivable seq2seq task: output the input, exactly as it is. If Bart can't even learn that, there must surely be something wrong.
```python
import torch

from datasets import load_dataset
from transformers import (
    BartConfig,
    BartForConditionalGeneration,
    BartTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

max_length = 512  # assumed value; the original snippet did not define this
dataset = load_dataset("c4", "en", streaming=True)
seed, buffer_size = 42, 10_000
train_set = dataset['train'].shuffle(seed, buffer_size=buffer_size).with_format('torch')
val_set = dataset['validation'].shuffle(seed, buffer_size=buffer_size).take(5000).with_format('torch')
tokeniser = BartTokenizer.from_pretrained("facebook/bart-base")
def transform(data_array):
    texts = []
    for data in data_array:
        texts.append(data['text'])
    batch = tokeniser(texts, padding=True, truncation=True, max_length=max_length)
    with tokeniser.as_target_tokenizer():
        labels = tokeniser(texts, padding=True, truncation=True, max_length=max_length)
    batch['labels'] = labels['input_ids']
    for k in batch:
        batch[k] = torch.tensor(batch[k])
    return dict(batch)
config = BartConfig.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration(config)
batch_size = 2
args = Seq2SeqTrainingArguments(
    output_dir="checkpoints-bart-baseline-2",
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",
    eval_steps=5000,
    save_strategy="steps",
    save_steps=5000,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    learning_rate=1e-4,
    max_steps=50_000,
    fp16=True,
    remove_unused_columns=False,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    data_collator=transform,
    train_dataset=train_set,
    eval_dataset=val_set,
)
trainer.train()
model.eval()
input_texts = [
    "Please provide a code sample that reproduces the problem you ran into.",
    "It can be a Colab link or just a code snippet.",
    "If you have code snippets, error messages, stack traces please provide them here as well.",
]
inputs = tokeniser(input_texts, padding=True)
input_ids = torch.tensor(inputs['input_ids']).cuda()
model.cuda()
output_ids = model.generate(input_ids)
print(tokeniser.batch_decode(output_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
Output:
```
['</s><s>This entry was posted in Uncategorized. Bookmark the permalink.</s>',
'</s><s>This entry was posted in Uncategorized. Bookmark the permalink.</s>',
'</s><s>This entry was posted in Uncategorized. Bookmark the permalink.</s>']
```
### Expected behavior
I would expect the Bart model to learn an approximation of the function represented by the training data. Specifically, I would expect even the poorest approximation to produce different outputs depending on what input is given.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18461/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18460
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18460/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18460/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18460/events
|
https://github.com/huggingface/transformers/pull/18460
| 1,327,452,367
|
PR_kwDOCUB6oc48luC2
| 18,460
|
Fix torch version comparisons (helps with +cu*** or +cpu official builds)
|
{
"login": "LSinev",
"id": 12072891,
"node_id": "MDQ6VXNlcjEyMDcyODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSinev",
"html_url": "https://github.com/LSinev",
"followers_url": "https://api.github.com/users/LSinev/followers",
"following_url": "https://api.github.com/users/LSinev/following{/other_user}",
"gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSinev/subscriptions",
"organizations_url": "https://api.github.com/users/LSinev/orgs",
"repos_url": "https://api.github.com/users/LSinev/repos",
"events_url": "https://api.github.com/users/LSinev/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSinev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Wow very cool @LSinev !"
] | 1,659
| 1,661
| 1,659
|
CONTRIBUTOR
| null |
Comparisons like `version.parse(torch.__version__) > version.parse("1.6")`
are `True` for `torch==1.6.0+cu101` or `torch==1.6.0+cpu` (which is not intended, I suppose).
So `version.parse(version.parse(torch.__version__).base_version)` comparisons are preferred (and used in pytorch_utils.py but not in other places).
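A small demonstration of the caveat, assuming the `packaging` library is installed (the version string is just an example of an official `+cu***` build):

```python
from packaging import version

torch_version = "1.6.0+cu101"  # example official-build version string

# The naive comparison is True: the +cu101 local segment sorts above plain 1.6.0.
naive_newer = version.parse(torch_version) > version.parse("1.6")

# Comparing base_version drops the local segment first, giving the intended result.
base = version.parse(version.parse(torch_version).base_version)
safe_newer = base > version.parse("1.6")  # 1.6.0 == 1.6 under PEP 440

print(naive_newer, safe_newer)  # True False
```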
# What does this PR do?
* Updated all comparisons to the fail-safe form `version.parse(version.parse(torch.__version__).base_version)` (so that code copy-pasted from these files stays safe as well)
* added some often used patterns to `pytorch_utils.py`
I did not check whether each original version check was intentional; I believe the original authors simply missed this version-check caveat.
Only torch version check changed (not sure if other packages may be affected).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
As it is touching many parts of code: @patrickvonplaten, @LysandreJik, @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18460/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18460",
"html_url": "https://github.com/huggingface/transformers/pull/18460",
"diff_url": "https://github.com/huggingface/transformers/pull/18460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18460.patch",
"merged_at": 1659548238000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18459
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18459/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18459/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18459/events
|
https://github.com/huggingface/transformers/pull/18459
| 1,327,444,299
|
PR_kwDOCUB6oc48lsTK
| 18,459
|
Add machine type in the artifact of Examples directory job
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(I should have run the job and made sure it worked before requesting review - I ended up pushing a few more commits to fix things, sorry)"
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
We have
<img width="257" alt="Screenshot 2022-08-03 174611" src="https://user-images.githubusercontent.com/2521628/182652043-02a031a1-c8b9-457a-8876-130a37099075.png">
even when there are some errors in `Examples directory` test.
(relevant run: https://github.com/huggingface/transformers/actions/runs/2786567567)
Adding the machine type (single-gpu / multi-gpu) in the artifact names should make things work.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18459/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18459",
"html_url": "https://github.com/huggingface/transformers/pull/18459",
"diff_url": "https://github.com/huggingface/transformers/pull/18459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18459.patch",
"merged_at": 1659631921000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18458
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18458/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18458/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18458/events
|
https://github.com/huggingface/transformers/pull/18458
| 1,327,425,868
|
PR_kwDOCUB6oc48loWc
| 18,458
|
Compute true loss Flax examples
|
{
"login": "duongna21",
"id": 38061659,
"node_id": "MDQ6VXNlcjM4MDYxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38061659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duongna21",
"html_url": "https://github.com/duongna21",
"followers_url": "https://api.github.com/users/duongna21/followers",
"following_url": "https://api.github.com/users/duongna21/following{/other_user}",
"gists_url": "https://api.github.com/users/duongna21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duongna21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duongna21/subscriptions",
"organizations_url": "https://api.github.com/users/duongna21/orgs",
"repos_url": "https://api.github.com/users/duongna21/repos",
"events_url": "https://api.github.com/users/duongna21/events{/privacy}",
"received_events_url": "https://api.github.com/users/duongna21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@duongna21 super sorry it seems like the git commit history got messed up :-/ Any chance you could re-submit your PR? "
] | 1,659
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
'True' losses should be computed in Flax examples, as [discussed](https://github.com/huggingface/transformers/pull/18297#discussion_r931971230) with @sanchit-gandhi.
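The difference between a naive per-position mean and a padding-aware ("true") loss can be sketched with NumPy. The numbers below are toy values for illustration, not taken from the actual example scripts:

```python
import numpy as np

# Per-token losses for a batch of two sequences padded to length 4; the
# trailing zeros are padding positions, not real tokens.
token_loss = np.array([[2.0, 2.0, 2.0, 0.0],
                       [4.0, 0.0, 0.0, 0.0]])
mask = np.array([[1.0, 1.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0, 0.0]])

naive = token_loss.mean()                           # averages padding in: 1.25
true_loss = (token_loss * mask).sum() / mask.sum()  # per real token: 2.5
print(naive, true_loss)
```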
## Who can review?
cc @sanchit-gandhi @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18458/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18458",
"html_url": "https://github.com/huggingface/transformers/pull/18458",
"diff_url": "https://github.com/huggingface/transformers/pull/18458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18458.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18457
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18457/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18457/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18457/events
|
https://github.com/huggingface/transformers/pull/18457
| 1,327,409,481
|
PR_kwDOCUB6oc48lkvD
| 18,457
|
HFTracer.trace can now take callables and torch.nn.Module
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
MEMBER
| null |
# What does this PR do?
This PR makes it possible to use the `HFTracer` "meta-tracing" features to trace any Python callable / `torch.nn.Module`.
For `transformers.PreTrainedModel`s, the method `HFTracer._generate_dummy_inputs` already takes care of creating the original dummy inputs needed to handle data-dependent control-flow in the forward pass.
Now, the user can specify `dummy_inputs` directly to the `HFTracer.trace` method in order to trace things other than `transformers.PreTrainedModel`s. This is useful for pattern matching, for instance.
This becomes possible:
```python
def f(x, y, z=None):
temp = x * y
if z is not None:
temp += z
return temp
traced_f = HFTracer().trace(f, dummy_inputs={"x": torch.rand(1, 2), "y": torch.rand(1, 2)})
```
By default, if `dummy_inputs` is specified, every argument to `root` that is not in `dummy_inputs` will be considered a concrete arg (and thus added to `concrete_args`). You can disable that by setting `infer_concrete_args_from_dummy_inputs` to `False`. This is useful if you want to provide custom dummy inputs for some inputs, while still letting `HFTracer._generate_dummy_inputs` do the work for the others (provided that `root` is a `transformers.PreTrainedModel`, since only that case supports automatic dummy input generation).
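A simplified, self-contained sketch of that inference rule (this is not the actual `HFTracer` code, just the idea of deriving concrete args from the dummy inputs):

```python
import inspect

def f(x, y, z=None):
    return x * y if z is None else x * y + z

def infer_concrete_args(fn, dummy_inputs):
    # Any parameter of `fn` without a dummy input is treated as concrete,
    # pinned to its default value at trace time.
    sig = inspect.signature(fn)
    return {name: param.default
            for name, param in sig.parameters.items()
            if name not in dummy_inputs}

print(infer_concrete_args(f, {"x": 2, "y": 3}))  # {'z': None}
```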
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18457/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18457",
"html_url": "https://github.com/huggingface/transformers/pull/18457",
"diff_url": "https://github.com/huggingface/transformers/pull/18457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18457.patch",
"merged_at": 1659612559000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18456
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18456/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18456/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18456/events
|
https://github.com/huggingface/transformers/pull/18456
| 1,327,399,379
|
PR_kwDOCUB6oc48lih7
| 18,456
|
fix ONNX support for bloom
|
{
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
MEMBER
| null |
Merged on https://github.com/huggingface/transformers/pull/18344
This PR aims to fix ONNX export of bloom. All the following tests are passing:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "bloom"
RUN_SLOW=1 pytest tests/models/bloom/test_modeling_bloom.py
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18456/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18456",
"html_url": "https://github.com/huggingface/transformers/pull/18456",
"diff_url": "https://github.com/huggingface/transformers/pull/18456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18456.patch",
"merged_at": 1659602551000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18455
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18455/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18455/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18455/events
|
https://github.com/huggingface/transformers/issues/18455
| 1,327,377,278
|
I_kwDOCUB6oc5PHit-
| 18,455
|
understand differences in tokenization
|
{
"login": "bariluz93",
"id": 28778130,
"node_id": "MDQ6VXNlcjI4Nzc4MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/28778130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bariluz93",
"html_url": "https://github.com/bariluz93",
"followers_url": "https://api.github.com/users/bariluz93/followers",
"following_url": "https://api.github.com/users/bariluz93/following{/other_user}",
"gists_url": "https://api.github.com/users/bariluz93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bariluz93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bariluz93/subscriptions",
"organizations_url": "https://api.github.com/users/bariluz93/orgs",
"repos_url": "https://api.github.com/users/bariluz93/repos",
"events_url": "https://api.github.com/users/bariluz93/events{/privacy}",
"received_events_url": "https://api.github.com/users/bariluz93/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is because the languages you're translating between (English and German in this case) have different tokenization vocabularies. This implies that the same word will get tokenized differently by each. MarianMT models have seq2seq (encoder-decoder) architectures, and the encoder and decoder each have their own embedding matrix. This means that the encoder will have an embedding vector for the token '▁doctor', whereas the decoder will learn an embedding vector for the token '▁do', an embedding vector for the token 'ctor', etc.\r\n\r\nTokenization vocabularies are typically built per language (although models like BLOOM just have one large vocabulary for all language tokens).",
"thank you very much for your answer\r\nis it per language? some languages don't have a separate embedding matrix for the encoder and the decoder?\r\nis there a way to know in advance which language has separate matrices and which doesn't?\r\nthanks\r\nBar\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge Can you please explain a confusion I have? Let's say I have an [en_ur translation](https://huggingface.co/Helsinki-NLP/opus-mt-en-ur) model. I can see both Urdu and English words in the vocab. \r\n\r\nThis is how the data is trained:\r\n```\r\nmodel_inputs = tokenizer(inputs, max_length=max_Length, truncation=True)\r\n    # Setup the tokenizer for targets\r\n    with tokenizer.as_target_tokenizer():\r\n        labels = tokenizer(targets, max_length=max_Length, truncation=True)\r\n```\r\nSo when I use as_target_tokenizer, does it read the same vocab file? If yes, then why do we need this line, given that it is the only vocab file?",
"When using the `as_target_tokenizer` context manager, it will use the target vocabulary to tokenize the input sentence (rather than the source vocabulary).\r\n\r\nHowever, in v4.22 we deprecated this context manager. Now\r\n\r\n```\r\nwith tokenizer.as_target_tokenizer():\r\n encoded_labels = tokenizer(labels, padding=True)\r\n```\r\n\r\ncan be replaced by:\r\n\r\n```\r\nencoded_labels = tokenizer(text_target=labels, padding=True)\r\n```\r\n"
] | 1,659
| 1,665
| 1,662
|
NONE
| null |
hi,
I'm trying to understand the tokenization in MarianTokenizer
I run the following code
from transformers import MarianTokenizer
model_name='Helsinki-NLP/opus-mt-en-de'
model = MarianMTModel.from_pretrained(model_name,tokenizer= tokenizer, **kwargs)
tokenizer.tokenize("doctor")
['βdoctor']
with tokenizer.as_target_tokenizer():
indices = tokenizer("doctor", return_tensors="pt", padding=True)['input_ids'][0]
tokens = tokenizer.convert_ids_to_tokens(indices)
indices
tensor([ 156, 24889, 0])
tokens
['βdo', 'ctor', '</s>']
Can someone please explain the difference between the tokenization under tokenizer.as_target_tokenizer() and the tokenization from tokenizer.tokenize(), and which one is actually used when translating?
Each one gives a different segmentation and different indices.
Thank you,
Bar
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18455/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18454
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18454/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18454/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18454/events
|
https://github.com/huggingface/transformers/pull/18454
| 1,327,304,302
|
PR_kwDOCUB6oc48lN1x
| 18,454
|
disable Onnx test for google/long-t5-tglobal-base
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Adding @regisss as a reviewer as this is suggested by GitHub automatically π ",
"_The documentation is not available anymore as the PR was closed or merged._",
"I will tag lewis when he is back.",
"Hi @lewtun !\r\n\r\nCould you take a look at this ONNX test? Thank you."
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
For `("longt5", "google/long-t5-tglobal-base")`, we get
```bash
Floating point exception (core dumped)
```
in this call
https://github.com/huggingface/transformers/blob/fc546332d7a9395323f656635362c9e0f3c4161a/src/transformers/onnx/convert.py#L404
Let's disable it for now, so the other ONNX tests can run.
[Failed job run](https://github.com/huggingface/transformers/runs/6892306185?check_suite_focus=true)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18454/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18454",
"html_url": "https://github.com/huggingface/transformers/pull/18454",
"diff_url": "https://github.com/huggingface/transformers/pull/18454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18454.patch",
"merged_at": 1659720439000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18453
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18453/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18453/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18453/events
|
https://github.com/huggingface/transformers/pull/18453
| 1,327,258,869
|
PR_kwDOCUB6oc48lD-l
| 18,453
|
Add zero-shot obj detection notebook to docs
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for adding! Although I think we need to split up that long list of notebooks by modality/task.\r\n\r\nI agree! I will add another PR to organize the notebooks page."
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
Adds OWL-ViT demo notebook links to the official notebooks docs.
I'm currently working on adding TF support for this model and we'll be promoting it soon.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18453/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18453",
"html_url": "https://github.com/huggingface/transformers/pull/18453",
"diff_url": "https://github.com/huggingface/transformers/pull/18453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18453.patch",
"merged_at": 1659536079000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18452
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18452/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18452/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18452/events
|
https://github.com/huggingface/transformers/issues/18452
| 1,327,219,841
|
I_kwDOCUB6oc5PG8SB
| 18,452
|
Unable to Infer on Bloom Model-2b5 using Deepspeed
|
{
"login": "Ravisankar13",
"id": 31944166,
"node_id": "MDQ6VXNlcjMxOTQ0MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/31944166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ravisankar13",
"html_url": "https://github.com/Ravisankar13",
"followers_url": "https://api.github.com/users/Ravisankar13/followers",
"following_url": "https://api.github.com/users/Ravisankar13/following{/other_user}",
"gists_url": "https://api.github.com/users/Ravisankar13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ravisankar13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ravisankar13/subscriptions",
"organizations_url": "https://api.github.com/users/Ravisankar13/orgs",
"repos_url": "https://api.github.com/users/Ravisankar13/repos",
"events_url": "https://api.github.com/users/Ravisankar13/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ravisankar13/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @Ravisankar13, could you please provide the script you have used, the deepspeed config, as well as the full stacktrace? It will be hard to help you with so little information.\r\n\r\ncc @stas00 ",
"@Ravisankar13, please see the work-in-progress here:\r\nhttps://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/308\r\n\r\nYou have a variety of different working solutions there. \r\n\r\nwe will soon move those here.",
"Thanks for your response. Let me try them and get back to you",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,663
| 1,663
|
NONE
| null |
### System Info
I was able to load the Bloom-2b5 model onto my Colab notebook for text generation (inference). When I use DeepSpeed to load the model and try to run inference, the memory is not sufficient. I don't understand this, because with the help of DeepSpeed I should be able to load a larger model, or at least the same one.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load the 2b5 model- https://huggingface.co/bigscience/bloom-2b5
2. Infer with and without deepspeed - https://huggingface.co/docs/transformers/main_classes/deepspeed
### Expected behavior
Inference should succeed, but instead I get: CUDA error: insufficient memory
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18452/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18451
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18451/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18451/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18451/events
|
https://github.com/huggingface/transformers/pull/18451
| 1,327,202,974
|
PR_kwDOCUB6oc48k31n
| 18,451
|
TF Examples Rewrite
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is now ready for review @sgugger @gante! I'm tracking down a couple of remaining bugs in the tests and doing some final manual checks, but almost everything should be finished by now.\r\n\r\nI realize it's a very large PR, but you can see from the checklist above what the main changes are.",
"@sgugger Tests are now enabled in `config.yml` and everything still looks green!",
"That might because your new job did not run ;-) \r\nYou need to add at the end [here](https://github.com/huggingface/transformers/blob/d7e2d7b40b1070cddfe878e13705725f49a2cf1f/.circleci/config.yml#L1000) for the one at each commit and [there](https://github.com/huggingface/transformers/blob/d7e2d7b40b1070cddfe878e13705725f49a2cf1f/.circleci/config.yml#L1024) for the nigthly one ;-)",
"\r\n",
"@sgugger tests are now actually passing! I had to skip one - it fails because of a known issue with shape inference on small datasets in `to_tf_dataset`. There is a PR to fix that at https://github.com/huggingface/datasets/pull/4763 , we just need to wait for that to be merged before we can re-enable the test!"
] | 1,659
| 1,660
| 1,660
|
MEMBER
| null |
This PR is a rewrite of the TF examples, including several modern methods. I'm focusing on updating everything to use modern methods like `prepare_tf_dataset` and the `evaluate` library as well as adding features and functionality I missed when I first ported them, since `transformers` TF support was much shakier when these were first written (and I didn't know the library as well).
Just a draft for now, will ping reviewers when it's ready!
TO DO:
- [x] Draft rewrite for all scripts
- [x] Test run all scripts
- [x] Make sure we're handling batch sizes correctly in multi-GPU/TPU scopes
- [x] Make sure we're correctly using AdamW + LR decay everywhere
- [x] Make sure we're using `evaluate` instead of `load_metric`
- [x] Add metadata for `push_to_hub`
- [x] Add explanatory comments for things like `KerasMetricCallback`, `jit_compile` and `PushToHubCallback` where appropriate
- [x] Replace all the old ad-hoc data loading code with `prepare_tf_dataset`
- [x] Add explanatory links to the docs whenever we use `prepare_tf_dataset`
- [x] Add example tests
- [x] Make sure there's no case where we pass `optimizer=None` to `compile()`
- [ ] Final manual testing
- [x] ~Add HF metrics like MaskedAccuracy?~
Fixes #18334
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18451/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18451",
"html_url": "https://github.com/huggingface/transformers/pull/18451",
"diff_url": "https://github.com/huggingface/transformers/pull/18451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18451.patch",
"merged_at": 1660146592000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18450
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18450/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18450/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18450/events
|
https://github.com/huggingface/transformers/pull/18450
| 1,327,195,067
|
PR_kwDOCUB6oc48k2Hd
| 18,450
|
[WIP] Add TF support for OWL-ViT
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18450). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
Adds TensorFlow support for the [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) model.
- Creates transformers/models/owlvit/modeling_tf_owlvit.py
- Creates tests/models/owlvit/test_modeling_tf_owlvit.py
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18450/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18450",
"html_url": "https://github.com/huggingface/transformers/pull/18450",
"diff_url": "https://github.com/huggingface/transformers/pull/18450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18450.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18449
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18449/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18449/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18449/events
|
https://github.com/huggingface/transformers/pull/18449
| 1,327,058,188
|
PR_kwDOCUB6oc48kYYS
| 18,449
|
Bugfix for the bloom model. The tensor is not moved to the right gpu causing error.
|
{
"login": "prajdabre",
"id": 8413449,
"node_id": "MDQ6VXNlcjg0MTM0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8413449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajdabre",
"html_url": "https://github.com/prajdabre",
"followers_url": "https://api.github.com/users/prajdabre/followers",
"following_url": "https://api.github.com/users/prajdabre/following{/other_user}",
"gists_url": "https://api.github.com/users/prajdabre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajdabre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajdabre/subscriptions",
"organizations_url": "https://api.github.com/users/prajdabre/orgs",
"repos_url": "https://api.github.com/users/prajdabre/repos",
"events_url": "https://api.github.com/users/prajdabre/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajdabre/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18449). All of your documentation changes will be reflected on that endpoint."
] | 1,659
| 1,660
| 1,660
|
NONE
| null |
# What does this PR do?
In the implementation of the BLOOM model, on line 307, a tensor is created but never moved to a device. By default it is allocated on the CPU, so if someone runs the model on a GPU this causes a device-mismatch error. @patrickvonplaten, @LysandreJik
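The general PyTorch pattern behind this kind of fix can be sketched as follows (an illustrative snippet, not the exact BLOOM code; the names `build_mask` and `hidden_states` are placeholders of mine):

```python
import torch

def build_mask(hidden_states: torch.Tensor) -> torch.Tensor:
    # Allocate the new tensor directly on the device of the incoming
    # activations instead of relying on the default (CPU) device,
    # which would raise a device-mismatch error on GPU runs.
    return torch.ones(hidden_states.shape[:2], device=hidden_states.device)

x = torch.zeros(2, 5, 8)  # stands in for hidden states; would live on cuda in a GPU run
mask = build_mask(x)
```

The same effect can be had with `torch.ones_like` or by calling `.to(hidden_states.device)` on the freshly created tensor.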
## Before submitting
- [N] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [N] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [N] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [N] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18449/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18449",
"html_url": "https://github.com/huggingface/transformers/pull/18449",
"diff_url": "https://github.com/huggingface/transformers/pull/18449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18449.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18448
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18448/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18448/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18448/events
|
https://github.com/huggingface/transformers/pull/18448
| 1,327,046,048
|
PR_kwDOCUB6oc48kVv5
| 18,448
|
Update pinned hhub version
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I still need to update https://github.com/huggingface/transformers/blob/main/src/transformers/dependency_versions_table.py, will do once back in laptop and let you know",
"Thank you :hugs: "
] | 1,659
| 1,659
| 1,659
|
MEMBER
| null |
# What does this PR do?
This PR updates the `huggingface_hub` pinned version. https://github.com/huggingface/transformers/pull/18366 uses new functionality from the library that is not available in earlier versions, so we need to upgrade the pin here.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18448/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18448",
"html_url": "https://github.com/huggingface/transformers/pull/18448",
"diff_url": "https://github.com/huggingface/transformers/pull/18448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18448.patch",
"merged_at": 1659530263000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18447
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18447/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18447/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18447/events
|
https://github.com/huggingface/transformers/issues/18447
| 1,327,031,769
|
I_kwDOCUB6oc5PGOXZ
| 18,447
|
'MarianTokenizer' object has no attribute 'target_encoder'
|
{
"login": "bariluz93",
"id": 28778130,
"node_id": "MDQ6VXNlcjI4Nzc4MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/28778130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bariluz93",
"html_url": "https://github.com/bariluz93",
"followers_url": "https://api.github.com/users/bariluz93/followers",
"following_url": "https://api.github.com/users/bariluz93/following{/other_user}",
"gists_url": "https://api.github.com/users/bariluz93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bariluz93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bariluz93/subscriptions",
"organizations_url": "https://api.github.com/users/bariluz93/orgs",
"repos_url": "https://api.github.com/users/bariluz93/repos",
"events_url": "https://api.github.com/users/bariluz93/events{/privacy}",
"received_events_url": "https://api.github.com/users/bariluz93/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.19.0.dev0
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import MarianTokenizer
model_name = 'Helsinki-NLP/opus-mt-en-he'
tokenizer = MarianTokenizer.from_pretrained(model_name)
tokenizer.get_tgt_vocab()
### Expected behavior
I expected to get the target vocab, but instead I got the error
AttributeError: 'MarianTokenizer' object has no attribute 'target_encoder'
I need to find a way to split the vocab into separate source and target vocabs, instead of the current vocab, which contains a mix of both languages.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18447/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18446
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18446/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18446/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18446/events
|
https://github.com/huggingface/transformers/issues/18446
| 1,327,014,548
|
I_kwDOCUB6oc5PGKKU
| 18,446
|
Add depth estimation pipeline
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"What would be the output like @NielsRogge ?\r\n\r\nMy understanding is that depth is just a gray scale image (black = infinitely far, white = infinitely close).\r\n\r\n\r\nIf that's the case It seems really close to `image-segmentation` in the sense that it's generating a new image from the original image, so we should try and reuse as much as possible.\r\n\r\nAlso maybe we could have something like `image-generation` to try and keep the name generic ? (And have an alias for `depth-estimation` for instance ?)\r\n",
"Hi @NielsRogge I would like to add this pipeline. ",
"Hi @Narsil,\r\n\r\nI'm not sure whether we should add this to the existing `image-segmentation` pipeline. Depth estimation is basically pixel regression, rather than pixel classification (the latter is image segmentation). It would be quite confusing to add it there.\r\n\r\nDepth estimation is quite a different field, see e.g. https://paperswithcode.com/task/depth-estimation\r\n\r\nAnd hi @nandwalritik, thanks for your interest in this. Feel free to start a draft PR.",
"Thanks I will start working on it.",
"> I'm not sure whether we should add this to the existing image-segmentation pipeline.\r\n\r\nI said we should inspire from it, not reuse it, but I suggested using an `image-generation`one. (Just to be slightly more general)\r\nThe output is a grayscale image, right ?"
] | 1,659
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### Feature request
We currently have 2 monocular depth estimation models in the library, namely [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) and [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn).
It would be great to have a pipeline for this task, with the following API:
```
from transformers import pipeline
pipe = pipeline("depth-estimation")
pipe("cats.png")
```
This pipeline could default to the https://huggingface.co/Intel/dpt-large checkpoint. Also check out the [Space](https://huggingface.co/spaces/nielsr/dpt-depth-estimation) that showcases the model.
This can be implemented similarly to [other pipelines](https://github.com/huggingface/transformers/tree/main/src/transformers/pipelines). For an example PR that added a pipeline, see https://github.com/huggingface/transformers/pull/11598.
### Motivation
Pipelines are a great way to quickly perform inference with a model for a given task, abstracting away all the complexity.
### Your contribution
I can assist with this, together with @Narsil.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18446/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18446/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18445
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18445/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18445/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18445/events
|
https://github.com/huggingface/transformers/issues/18445
| 1,327,008,522
|
I_kwDOCUB6oc5PGIsK
| 18,445
|
Add zero-shot object detection pipeline
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"As seen with @alaradirik this morning, this could also be leveraging the custom pipeline feature that was implemented last week, especially if this pipeline works with a very limited number of artchitectures.\r\n\r\ncf https://github.com/huggingface/transformers/pull/18079",
"cc @alaradirik ",
"Can I take this up and work on it?",
"Hi @MocktaiLEngineer! Of course, you can also @NielsRogge, @Narsil or me if you need any help or have any questions.",
"cc @sgugger as we chatted about it as well ",
"I was going to do a custom pipeline on this today actually, as the dev advocates want more examples of it :-)",
"Hi @NielsRogge , if no one is working on it, can i take this up?",
"Hi @sahamrit, I don't think anyone is working on this right now but I'd need to double check with @NielsRogge and @sgugger ",
"Yes you can take a stab at it. Pinging @OlivierDehaene that might be able to provide guidance too."
] | 1,659
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
### Feature request
We currently have [OWL-ViT](https://huggingface.co/docs/transformers/main/model_doc/owlvit) in the library, which is capable of performing zero-shot object detection.
It would be great to have a pipeline for this task, with the following API:
```
from transformers import pipeline
pipe = pipeline("zero-shot-object-detection")
pipe("cats.png", ["cat", "remote"])
```
This pipeline could default to the https://huggingface.co/google/owlvit-base-patch32 checkpoint. Also check out the [demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) that showcases the model.
This can be implemented similarly to [other pipelines](https://github.com/huggingface/transformers/tree/main/src/transformers/pipelines) (we already have one for zero-shot image classification with CLIP, so it would be very similar to that one). For an example PR that added a pipeline, see https://github.com/huggingface/transformers/pull/11598.
### Motivation
Pipelines are great for abstracting away all the complexity for quick inference with a model.
### Your contribution
I can assist with this, together with @Narsil.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18445/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18444
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18444/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18444/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18444/events
|
https://github.com/huggingface/transformers/pull/18444
| 1,326,873,772
|
PR_kwDOCUB6oc48jwUG
| 18,444
|
Add stop sequence to text generation pipeline
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @Narsil. I've managed to get this working for greedy decoding and multimodal sampling. For beam-search, what would be the best approach to deal with a stop_sequence? I've assumed that if a stop_sequence appears in any of the beams then we stop the generation process.\r\n\r\nShould it instead be that we wait until each beam reaches the stop_sequence or any other stopping criteria before stopping the generation process?",
"> Should it instead be that we wait until each beam reaches the stop_sequence or any other stopping criteria before stopping the generation process?\r\n\r\n@KMFODA I think `eos_token_id` is already handled for beam search, see my comment on the `StoppingCriteria`.\r\n\r\nI will let others comment on the best way to do this in `.generate` but I think we don't need the criteria, just let `eos_token_id` regular logic apply (it's handled separately from `StoppingCriteria`).",
"For the tests removing the breakpoint should help then for code quality.\r\n\r\n```\r\npip install -e .[quality]\r\nmake fixup\r\n```\r\nShould do the trick.",
"@Narsil @KMFODA I'm in favor of moving it to a `StoppingCriteria`, so that all conditions that can terminate generation fall under the same class. However, it should be noted that it is not a requirement to complete the issue, i.e. to add a stop sequence to the text generation pipeline :P \r\n\r\nIt is already implemented on the multiple generation strategies (e.g. [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L1744) for greedy search). Also, the existing implementation is different from the current PR -- the existing implementation only checks whether the `eos_token` is present in newly generated tokens. This is because models like `GPT-2` often set `pad_token_id` to `eos_token_id`, and we don't want the pad tokens to trigger this condition.",
"Thanks @Narsil @gante. Okay so for the sake of deploying iteratively I've removed the `eos_token_id` from the `StoppingCriteria` and will add it as a separate PR.\r\n\r\nI've added a test for the `stop_sequence` being fed in at the pipeline level. When @Narsil's comment around wether the stop sequence should be handled in the `pipeline` or in the `generation_kwargs` is addressed I can alter this test accordingly.",
"> We should implement `stop_sequence` only once (probably in `generate`) but we could have 2 tests if you want to test the full pipeline too. (Probably in `tests/pipelines/test_pipelines_text_generation.py` for instance.)\r\n\r\nIf we were to move `stop_sequence` to be in `generate` wouldn't we have to tokenise it first. In that case what's the reasoning behind feeding it as a `stop_sequence` instead of a `eos_token_id`?",
"> If we were to move stop_sequence to be in generate wouldn't we have to tokenise it first. In that case what's the reasoning behind feeding it as a stop_sequence instead of a eos_token_id?\r\n\r\nYou're entirely right, oversight on my part. `eos_token_id` already does the job. So we just need to implement `stop_sequence` in the pipeline to tokenize the `stop_sequence` and produce the `eos_token_id` and just feed it to generate.\r\nSo no additional code in `generate` should be needed actually.\r\n\r\nSorry, failed to see that. ",
"No problem I've just moved the stop_sequence back to the pipeline function and added the tests you requested in the `tests/pipelines/test_pipelines_text_generation.py` folder. This should make this PR ready for review now.\r\n\r\nWhen I was playing with the stop_sequence though I found that sometime when I add a specific stop_sequence the output changes and avoids mentioning the word entirely. I don't have live examples now but I just wanted to check if this is normal behaviour? If not I can find examples on public models and share it in a different issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@KMFODA I think your PR is almost ready to be merged! Would you like to try to fix the final problems and apply the review suggestions? :-) ",
"Hey @patrickvonplaten. My apologies I was out sick over the past month. I worked on the suggestions now. Hopefully this should be good to merge now but if not let me know!",
"I'm happy with the PR, except for the `EndOfStringCriteria` class -- it is not being used, and it is not a good practice to add unused classes/functions. \r\n\r\n@KMFODA can you remove it for now, and perhaps reintroduce it in a follow-up PR (with use cases)? :) ",
"Hi @gante yes of course. I had removed it locally but somehow the changes didn't push through with one of the commits. Forced changed it now. Hopefully that looks good now :)."
] | 1,659
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
As per the conversation in https://github.com/huggingface/transformers/issues/17562, creating this draft PR to add a stop_sequence option to text generation pipelines.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
Models:
All
Library:
- text generation: @patrickvonplaten
- pipelines: @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18444/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18444",
"html_url": "https://github.com/huggingface/transformers/pull/18444",
"diff_url": "https://github.com/huggingface/transformers/pull/18444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18444.patch",
"merged_at": 1664544411000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18443
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18443/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18443/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18443/events
|
https://github.com/huggingface/transformers/pull/18443
| 1,326,734,989
|
PR_kwDOCUB6oc48jS8q
| 18,443
|
Update no trainer scripts for language modeling and image classification examples
|
{
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks again for your contribution!"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18437
Updated no_trainer scripts for `examples/pytorch/image-classification/run_image_classification_no_trainer.py`, `examples/pytorch/language-modeling/run_clm_no_trainer.py` and `examples/pytorch/language-modeling/run_mlm_no_trainer.py` to include `gather_for_metrics`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18443/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18443",
"html_url": "https://github.com/huggingface/transformers/pull/18443",
"diff_url": "https://github.com/huggingface/transformers/pull/18443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18443.patch",
"merged_at": 1659529998000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18442
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18442/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18442/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18442/events
|
https://github.com/huggingface/transformers/pull/18442
| 1,326,693,458
|
PR_kwDOCUB6oc48jKFX
| 18,442
|
Update perf_train_gpu_one.mdx
|
{
"login": "thepurpleowl",
"id": 21123710,
"node_id": "MDQ6VXNlcjIxMTIzNzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thepurpleowl",
"html_url": "https://github.com/thepurpleowl",
"followers_url": "https://api.github.com/users/thepurpleowl/followers",
"following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}",
"gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions",
"organizations_url": "https://api.github.com/users/thepurpleowl/orgs",
"repos_url": "https://api.github.com/users/thepurpleowl/repos",
"events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}",
"received_events_url": "https://api.github.com/users/thepurpleowl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18442/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18442",
"html_url": "https://github.com/huggingface/transformers/pull/18442",
"diff_url": "https://github.com/huggingface/transformers/pull/18442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18442.patch",
"merged_at": 1662379596000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18441
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18441/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18441/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18441/events
|
https://github.com/huggingface/transformers/issues/18441
| 1,326,532,968
|
I_kwDOCUB6oc5PEUlo
| 18,441
|
Conversion from TF BERT Checkpoint to HF Model Breaks
|
{
"login": "vladd-i",
"id": 55069026,
"node_id": "MDQ6VXNlcjU1MDY5MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/55069026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vladd-i",
"html_url": "https://github.com/vladd-i",
"followers_url": "https://api.github.com/users/vladd-i/followers",
"following_url": "https://api.github.com/users/vladd-i/following{/other_user}",
"gists_url": "https://api.github.com/users/vladd-i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vladd-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vladd-i/subscriptions",
"organizations_url": "https://api.github.com/users/vladd-i/orgs",
"repos_url": "https://api.github.com/users/vladd-i/repos",
"events_url": "https://api.github.com/users/vladd-i/events{/privacy}",
"received_events_url": "https://api.github.com/users/vladd-i/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
I'm trying to convert the BERT TF1 checkpoint provided in [MLPerf GDrive](https://drive.google.com/drive/folders/1oQF4diVHNPCclykwdvQJw8n_VIWwV0PT?usp=sharing) to a HF BERT model using the following transformer-cli command provided in [HF documentation](https://huggingface.co/docs/transformers/converting_tensorflow_models):
```
transformers-cli convert --model_type bert --tf_checkpoint model.ckpt-28252 --config bert_config.json --pytorch_dump_output pytorch_model.bin
```
but it breaks with the following error:
```
Traceback (most recent call last):
File "/workdisk/vlad/composer_venv/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/commands/convert.py", line 103, in run
convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 172, in load_tf_weights_in_bert
if pointer.shape != array.shape:
File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Embedding' object has no attribute 'shape'
```
### Who can help?
@LysandreJik
@Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Download the following 4 files from [MLPerf GDrive](https://drive.google.com/drive/folders/1oQF4diVHNPCclykwdvQJw8n_VIWwV0PT?usp=sharing) and put them in the same directory:
- tf1_ckpt/model.ckpt-28252.data-00000-of-00001
- tf1_ckpt/model.ckpt-28252.index
- tf1_ckpt/model.ckpt-28252.meta
- bert_config.json
2. Run the command to convert TF checkpoint to HF BERT model:
```
transformers-cli convert --model_type bert --tf_checkpoint model.ckpt-28252 --config bert_config.json --pytorch_dump_output pytorch_model.bin
```
### Expected behavior
The command should convert TF checkpoint to HF BERT model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18441/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18441/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18440
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18440/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18440/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18440/events
|
https://github.com/huggingface/transformers/pull/18440
| 1,326,515,691
|
PR_kwDOCUB6oc48ilS-
| 18,440
|
Fix model list
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
MEMBER
| null |
This PR moves GroupViT and LXMert to their correct sections. As pointed out by @NielsRogge and @LysandreJik, GroupViT and LXMert are both multimodal models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18440/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18440",
"html_url": "https://github.com/huggingface/transformers/pull/18440",
"diff_url": "https://github.com/huggingface/transformers/pull/18440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18440.patch",
"merged_at": 1659522361000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18439
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18439/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18439/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18439/events
|
https://github.com/huggingface/transformers/pull/18439
| 1,326,483,184
|
PR_kwDOCUB6oc48ifAU
| 18,439
|
Integrate FlashAttention into HF OPT
|
{
"login": "erichan1",
"id": 30481032,
"node_id": "MDQ6VXNlcjMwNDgxMDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30481032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erichan1",
"html_url": "https://github.com/erichan1",
"followers_url": "https://api.github.com/users/erichan1/followers",
"following_url": "https://api.github.com/users/erichan1/following{/other_user}",
"gists_url": "https://api.github.com/users/erichan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erichan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erichan1/subscriptions",
"organizations_url": "https://api.github.com/users/erichan1/orgs",
"repos_url": "https://api.github.com/users/erichan1/repos",
"events_url": "https://api.github.com/users/erichan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/erichan1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18439). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, is there any updates? Coming from https://github.com/HazyResearch/flash-attention/blob/main/usage.md",
"Looking forward to the update!",
"> Looking forward to the update!\r\n\r\nHey there @puyuanOT! Not working on this actively anymore. Check out [torch SDP](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#:~:text=Scaled%20dot%20product%20attention%20attempts,for%20enabling%20and%20disabling%20implementations.) to use FlashAttn in native torch! ",
"Thanks @erichan1 ! I will check it out.",
"@erichan1 Could you explain the reason for stopping to work on this feature? I think it would be a great implementation for the transformers library.\r\nRegarding the torch SDP link, could you give instructions on how to use this torch feature when using a model in Huggingface transformers?\r\n\r\nEdit: Is it the case that flash attention is now activated by default with recent versions of torch? If so, I would recommend a HuggingFace blog article to advertise this feature and explain its workings. Currently documentation is rather lacking on flash-attention support.",
"Within the Hugging Face ecosystem, it's possible to use BetterTransformer and the optimum library to improve model performance: [[1](https://huggingface.co/docs/optimum/bettertransformer/tutorials/convert)], [[2](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)]. @younesbelkada Is flash attention available yet through this? ",
"@amyeroberts @vincentmin I'm from the PyTorch team. We decided that the best way to provide FlashAttention was to create a new module that was just the component FlashAttention covers, [Scaled Dot Product Attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#:~:text=Scaled%20dot%20product%20attention%20attempts,for%20enabling%20and%20disabling%20implementations.). This is the part which does softmax(Q@K)@V, and doesn't include the in projection and out projection. Since we built this abstraction, we also decided that we could use it to offer some other implementations of SDP, including a memory efficient one that we've built in house which uses less memory than FlashAttn, but is slower. \r\n\r\nYou can just directly use SDP by replacing the necessary chunk of code in your transformer definition. But I'm unsure about a way to use it with a flag you flip in HuggingFace. I'll let @younesbelkada speak to that. I believe BetterTransformer and SDP (which is part of BetterTransformer) support is already part of Optimum. ",
"@erichan1 @amyeroberts Thank you for the clarifications. I now understand that BetterTransformer should offer the features I am looking for. I encourage you to write a blog post on Huggingface to advertise this to the world!",
"Hi @erichan1 @amyeroberts @vincentmin \r\nThis is correct, SDPA is now part of the optimum's `BetterTransformer` API, however this is only available for decoder-based models right now. \r\nWe are indeed panning to write a blogpost soon with Pytorch to publicly announce the feature soon. We will keep you posted here!",
"Hi, any recent updates on this blogpost for `BetterTransformer` that you mentioned earlier?",
"Hi @KatarinaYuan \r\nYes the blogpost is out and is here: https://pytorch.org/blog/out-of-the-box-acceleration/",
"Thank you!\r\n\r\n> On Jun 14, 2023, at 3:33 AM, Younes Belkada ***@***.*** ***@***.***>> wrote:\r\n> \r\n> \r\n> Hi @KatarinaYuan <https://github.com/KatarinaYuan>\r\n> Yes the blogpost is out and is here: https://pytorch.org/blog/out-of-the-box-acceleration/ <https://pytorch.org/blog/out-of-the-box-acceleration/>\r\n> β\r\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/pull/18439#issuecomment-1590637126>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AKL7G2YPEJ4QVA2DTHM5EBDXLFSOJANCNFSM55M2CJGA>.\r\n> You are receiving this because you were mentioned.\r\n> \r\n\r\n",
"I use the transformer trainer + FSDP llama training options, model cannot be saved, and unable to use bettertransformer.reverse() convert to original model. I don't know how to deal with this problem.",
"Are there any updates on the integration of FlashAttention into HuggingFace Transformers?",
"@EwoutH \r\nFlashattention should be used as a backend for torch.SDPA which is itself integrated into `BetterTransformer` API. Make sure to install the latest transformers and optimum libraries and run:\r\n```python\r\nmodel = model.to_bettertransformer()\r\n```\r\nCheck the blogpost: https://pytorch.org/blog/out-of-the-box-acceleration/ for reference\r\n\r\ncc @fxmarty as well",
"is BetterTransformer up to date with FlashAttention v2?",
"Hi, BetterTransformer integrates with PyTorch SDPA (for now), and PyTorch has not integrated flash v2 yet: https://github.com/pytorch/pytorch/pull/105602. Hopefully it will be there in Pytorch 2.1."
] | 1,659
| 1,690
| 1,662
|
NONE
| null |
Integrate FlashAttention.
- Requires https://github.com/pytorch/pytorch/pull/81434 to work, since torch._scaled_dot_product_attention only exists there.
- Turn on fast path or go back to slow path using fast_attention=True/False flag.
- Turn on causal mask or turn it off for the fast attention path using fast_attention_causal = True/False.
- Does not support attention mask or padding mask on the fast path.
- Currently requires an unnecessary conversion to NestedTensor and back, because the current FlashAttn implementation only takes NestedTensor. Will remove once torch._scaled_dot_product_attention supports regular tensors.
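For reference, the operation this fast path accelerates — softmax(QKᵀ/√d)·V — can be written out in plain Python. This is a framework-free sketch with tiny hypothetical shapes, not the fused kernel itself:

```python
import math

def softmax(row):
    m = max(row)                              # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def sdpa(Q, K, V):
    # softmax(Q @ K^T / sqrt(d)) @ V -- the computation that the fused
    # scaled-dot-product-attention kernel performs in one pass.
    d = len(Q[0])
    scores = [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in K] for q in Q]
    weights = [softmax(row) for row in scores]
    return [[sum(w[j] * V[j][i] for j in range(len(V))) for i in range(len(V[0]))]
            for w in weights]

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = sdpa(Q, K, V)
assert len(out) == 2 and len(out[0]) == 2
```

When all keys are identical, the attention weights become uniform and each output row is simply the mean of the value rows — a quick sanity check on the math.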
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18439/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18439",
"html_url": "https://github.com/huggingface/transformers/pull/18439",
"diff_url": "https://github.com/huggingface/transformers/pull/18439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18439.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18438
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18438/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18438/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18438/events
|
https://github.com/huggingface/transformers/pull/18438
| 1,326,423,220
|
PR_kwDOCUB6oc48iR4_
| 18,438
|
Use new huggingface_hub tools for download models
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yay! TYSM!!!"
] | 1,659
| 1,660
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
This PR migrates Transformers to fully rely on `huggingface_hub` for the internal download and cache of all objects in Transformers (models, configs, tokenizers, feature extractors, etc.). To achieve this, a new function `cached_file` is introduced to replace the old `cached_path`, which relies on `hf_hub_download`. The whole refinement of exceptions is left in this function, which allows for a lot of refactoring of duplicate code across files, as well as removing some ugly try/except chains when we try several files in a row.
The cache is in the new format of huggingface_hub after this PR. To avoid breaking changes, a script will automatically convert the cache of users from the old format to the new format at the first transformers import. A warning is raised for offline users, and if the move fails for any reason or is interrupted, the user can still try later on with `transformers.util.move_cache()` (did not make a CLI command of it, but it's doable if we want).
**Note:** To avoid this PR being too heavy, all uses of `cached_path` and other old hub utils are not changed, only the main ones in the `from_pretrained` methods. A follow-up PR will hunt all remaining instances and remove those utils from the lib.
In the CI we trust! As you can see from the tests, this all comes at zero breaking changes (detected by the CI). The only modifications to the tests are small adaptations needed for some mock tests simulating no connection. Anticipated small breaking changes are:
- some error messages have slightly changed.
- if a user relied on offline mode and does not update their cache while updating Transformers, it will break. They need to be online to convert their cache to the new format.
For full backward compatibility with what `cached_path` used to do though, a few hacks were necessary. Some of those can be removed in the future if changes are made to `huggingface_hub`:
1. having methods to allow for enabling/disabling progress bars
2. throwing a `FileNotFoundError` instead of a `ValueError` when in offline mode and the file is not in the cache
3. having `hf_hub_download` look for files in the cache in case of connection errors (so if a user has the file cached and hf.co is down, they still get their last updated version).
For 1, I had to do a contextmanager that patches huggingface_hub.
For 2, I match the exact error message for the exception I want to catch, but if no change is made in hf hub, we'll at least need a comment in bold telling the maintainers there to never update the message.
For 3, a new function `try_to_load_from_cache` is created, which can definitely live in Transformers forever if it's not deemed suitable for `huggingface_hub`.
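The cache-fallback behavior described in point 3 can be sketched as follows. This is a hypothetical helper with illustrative names, not the actual implementation: the idea is simply "prefer the network, fall back to the local cache on connection errors":

```python
import os

def try_to_load_from_cache(cache_dir, filename):
    # Hypothetical sketch: return the cached path if present, else None,
    # so callers can fall back to it when hf.co is unreachable.
    path = os.path.join(cache_dir, filename)
    return path if os.path.isfile(path) else None

def fetch(cache_dir, filename, download):
    # Try the remote first; on a connection error, serve the last
    # cached version instead of failing outright.
    try:
        return download(filename)
    except ConnectionError:
        cached = try_to_load_from_cache(cache_dir, filename)
        if cached is None:
            raise
        return cached
```

The key design point is that the cache lookup only happens on *connection* errors, so a genuinely missing remote file still raises instead of silently serving stale data.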
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18438/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18438",
"html_url": "https://github.com/huggingface/transformers/pull/18438",
"diff_url": "https://github.com/huggingface/transformers/pull/18438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18438.patch",
"merged_at": 1659708760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18437
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18437/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18437/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18437/events
|
https://github.com/huggingface/transformers/issues/18437
| 1,326,413,217
|
I_kwDOCUB6oc5PD3Wh
| 18,437
|
Update no_trainer scripts to include gather_for_metrics
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi @muellerzr opened PR #18443 for first three examples in the list.",
"Hi @muellerzr I opened this PR https://github.com/huggingface/transformers/pull/18468 for the 4th example and ran it locally.\r\nPlease let me know if there is any changes, you would like done on this example, And I'll update it and add the feedback while I work on examples 5 and 6",
"Hi @muellerzr I opened this PR https://github.com/huggingface/transformers/pull/18474 for examples 5,6 and 7.",
"@muellerzr In the 7th subtask (semantic segmentation), I think it is already updated if I am not wrong.\n\nI want to work on this issue",
"Hi @muellerzr I opened this PR #18877 for example 8. Please let me know if there is any changes",
"This issue needs to be closed. All the work is already done it seems. ",
"Seems like [example 9](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py#L692) was already fixed but not checked off.",
"@muellerzr Can you close this issue?",
"Thanks to everyone who worked on this!"
] | 1,659
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
### Feature request
π€ Accelerate has a wrapper to help with distributed metric calculation (a tough problem!), and the `no_trainer` scripts should be updated to include it!
An example can be seen [here](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py#L163-L169), below is an example diff of what the integration would look like:
```diff
- predictions, references = accelerator.gather((predictions, batch["labels"]))
- # If we are in a multiprocess environment, the last batch has duplicates
- if accelerator.num_processes > 1:
- if step == len(eval_dataloader) - 1:
- predictions = predictions[: len(eval_dataloader.dataset) - samples_seen]
- references = references[: len(eval_dataloader.dataset) - samples_seen]
- else:
- samples_seen += references.shape[0]
+ predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
```
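Why the truncation is needed at all can be seen in a small framework-free simulation. `gather_with_duplicates` below is an illustrative stand-in for what a distributed all-gather returns when the dataset size is not divisible by the number of processes — it is not Accelerate's API; `gather_for_metrics` does this bookkeeping internally:

```python
def gather_with_duplicates(dataset, num_processes):
    # Each process gets ceil(len/num_processes) samples, so the shards are
    # padded with duplicated samples and the gather returns the padded total.
    per_proc = -(-len(dataset) // num_processes)  # ceil division
    return dataset + dataset[: per_proc * num_processes - len(dataset)]

def trim_for_metrics(gathered, dataset_len):
    # Drop the duplicated tail so metrics see each sample exactly once.
    return gathered[:dataset_len]

# 10 samples across 4 processes: each holds 3, so 12 rows come back.
dataset = list(range(10))
gathered = gather_with_duplicates(dataset, num_processes=4)
assert len(gathered) == 12                        # duplicates included
assert trim_for_metrics(gathered, len(dataset)) == dataset
```

This is exactly the `samples_seen` arithmetic the diff removes from the scripts — `gather_for_metrics` takes over the trimming.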
The list of available scripts to update include:
- [x] examples/pytorch/image-classification/run_image_classification_no_trainer.py
- [x] examples/pytorch/language-modeling/run_clm_no_trainer.py
- [x] examples/pytorch/language-modeling/run_mlm_no_trainer.py
- [x] examples/pytorch/multiple-choice/run_swag_no_trainer.py
- [x] examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py
- [x] examples/pytorch/question_answering/run_qa_no_trainer.py
- [x] examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py
- [x] examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py
- [x] examples/pytorch/summarization/run_summarization_no_trainer.py
### Motivation
This is a great first issue for someone who wants to learn how to use some of the latest bits in Accelerate and get an easy beginner contribution to the library π€
### Your contribution
If you decide to pick up this issue, feel free to ping myself (@muellerzr), @sgugger, or @pacman100 to review π€
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18437/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18436
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18436/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18436/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18436/events
|
https://github.com/huggingface/transformers/issues/18436
| 1,326,409,645
|
I_kwDOCUB6oc5PD2et
| 18,436
|
Update no_trainer scripts to include gradient accumulation
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi @muellerzr \r\n\r\nI took a go at this (accelerate seems awesome!), and implemented the changes quickly. However, I noticed some performance degredation when using the the gradient accumulation wrapper. \r\n\r\nAfter some debugging, I think it stems from the lr_scheduler implementation in accelerate updating learning rate at every step in training loop whereas the example script updates the learning rate every optimizer step. \r\n\r\nSo I think either accelerate needs to add something like \r\n\r\n```python\r\n# Otherwise, first make sure the optimizer was stepped.\r\nfor opt in self.optimizers:\r\n if opt.step_was_skipped or not opt.gradient_state.sync_gradients:\r\n return\r\n```\r\n\r\nto scheduler.py implementation at line 59 \r\n\r\nOr the script should have\r\n\r\n```python\r\nif accelerator.sync_gradients:\r\n lr_scheduler.step()\r\n```\r\n\r\nI think this should be changed in accelerate. Let me know what you think or if im totally off! I'll be happy to do issue + PR to fix in accelerate and I'll definetly fix the example scripts in transformers. :) \r\n",
"No we can't do this as then the user would have to know in advance the number of optimization steps when they create their scheduler (which they don't since Accelerate handles gradient accumulation behind the scenes). That's why the learning rate scheduler should be created with the full number of training batches prior to gradient accumulation, then stepped at each batch (which is roughly equivalent to creating it with the right number of optimization batches and step at every optimization step).",
"@sgugger Cool! \r\n\r\nSo if I understand you comment, \r\n\r\n* learning rate scheduler should not know anything about the actual optimization steps, but assume every batch is a step\r\n\t- Hence, num_training_steps for the lr_scheduler is num_training_steps=math.ceil(len(train_dataloader)) * args.num_train_epochs, instead of taking gradient_accumulation_steps into account\r\n\t- This means that if gradient_accumulation_steps is 5, we will take 4 steps of scheduling learning rate without actually using it for gradient updates\r\n\r\nI've made a WIP pull request for the image examples/pytorch/image-classification/run_image_classification_no_trainer.py script (I'll update the rest of the scripts once i'm certain its the correct approach), \r\n\r\n* The current functionality of progress_bar / completed_steps is only increment when doing an optimization step i.e.\r\n\r\n```python\r\nif step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:\r\n progress_bar.update(1)\r\n completed_steps += 1\r\n```\r\n\r\nSo to keep the functionality, we need to know if optimization step occurred here which I think we can use\r\n\r\n```python\r\nif accelerator.sync_gradients\r\n progress_bar.update(1)\r\n completed_steps += 1\r\n```\r\n\r\nbut is this also something that should be kept away i.e. change logic a bit so that completed_steps == completed_batches instead of optimization_steps ?\r\n\r\n\r\n",
"It's going to be easier to just have the progress bar display the number of completed steps. Also, we should multiply `max_steps` by the number of gradient accumulation steps for the same reason (if the user provides it).",
"I think either option would work fine as well. The reason behind `sync_gradients` as part of the Accelerator is to provide this open interface to perform a check like this, so from an API design it's correct.\r\n\r\nMy $0.02 is to either explain in a comment what `sync_gradients` checks briefly, or to do as Sylvain recommended here. ",
"Hi @muellerzr opened PR https://github.com/huggingface/transformers/pull/18601 for second example in the list.",
"Hi @muellerzr opened a PR for 8th example on the list. Please let me know if something is wrong. (This is my first contribution ever). ",
"Hi @muellerzr!\r\nAny script to update yet?",
"Hi, I believe there is an issue with this PR (Rasmusafj:issue_18436), particularly for run_mlm_no_trainer.py. I am running BERT pretraining with this script and I run with the following arguments on 8 GPUs:\r\n`\r\n--num_warmup_steps 10000\r\n--max_train_steps 200000\r\n--checkpointing_steps 500\r\n--per_device_batch_size 256\r\n--gradient_accumulation_steps 2\r\n`\r\n\r\nWhen tracking the learning rate, the learning rate peaks at step 2500 (`completed_steps == 2500`), even though the training will stop at 200k completed_steps. My guess is the learning_rate is stepped for each of the 8 GPUs so the warmup is only actually 10k / 8 = 1.25k. Multiplied by the 2 gradient accum steps which are likely accounted for by the accumulate wrapper we end up with 2.5k warmup steps. \r\n\r\nI saw it suggested above by @Rasmusafj that we only step the learning rate when sync_gradients is true, which I believe would solve this issue for me, and bring about the right expected behavior. I saw @sgugger recommended against this, however.\r\n\r\nI am tracking the learning rate by printing `lr_scheduler.get_last_lr()[0]` every `checkpointing_steps` interval. \r\nNOTE: I am using accelerate with the deepspeed plugin.",
"cc @muellerzr so it's on your radar. It's True that then we use number of steps instead of number of epochs for a given training, the logic we have for the scheduler fails",
"I meet the same problem as @sameerreddy13",
"Maybe we should make it clear what does `step` mean in warmup_steps? one step fetching data from dataloader or one completed_step?",
"It should always be one gradient update step because that is the common assumption in literature as it is tied to the learning rate scheduler. In practice if we have batch size K and grad accum A we report the effective batch size as K * A. To fully fix this issue I did the following:\r\n\r\n```\r\nlr_scheduler = get_scheduler(\r\n name=args.lr_scheduler_type,\r\n optimizer=optimizer,\r\n num_warmup_steps=args.num_warmup_steps * accelerator.num_processes,\r\n num_training_steps=args.max_train_steps * accelerator.num_processes,\r\n)\r\n...\r\nif step % args.gradient_accumulation_steps != 0:\r\n # Gradients only accumulate\r\n with accelerator.no_sync(model):\r\n outputs = model(**batch)\r\n accelerator.backward(outputs.loss)\r\n else:\r\n # Gradients finally sync\r\n outputs = model(**batch)\r\n accelerator.backward(outputs.loss)\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n if (\r\n completed_steps < args.num_warmup_steps\r\n or lr_scheduler.get_last_lr()[0] > args.min_learning_rate\r\n ):\r\n lr_scheduler.step()\r\n```",
"It's been a while since I made this change but I manually used `no_sync`. iirc there was some underlying issue with the `accelerator.accumulate(model)` . I believe when I did a validation loop inside the training loop (say every K batches you want to get validation loss) that this broke the gradient accumulation, and only one gradient accum step would happen irregardless of the configured argument. You can see this at a coarse grained level by putting a validation step inside the train loop, setting grad_accum to something like 4 and observing the training suddenly speed up after the first evaluation. ",
"@sameerreddy13 , I agree with you. I also write a snippet about this at https://github.com/huggingface/accelerate/issues/1382#issuecomment-1534924835 with two different points:\r\n- first, I initialize my `lr_scheduler` without `*accelerate.num_processes` and not pass it to `prepare`, do you think this is equivalent to yours?\r\n- I still use `accelerator.accumulate(model)` because I didn't notice the underlying issue, if that is really the case, what about only validating after certain `completed steps` rather than certain batches ?",
"is this issue still open? can the relevant people mark which PRs have been are merged/w.i.p ?\r\n\r\nI see there is https://github.com/huggingface/transformers/pull/18601 from @vedant-z but it's been closed?",
"Sry, any update or final answer here?"
] | 1,659
| 1,698
| null |
CONTRIBUTOR
| null |
### Feature request
π€ Accelerate has a gradient accumulation wrapper, and the `no_trainer` scripts should be updated to include it!
An example can be seen [here](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation.py), below is an example diff of what the integration would look like:
```diff
- accelerator = (
- Accelerator(log_with=args.report_to, logging_dir=args.output_dir) if args.with_tracking else Accelerator()
- )
+ accelerator = (
+ Accelerator(log_with=args.report_to, logging_dir=args.output_dir, gradient_accumulation_steps=args.gradient_accumulation_steps) if args.with_tracking else Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps)
+ )
```
As well as:
```diff
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
+ num_update_steps_per_epoch = len(train_dataloader)
...
for step, batch in enumerate(train_dataloader):
+ with accelerator.accumulate(model):
```
```diff
- loss = loss / args.gradient_accumulation_steps
accelerator.backward(loss)
- if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
completed_steps += 1
```
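The mathematical effect of the change — stepping the optimizer once per `gradient_accumulation_steps` micro-batches with each micro-batch loss scaled by `1/accum_steps` — can be checked with a tiny framework-free sketch. This uses plain Python gradients for a one-parameter least-squares model; the function names are illustrative, not Accelerate's API:

```python
# One parameter w, per-sample loss (w*x - y)^2, gradient 2*x*(w*x - y).
def grad(w, x, y):
    return 2 * x * (w * x - y)

def full_batch_step(w, batch, lr):
    # One optimizer step on the mean gradient over the whole batch.
    g = sum(grad(w, x, y) for x, y in batch) / len(batch)
    return w - lr * g

def accumulated_step(w, batch, lr, accum_steps):
    # Split the batch into micro-batches; scale each micro-batch gradient
    # by 1/accum_steps so the accumulated sum equals the full-batch mean.
    micro = len(batch) // accum_steps
    g = 0.0
    for i in range(accum_steps):
        chunk = batch[i * micro : (i + 1) * micro]
        g += sum(grad(w, x, y) for x, y in chunk) / micro / accum_steps
    return w - lr * g  # optimizer.step() happens once, after accumulation

batch = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 9.0)]
w0, lr = 0.5, 0.01
assert abs(full_batch_step(w0, batch, lr) - accumulated_step(w0, batch, lr, 2)) < 1e-12
```

Both paths produce the same parameter update, which is why the explicit `loss / gradient_accumulation_steps` line can be deleted once `accelerator.accumulate(model)` handles the scaling.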
The list of available scripts to update include:
- [ ] examples/pytorch/image-classification/run_image_classification_no_trainer.py
- [ ] examples/pytorch/language-modeling/run_clm_no_trainer.py
- [ ] examples/pytorch/language-modeling/run_mlm_no_trainer.py
- [ ] examples/pytorch/multiple-choice/run_swag_no_trainer.py
- [ ] examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py
- [ ] examples/pytorch/question_answering/run_qa_no_trainer.py
- [ ] examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py
- [ ] examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py
- [ ] examples/pytorch/summarization/run_summarization_no_trainer.py
### Motivation
This is a great first issue for someone who wants to learn how to use some of the latest bits in Accelerate and get an easy beginner contribution to the library π€
### Your contribution
If you decide to pick up this issue, feel free to ping myself (@muellerzr), @sgugger, or @pacman100 to review π€
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18436/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18435
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18435/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18435/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18435/events
|
https://github.com/huggingface/transformers/pull/18435
| 1,326,351,330
|
PR_kwDOCUB6oc48iCUE
| 18,435
|
fixing error when using sharded ddp
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #18410
1. conditional logic fixed
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18435/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18435",
"html_url": "https://github.com/huggingface/transformers/pull/18435",
"diff_url": "https://github.com/huggingface/transformers/pull/18435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18435.patch",
"merged_at": 1659496198000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18434
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18434/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18434/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18434/events
|
https://github.com/huggingface/transformers/pull/18434
| 1,326,311,781
|
PR_kwDOCUB6oc48h53o
| 18,434
|
Update BLOOM Overview in Doc: Add programming languages
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
The current wording makes it sound as if the programming languages are part of the 46 natural languages. This PR adds the exact number of programming languages to avoid confusion.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
I'm not sure who to tag for this, so maybe @osanseviero @younesbelkada and @sgugger ? :smiley_cat:
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18434/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18434/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18434",
"html_url": "https://github.com/huggingface/transformers/pull/18434",
"diff_url": "https://github.com/huggingface/transformers/pull/18434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18434.patch",
"merged_at": 1659470546000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18433
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18433/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18433/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18433/events
|
https://github.com/huggingface/transformers/issues/18433
| 1,326,245,069
|
I_kwDOCUB6oc5PDOTN
| 18,433
|
BlenderBot-Distil-400M training fails if the input or target length exceeds a certain threshold, even when truncation and padding are on
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Adding `padding=True` when tokenizing both the input and targets does not fix the issue.",
"Also, when running the script using the CPU only, I get this error:\r\n\r\n```\r\nroot@pc:~ # CUDA_VISIBLE_DEVICES=\"\" python script_blenderbot_length.py \r\n100%|ββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 4.95ba/s]\r\n100%|ββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 5.46ba/s]\r\nmax_steps is given, it will override any value given in num_train_epochs\r\nThe following columns in the training set don't have a corresponding argument in `BlenderbotForConditionalGeneration.forward` and have been ignored: target, input. If target, input are not expected by `BlenderbotForConditionalGeneration.forward`, you can safely ignore this message.\r\n/miniconda/lib/python3.7/site-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\r\n FutureWarning,\r\n***** Running training *****\r\n Num examples = 500\r\n Num Epochs = 80\r\n Instantaneous batch size per device = 4\r\n Total train batch size (w. 
parallel, distributed & accumulation) = 4\r\n Gradient Accumulation steps = 1\r\n Total optimization steps = 10000\r\n 0%| | 0/10000 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"script_blenderbot_length.py\", line 103, in <module>\r\n main()\r\n File \"script_blenderbot_length.py\", line 99, in main\r\n trainer.train()\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/trainer.py\", line 1502, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/trainer.py\", line 1740, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/trainer.py\", line 2470, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/trainer.py\", line 2502, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py\", line 1340, in forward\r\n return_dict=return_dict,\r\n File \"/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nFile \"/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py\", line 1181, in forward\r\n return_dict=return_dict,\r\n File \"/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py\", line 738, in forward\r\n embed_pos = self.embed_positions(input_shape)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return 
forward_call(*input, **kwargs)\r\n File \"/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py\", line 125, in forward\r\n return super().forward(positions)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 160, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/nn/functional.py\", line 2044, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nIndexError: index out of range in self\r\n 0%| | 0/10000 [00:00<?, ?it/s]\r\n```",
"I've found out why the error seems to appear. I modified `transformers/src/transformers/models/blenderbot/modeling_blenderbot.py:BlenderbotLearnedPositionalEmbedding:forward` (approximately near line 125).\r\n\r\n```diff\r\n positions = torch.arange(\r\n past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device\r\n )\r\n+ print(positions)\r\n+ print(self.weight.shape)\r\n return super().forward(positions)\r\n```\r\n\r\nWhen running the script, I get this in the output:\r\n```\r\ntensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,\r\n 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,\r\n 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,\r\n 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,\r\n 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69,\r\n 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83,\r\n 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97,\r\n 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111,\r\n 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125,\r\n 126, 127, 128, 129])\r\ntorch.Size([128, 1280])\r\n```\r\n\r\nClearly, the positional embeddings are beyond the maximum range available. The question is why... Perhaps this can be configured in the constructor?",
"The length of the positions seems to be equal to `2*CRITICAL_NUMBER + 1`.",
"And... it goes to a maximum of the tokenizer's max_length-1, which is expected, I guess.",
"Ah. So the issue is that in the `BlenderbotConfig`, `max_position_embeddings` is set to 128. The publicly available weights only have position embeddings with those dimensions, so either I'd have to train from scratch or reduce the max tokenizer length to 128.",
"But seriously, this exception should be caught and re-raised with a more human-readable expression.",
"(I can contribute a fix after my internship ends, not before)",
"Catching and re-raising the exception during GPU training doesn't result in a more human-readable expression (It's still `RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)`, but at least the flood of CUDA asserts are gone). Getting a more human-readable exception seems to be only possible for CPU-only training.",
"cc @sgugger for usage with the `Trainer`!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### System Info
transformers version: 4.20.1, 4.21.0
Platform: Linux
Python version: 3.7.6
Huggingface_hub version: 0.8.1
PyTorch version (GPU?): 1.10.2 (Yes)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes (2+ Tesla V100)
Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following script with `python script_blenderbot_length.py`
```python
# The contents of script_blenderbot_length.py
# To make the code crash, set CRITICAL_NUMBER=64
# To make it pass, set CRITICAL_NUMBER=63
# The code fails if EITHER the input or the target is repeated 64+ times.
from __future__ import annotations
import functools
import typing as tp
import datasets
import transformers
from transformers import (
DataCollatorForSeq2Seq,
PreTrainedTokenizer,
Seq2SeqTrainingArguments,
Seq2SeqTrainer,
)
CRITICAL_NUMBER = 64
increment_en = [
{"input": "One", "target": "Two"},
{"input": "Three "*2, "target": "Four "*2},
{"input": "Five "*4, "target": "Six "*4},
{"input": "Seven "*8, "target": "Eight "*8},
{"input": "Nine "*CRITICAL_NUMBER, "target": "Ten "*CRITICAL_NUMBER},
]
increment_en = increment_en * 100
def lod_to_dol(list_of_dicts: tp.List[tp.Dict[str, tp.Any]]) -> tp.Dict[str, list]:
dict_of_lists = {
key: [dct[key] for dct in list_of_dicts] for key in list_of_dicts[0]
}
return dict_of_lists
increment_en = lod_to_dol(increment_en)
def preprocess_function_(
examples,
tokenizer: PreTrainedTokenizer,
max_input_length: int,
max_target_length: int,
):
inputs = examples["input"]
targets = examples["target"]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(targets, max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
def main():
tokenizer = transformers.BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = transformers.BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
args = Seq2SeqTrainingArguments(
"script_debug",
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
fp16=True,
push_to_hub=False,
max_steps=10000,
logging_steps=5000,
save_steps=5000
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding=True)
dataset = datasets.DatasetDict(
{
"train": datasets.Dataset.from_dict(increment_en),
"test": datasets.Dataset.from_dict(increment_en),
}
)
preprocess_function = functools.partial(
preprocess_function_,
tokenizer=tokenizer,
max_input_length=512,
max_target_length=512
)
processed_ds = dataset.map(preprocess_function, batched=True)
processed_ds.set_format(
type="torch", columns=["input_ids", "attention_mask", "labels"]
)
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=processed_ds["train"],
eval_dataset=processed_ds["test"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
if __name__ == "__main__":
main()
```
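Why 64 is the critical value follows from the position tensor length reported in the comment thread (`2*CRITICAL_NUMBER + 1`, i.e. roughly two BPE tokens per repeated word plus a terminal token) measured against the model's 128-entry position table. A back-of-the-envelope check (the token counts are taken from the thread's debug output, not re-derived from the tokenizer):

```python
MAX_POSITION_EMBEDDINGS = 128  # position table size for blenderbot-400M-distill

def approx_positions(repeats: int) -> int:
    # Per the debug output in the thread, each repeated word yields two
    # tokens and one terminal token is appended: 2 * repeats + 1.
    return 2 * repeats + 1

assert approx_positions(63) <= MAX_POSITION_EMBEDDINGS  # 127: trains fine
assert approx_positions(64) > MAX_POSITION_EMBEDDINGS   # 129: index out of range
```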
Running the code when `CRITICAL_NUMBER` is set to 64 or greater leads to the bizarre series of CUDA asserts:
```
<Similar messages appear above, which are omitted for brevity>
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
0%| | 0/10000 [00:07<?, ?it/s]
root@bolt-imq45r3c3y-8dfzr73qqa:/mnt/task_runtime# python script_blenderbot_length.py
100%|██████████████████████████| 1/1 [00:00<00:00, 5.30ba/s]
100%|██████████████████████████| 1/1 [00:00<00:00, 5.72ba/s]
max_steps is given, it will override any value given in num_train_epochs
Using cuda_amp half precision backend
The following columns in the training set don't have a corresponding argument in `BlenderbotForConditionalGeneration.forward` and have been ignored: target, input. If target, input are not expected by `BlenderbotForConditionalGeneration.forward`, you can safely ignore this message.
/miniconda/lib/python3.7/site-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
***** Running training *****
Num examples = 500
Num Epochs = 313
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 10000
0%| | 0/10000 [00:00<?, ?it/s]Traceback (most recent call last):
File "script_blenderbot_length.py", line 101, in <module>
main()
File "script_blenderbot_length.py", line 97, in main
trainer.train()
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1502, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1740, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2470, in training_step
loss = self.compute_loss(model, inputs)
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2502, in compute_loss
outputs = model(**inputs)
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/miniconda/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 1340, in forward
return_dict=return_dict,
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 1181, in forward
return_dict=return_dict,
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 785, in forward
output_attentions=output_attentions,
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 318, in forward
output_attentions=output_attentions,
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 180, in forward
query_states = self.q_proj(hidden_states) * self.scaling
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
File "/miniconda/lib/python3.7/site-packages/torch/nn/functional.py", line 1848, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
```
### Expected behavior
The training code should not crash, especially when there are far fewer tokens than the tokenization limit.
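Given the root cause (input positions exceeding `config.max_position_embeddings`), the opaque CUDA assert could be replaced by a pre-flight length check before the embedding lookup. A minimal, framework-free sketch of such a guard (the helper name and message are hypothetical, not existing library API):

```python
def check_sequence_length(seq_len: int, max_position_embeddings: int) -> None:
    """Raise a readable error instead of letting the position-embedding
    lookup fail with an index-out-of-range / CUBLAS assert deep inside
    the model's forward pass."""
    if seq_len > max_position_embeddings:
        raise ValueError(
            f"Sequence of length {seq_len} exceeds the model's "
            f"max_position_embeddings ({max_position_embeddings}); "
            "truncate inputs to at most that many tokens."
        )

check_sequence_length(128, 128)  # within the position table: no error
try:
    check_sequence_length(130, 128)  # over the limit: readable error
except ValueError as err:
    print(err)
```

A check like this in the model (or a tokenizer `max_length` derived from the config rather than hard-coded to 512) would surface the problem before any CUDA kernel runs.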
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18433/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18432
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18432/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18432/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18432/events
|
https://github.com/huggingface/transformers/pull/18432
| 1,326,160,381
|
PR_kwDOCUB6oc48hYeV
| 18,432
|
Improve generate docstring (for TF and FLAX)
|
{
"login": "JoaoLages",
"id": 17574157,
"node_id": "MDQ6VXNlcjE3NTc0MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoaoLages",
"html_url": "https://github.com/JoaoLages",
"followers_url": "https://api.github.com/users/JoaoLages/followers",
"following_url": "https://api.github.com/users/JoaoLages/following{/other_user}",
"gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions",
"organizations_url": "https://api.github.com/users/JoaoLages/orgs",
"repos_url": "https://api.github.com/users/JoaoLages/repos",
"events_url": "https://api.github.com/users/JoaoLages/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoaoLages/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like we just need a quick `make style` to be ready to merge :-)",
"Mmm no the formatting has done something very wrong here, we can't have that. There is likely some syntax error in the docstring.\r\nProblem looks to be in the input_ids argument at line 422 of the TF generation files, the type should all be on one line.",
"> Mmm no the formatting has done something very wrong here, we can't have that. There is likely some syntax error in the docstring. Problem looks to be in the input_ids argument at line 422 of the TF generation files, the type should all be on one line.\r\n\r\n`make extra_style_checks` does that automatically. What do you propose?",
"As I said, there is a syntax error in the docstring that makes the styling script behave erratically. The first step is to revert the changes, fix the syntax error then re-run it.",
"I changed this line\r\n```\r\n input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`, `(batch_size, sequence_length,\r\n feature_dim)` or `(batch_size, num_channels, height, width)`, *optional*):\r\n```\r\nto this \r\n```\r\n input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`, `(batch_size, sequence_length, feature_dim)` or `(batch_size, num_channels, height, width)`, *optional*):\r\n```\r\nbut then `make extra_style_checks` reverts the change ",
"Ah yes, just tried locally and it's due to the empty line between `Parameters:` and `input_ids`. If you remove it, then your changes should not be overwritten.",
"> Ah yes, just tried locally and it's due to the empty line between `Parameters:` and `input_ids`. If you remove it, then your changes should not be overwritten.\r\n\r\nNice catch, but it still does look strange in the docs π€ \r\n\r\n\r\n",
"Uhmm, there is something wrong with the automatic styler -- e.g. the pytorch generate file should not be touched at all in this PR. As Sylvain wrote, the easiest solution is to start from a new branch π€ ",
"> Uhmm, there is something wrong with the automatic styler -- e.g. the pytorch generate file should not be touched at all in this PR. As Sylvain wrote, the easiest solution is to start from a new branch π€\r\n\r\n[The docs seem fine now](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18432/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin), right? It's just that this test, `doc-builder style src/transformers docs/source --max_len 119 --check_only --path_to_docs docs/source`, is not allowing to have more than 119 characters per line, but we need it here.",
"Nope, the docs are not fine for the PyTorch side with all the changes in this PR (and as @gante mentioned that file should not be touched at all). The doc-style is completely comfortable with lines that are more than 119 chars when it identifies they are parameter introduction lines, you just needed to remove the blank line between Parameters: and the first argument in `generate`.",
"Ah right, it has these strange hyphens...\r\n\r\n\r\nI will close the PR then, let's disregard these changes."
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Just a continuation PR of https://github.com/huggingface/transformers/pull/18198 for TF and FLAX code
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18432/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18432/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18432",
"html_url": "https://github.com/huggingface/transformers/pull/18432",
"diff_url": "https://github.com/huggingface/transformers/pull/18432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18432.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18431
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18431/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18431/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18431/events
|
https://github.com/huggingface/transformers/issues/18431
| 1,326,110,614
|
I_kwDOCUB6oc5PCteW
| 18,431
|
Make Tokenizers serializable to TF SavedModel format
|
{
"login": "piEsposito",
"id": 47679710,
"node_id": "MDQ6VXNlcjQ3Njc5NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piEsposito",
"html_url": "https://github.com/piEsposito",
"followers_url": "https://api.github.com/users/piEsposito/followers",
"following_url": "https://api.github.com/users/piEsposito/following{/other_user}",
"gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions",
"organizations_url": "https://api.github.com/users/piEsposito/orgs",
"repos_url": "https://api.github.com/users/piEsposito/repos",
"events_url": "https://api.github.com/users/piEsposito/events{/privacy}",
"received_events_url": "https://api.github.com/users/piEsposito/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1, I believe you've been looking into something similar.",
"Hi @piEsposito, this is something we've been working on! Right now it's only available for BERT, but we intend to expand it to other models, particularly now that we're seeing interest in it. You can use the [TFBertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.TFBertTokenizer) class for this - check out [this gist](https://gist.github.com/Rocketknight1/b479d57e3d2f94420b11ca8d319cc68f) for an example of how to use it.\r\n\r\nIf you're using a different model class than BERT, or you have any difficulties when using this, please let us know! It's a recent feature in `transformers` so we're still looking for user feedback on it.",
"This is great, really, and exactly what I'm looking into. I'm specifically looking into doing that with CLIP, which uses BPE. Do you have anything on works on that? If not, how can I ramp-up on that and help?\r\n\r\nThanks! ",
"Hi @piEsposito - we think that should be possible, but we just haven't had time to implement any tokenizers of that class yet! If you look at the [source for TFBertTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert_tf.py#L67) you can see that we just use `FastBertTokenizer` from `tensorflow_text`. However, correctly implementing a BPE tokenizer that gives identical results to the existing tokenizers will probably be more complex than a single class.\r\n\r\nIf you want to attempt it, feel free! We'd be happy to accept a PR. If not, we'll still work on it ourselves, but there are several competing priorities for the TF team right now.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@Rocketknight1 Hi, it would be great to also add support for models including RoBerta etc., as they may use different methods for tokenization (e.g. BPE, Byte-level BPE, SentencePiece). TBH, I think that the tokenizers should be independent from models, because tokenizer - model is not a 1 : 1 mapping.",
"@jamie0725 That's a good point! For most of our tokenizers that's how we do things - we have a separate `tokenizers` library. Right now we're still experimenting with in-graph tokenizers, but we might move them to the `tokenizers` library at a later stage, and adding tokenizers for other common models like `RoBERTa` is definitely on the list too!\r\n\r\nThe main issue is just that we have a lot of competing priorities - we'll get to it eventually, but if anyone wants to submit a PR before then we'd be very happy to review it!",
"do you plan to add CLIPTokenizer support for TF serving?"
] | 1,659
| 1,683
| 1,662
|
CONTRIBUTOR
| null |
### Feature request
It would be great if we could serialize our tokenizers into the TF SavedModel format, so that we could deploy them to TF Serving without a handler to tokenize our inputs.
### Motivation
It is frustrating to have to write a Python, JS or Rust handler every time I want to deploy a huggingface/transformers model to TF Serving, and it would be great if we could just bake everything into a serialized model and seamlessly serve it with TF Serving.
### Your contribution
Sadly I can't submit a PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18431/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18431/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18430
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18430/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18430/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18430/events
|
https://github.com/huggingface/transformers/issues/18430
| 1,326,087,760
|
I_kwDOCUB6oc5PCn5Q
| 18,430
|
Increase notebooks page visibility
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I wouldn't merge the community notebooks page with the notebooks page, just make the link work again. I think it's important to separate what we officially support from what we don't. Organizing things a bit better would certainly be welcome, as always, happy to look at a PR!\r\n\r\nSame for fixing some of the notebooks if they don't run anymore. They are not tested like the examples scripts so it's possible that some API changes broke them.",
"Makes sense, I will reorganize the Community page and double check the official notebooks then!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
Hi, I was looking at the [notebooks page](https://huggingface.co/docs/transformers/notebooks) and I noticed that the community notebooks link is broken and some of the official notebooks are outdated and throw errors when run.
It would be great to reorganize the notebooks page such that:
1) The [community page ](https://huggingface.co/docs/transformers/v4.21.0/en/community) is renamed to Community Notebooks or merged with the Notebooks page
2) We add tags or organize the community notebooks page by task and topic (fine-tuning, image classification, etc.)
3) Existing official notebooks are updated
4) Notebooks page/s are promoted on the homepage
@NielsRogge @sgugger @amyeroberts @LysandreJik could you comment on this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18430/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18429
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18429/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18429/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18429/events
|
https://github.com/huggingface/transformers/pull/18429
| 1,326,087,102
|
PR_kwDOCUB6oc48hI1P
| 18,429
|
[TENTATIVE] Attempt to reduce number of HEAD calls during model warmup.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18429). All of your documentation changes will be reflected on that endpoint.",
"Ok, marking as draft while the other PR is being worked on.",
"@sgugger If you want to take a look a the tests.\r\n\r\nRight now the tests are failing since the cache was written for the previous HEAD code.\r\nWe can do the cache here or on `huggingface_hub`, I am not sure where is the most appropriate.\r\n\r\nThis PR now hold a few (relatively hackish) attempts to preserve information from the `user-agent`.\r\n\r\nThe one thing that's surpised me, is that the pipeline load the model use `from_auto_class/False` because it looks directly within the config to fetch the model with the correct head. While it's technically correct, I am not sure if that's correct \"intentionally\" or for telemetry purposes, since it is actually using a lot of magic.\r\n\r\nWDYT ?",
"> The caching part should go more in the huggingface_hub IMO, especially now that we rely on it for everything. But I also think people might have strong opinion on it (if a user just updated a model and don't see it downloaded for instance, they'll be mad and won't necessarily understand there is some cache to clear).\r\n\r\nI'll wait for you work for the cache, that should clear the need for it, and the tests here should cover, I'll update this PR to make sure user-agent is correct afterwards.\r\n\r\nI had a TTL of 10s for the cache which is largely enough in most cases (well you still would have multiple hit if you were actually downloading the files but .. )\r\n\r\nRemoving the need for the cache is the best solution, so let's go with that for now.",
"@Narsil in case it's relevant, I was observing a significant perf disparity with the following repro (all files have been cached) if I specify `TRANSFORMERS_OFFLINE=1` or not:\r\n\r\ntest.py:\r\n```python\r\nfrom transformers import pipeline\r\nnlp = pipeline(\"question-answering\", model='distilbert-base-cased-distilled-squad')\r\n```\r\n\r\n```bash\r\n$ time TRANSFORMERS_OFFLINE=1 python transformers_test.py\r\nvocab_file vocab.txt\r\ntokenizer_file tokenizer.json\r\nadded_tokens_file added_tokens.json\r\nspecial_tokens_map_file special_tokens_map.json\r\ntokenizer_config_file tokenizer_config.json\r\n\r\nreal 0m1.759s\r\nuser 0m1.815s\r\nsys 0m2.443s\r\n\r\n\r\n$ time python transformers_test.py\r\nvocab_file vocab.txt\r\ntokenizer_file tokenizer.json\r\nadded_tokens_file added_tokens.json\r\nspecial_tokens_map_file special_tokens_map.json\r\ntokenizer_config_file tokenizer_config.json\r\n\r\n\r\nreal 0m6.187s\r\nuser 0m2.184s\r\nsys 0m2.248s\r\n```\r\n\r\nI then searched around and stumbled upon your PR. I tried patching it down into a venv linked to my repo and saw that the time was roughly the same:\r\n\r\n```bash\r\n(venv) ankur-m1:~/projects/layoutlm ankur$ time python transformers_test.py\r\nvocab_file vocab.txt\r\ntokenizer_file tokenizer.json\r\nadded_tokens_file added_tokens.json\r\nspecial_tokens_map_file special_tokens_map.json\r\ntokenizer_config_file tokenizer_config.json\r\n\r\nreal 0m6.034s\r\nuser 0m2.019s\r\nsys 0m2.346s\r\n```\r\n\r\nI may have completely misunderstood the purpose of this change, so please ignore me if this comment is irrelevant, but since I was poking around with perf in a similar fashion, I thought I'd share if helpful!",
 "@ankrgyl\r\n\r\nThis PR caches files for a very small amount of time (10s) because most of the time users will want the new models when they exist.\r\n\r\nYou can try to increase the TTL if you want. \r\nAlso the cache isn't kept through different Python sessions.\r\n\r\nYou may want to check where the network overhead is occurring too; it could be DNS issues on your end, or just high latency with the HF servers.",
"Ahh, okay, that makes sense. Let me dig around a bit further with my repro, and see if I can find anything useful. Thanks for the pointers.",
"Btw @sgugger is working on a better fix we should reduce the amount of network calls as close to 1 as possible.",
"@sgugger if it's helpful to have a guinea pig test your PR in the wild, I'm happy to help! For background context, the reason I'm trying to optimize this cold start is that I'm trying to use transformers in a command line utility where cold start time matters quite a bit.",
"The PR is #18534, but nothing will beat using offline mode with the model cached, since you are then doing 0 calls to the API.",
"Thanks for the pointer @sgugger. I agree, however, the disparity exists even if you pin the revision:\r\n\r\n```\r\n$ cat transformers_test.py\r\nfrom transformers import pipeline\r\nnlp = pipeline(\"question-answering\", model='distilbert-base-cased-distilled-squad', revision=\"1b9d42b637aed70c9f3cd27e13b66ee9f847ed03\")\r\n\r\n$ time python transformers_test.py\r\n\r\nreal 0m5.680s\r\nuser 0m1.694s\r\nsys 0m2.252s\r\n\r\n$ time TRANSFORMERS_OFFLINE=1 python transformers_test.py\r\n\r\nreal 0m1.321s\r\nuser 0m1.653s\r\nsys 0m1.997s\r\n\r\n```\r\n\r\nWhile playing around with this, I noticed issue #18537, which might be leading to extra network calls (since the revision isn't pinned for the model) in my repro.\r\n\r\nApologies if I'm missing something obvious here, but I'd expect that (a) if the revision is specified and (b) it's cached, then there shouldn't be any network calls.",
"If the revision is specified as a commit sha, then yes, the cache should be used. This is not implemented by #18534 however, but could be some follow up work.\r\n\r\nThe only exception is for files that don't exist, as we don't know if they haven't been downloaded yet or if they are not present. That's why we still have extra calls in #18534 and would still have extra calls in this case as well.",
"Got it, I pulled down your PR and ran the same test, and saw much better results:\r\n\r\n```\r\n$ time python transformers_test.py\r\n\r\nreal 0m2.384s\r\nuser 0m1.869s\r\nsys 0m2.222s\r\n\r\n$ time TRANSFORMERS_OFFLINE=1 python transformers_test.py\r\n\r\nreal 0m1.588s\r\nuser 0m1.722s\r\nsys 0m2.229s\r\n```\r\n\r\nI'd be happy to help with the follow up work/exploration if helpful. I think you could theoretically handle the \"all files downloaded\" case too, by caching a file that simply marks _that_ you've downloaded all files associated with a revision.",
"Yes there is probably some way to cache that the file does not exist for a given commit sha. Pinged a few people internally to see if they like that idea and will let you know when I hear back!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
 "Unstale. I'll come back to this at some point.",
"There is only one call to head now once the model is cached @Narsil ",
"You're right, closing."
] | 1,659
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
# What does this PR do?
When loading models with the various tools within `transformers`, there are actually
a lot of duplicate HEAD calls that cost network time and inflate usage in an unnecessary fashion.
For instance, when doing a simple `pipeline(model="gpt2")`,
you get:
```
Fetching https://huggingface.co/gpt2/resolve/main/config.json
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.39k/1.39k [00:00<00:00, 1.20MB/s]
----
Fetching https://huggingface.co/gpt2/resolve/main/config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/pytorch_model.bin
----
Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/vocab.json
----
Fetching https://huggingface.co/gpt2/resolve/main/merges.txt
----
Fetching https://huggingface.co/gpt2/resolve/main/tokenizer.json
----
Fetching https://huggingface.co/gpt2/resolve/main/added_tokens.json
----
Fetching https://huggingface.co/gpt2/resolve/main/special_tokens_map.json
----
Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/config.json
```
So `config.json` receives 4 to 5 HEAD calls, and `tokenizer_config.json` receives 3.
Each of these is a call to the Hub that loads resources which could be avoided.
@Xcid
In addition, it adds a lot of noise to our logs, since we see many seemingly random duplicate
HEAD calls for the same code being run.
Fixing it "cleanly" is hard, since there are many code paths that load the various elements,
and auditing every single one of them is impractical.
The proposed fix is to simply introduce a `timed_cache` wrapper on top of the `requests.head`
function.
We can keep a very short TTL, since the cache only needs to deduplicate calls made within a window where the model is unlikely to have changed.
We need to keep Jupyter and other long-lived sessions in mind, so a TTL ensures model updates can still be seen and downloaded.
In addition to that, each code path seems to issue its HEAD call with a different user-agent, which (afaik) makes
it harder to understand our users' usage.
This is a tentative PR, proposed to reduce redundant network calls. If this is considered the correct direction,
I will then add unit tests for the `timed_cache` function.
After the PR:
```
----
Fetching https://huggingface.co/gpt2/resolve/main/config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/pytorch_model.bin
----
Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
----
Fetching https://huggingface.co/gpt2/resolve/main/vocab.json
----
Fetching https://huggingface.co/gpt2/resolve/main/merges.txt
----
Fetching https://huggingface.co/gpt2/resolve/main/tokenizer.json
----
Fetching https://huggingface.co/gpt2/resolve/main/added_tokens.json
----
Fetching https://huggingface.co/gpt2/resolve/main/special_tokens_map.json
```
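For illustration, the `timed_cache` idea described above could be sketched roughly as follows. This is a minimal stdlib-only sketch, not the PR's actual implementation: the decorator name, TTL value, and the stand-in for `requests.head` are assumptions.

```python
import functools
import time


def timed_cache(ttl=10.0):
    """Cache a function's results for `ttl` seconds, keyed by its arguments.

    Unlike functools.lru_cache, entries expire, so long-lived sessions
    (e.g. Jupyter) still pick up upstream model updates after `ttl`.
    """
    def decorator(func):
        cache = {}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            hit = cache.get(key)
            if hit is not None and now - hit[0] < ttl:
                return hit[1]  # fresh cached response: skip the network call
            result = func(*args, **kwargs)
            cache[key] = (now, result)
            return result

        return wrapper
    return decorator


# Example: deduplicate repeated HEAD requests to the same URL.
calls = []


@timed_cache(ttl=10.0)
def head(url):
    calls.append(url)  # stands in for the real requests.head(url)
    return f"response for {url}"


head("https://huggingface.co/gpt2/resolve/main/config.json")
head("https://huggingface.co/gpt2/resolve/main/config.json")
print(len(calls))  # only one real call went out
```

Within the TTL window the second call is served from the cache, which is exactly the deduplication shown in the before/after logs above.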
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18429/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18429",
"html_url": "https://github.com/huggingface/transformers/pull/18429",
"diff_url": "https://github.com/huggingface/transformers/pull/18429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18429.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18428
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18428/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18428/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18428/events
|
https://github.com/huggingface/transformers/pull/18428
| 1,326,063,760
|
PR_kwDOCUB6oc48hD65
| 18,428
|
Accept `trust_remote_code` and ignore it in `PreTrainedModel.from_pretrained`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger FYI:\r\n\r\nI applied the change to TF and Flax `PreTrainedModel` (which is necessary, at least for TF), as well as `PretrainedConfig`.\r\nLeave the tokenizer class and feature extractor class for now"
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
Hope my understanding is correct. Let me know if I should apply the same change to `ProcessorMixin`, `PreTrainedTokenizerBase` etc.
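For context, the accept-and-ignore behaviour can be sketched roughly like this (stdlib-only, with a hypothetical class name — not the actual `from_pretrained` implementation):

```python
import warnings


class PretrainedSketch:
    """Sketch of the accept-and-ignore pattern for `trust_remote_code`.
    Hypothetical class -- the real logic lives in `from_pretrained`."""

    @classmethod
    def from_pretrained(cls, checkpoint, *, trust_remote_code=None, **kwargs):
        # Accept the flag so callers can pass it uniformly, but since this
        # class has no remote code to trust, warn and drop it.
        if trust_remote_code is not None:
            warnings.warn(
                "`trust_remote_code` has no effect for this class and is ignored."
            )
        return cls(checkpoint), kwargs

    def __init__(self, checkpoint):
        self.checkpoint = checkpoint


model, leftover = PretrainedSketch.from_pretrained(
    "facebook/bart-base", trust_remote_code=True
)
```

The point is that passing the flag to a class without custom remote code no longer raises an unexpected-keyword error; the flag is simply consumed.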
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18428/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18428",
"html_url": "https://github.com/huggingface/transformers/pull/18428",
"diff_url": "https://github.com/huggingface/transformers/pull/18428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18428.patch",
"merged_at": 1659467039000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18427
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18427/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18427/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18427/events
|
https://github.com/huggingface/transformers/pull/18427
| 1,326,040,773
|
PR_kwDOCUB6oc48g-8d
| 18,427
|
fix: data2vec-vision Onnx ready-made configuration.
|
{
"login": "NikeNano",
"id": 22057410,
"node_id": "MDQ6VXNlcjIyMDU3NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeNano",
"html_url": "https://github.com/NikeNano",
"followers_url": "https://api.github.com/users/NikeNano/followers",
"following_url": "https://api.github.com/users/NikeNano/following{/other_user}",
"gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions",
"organizations_url": "https://api.github.com/users/NikeNano/orgs",
"repos_url": "https://api.github.com/users/NikeNano/repos",
"events_url": "https://api.github.com/users/NikeNano/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikeNano/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lewtun @LysandreJik maybe you could take a look or point me to the correct person. Thanks!",
"cc @michaelbenayoun @JingyaHuang as Lewis is off for a few weeks :)",
"What is the best practice in this case for the test that failed? As far as I an see it is not related to the changes? How do I rerun it? @JingyaHuang thanks!",
"> What is the best practice in this case for the test that failed? As far as I an see it is not related to the changes? How do I rerun it? @JingyaHuang thanks!\r\n\r\nCan you rebase your branch with the main branch of transformers and re-launch the failed test?",
"LGTM!\r\nHave you ran this command?\r\n```\r\nRUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py \r\n```\r\nI was able to export the model on your branch, with your command, but I want to make sure all the tests pass before merging.",
"> LGTM! Have you ran this command?\r\n> \r\n> ```\r\n> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py \r\n> ```\r\n> \r\n> I was able to export the model on your branch, with your command, but I want to make sure all the tests pass before merging.\r\n\r\nI missed this but will run this tomorrow and fix it if it needs to!"
] | 1,659
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the missing ready-made ONNX configuration for data2vec-vision (image models). The [docs](https://huggingface.co/docs/transformers/serialization) state that a default config exists for `facebook/data2vec-vision-base`, but the export currently fails with:
```
...
File "/transformers/src/transformers/onnx/features.py", line 486, in get_supported_features_for_model_type
raise KeyError(
KeyError: "data2vec-vision is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'ibert', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'marian', 'mbart', 'mobilebert', 'mobilevit', 'm2m-100', 'perceiver', 'resnet', 'roberta', 'roformer', 'squeezebert', 't5', 'vit', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support data2vec-vision please propose a PR or open up an issue."
```
To reproduce using Docker:
```bash
docker run -it huggingface/transformers-all-latest-gpu /bin/bash
```
```bash
python3 -m transformers.onnx --model=facebook/data2vec-vision-base onnx/
```
Should I create an issue for this and link to it?
Thanks for the help!
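For illustration, the lookup that raises this `KeyError` follows a simple registry pattern, and the fix roughly amounts to registering the missing model type. This is a simplified sketch with made-up entries, not the actual `FeaturesManager` code:

```python
# Simplified sketch of the model-type -> supported-features registry that
# produces the KeyError above (made-up entries; not the real FeaturesManager).
_SUPPORTED_MODEL_TYPE = {
    "beit": ("default", "image-classification"),
    "vit": ("default", "image-classification"),
}


def get_supported_features_for_model_type(model_type: str):
    if model_type not in _SUPPORTED_MODEL_TYPE:
        raise KeyError(
            f"{model_type} is not supported yet. "
            f"Only {sorted(_SUPPORTED_MODEL_TYPE)} are supported. "
            f"If you want to support {model_type} please propose a PR."
        )
    return _SUPPORTED_MODEL_TYPE[model_type]


# The fix amounts to registering the missing entry:
_SUPPORTED_MODEL_TYPE["data2vec-vision"] = ("default", "image-classification")
```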
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18427/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18427",
"html_url": "https://github.com/huggingface/transformers/pull/18427",
"diff_url": "https://github.com/huggingface/transformers/pull/18427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18427.patch",
"merged_at": 1660030506000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18426
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18426/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18426/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18426/events
|
https://github.com/huggingface/transformers/pull/18426
| 1,325,980,678
|
PR_kwDOCUB6oc48gx9W
| 18,426
|
Add Speech-to-Speech Translation (S2ST)
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Pipeline:\r\n\r\n\r\n\r\nThis is effectively an encoder-decoder-vocoder configuration:\r\n\r\n- The feature extractor normalises the audio inputs.\r\n- The encoder maps the (normalised) audio inputs to a sequence of encoder hidden-states.\r\n- The decoder auto-regressively generates a sequence of tokens (interpreted as speech βhidden unitsβ).\r\n- The vocoder maps these discrete tokens to a sequence of continuous audio outputs.\r\n\r\nThe encoder-decoder portion is a standard seq2seq mapping, entirely equivalent to the [speech encoder-decoder model](https://huggingface.co/docs/transformers/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) we have in Transformers. The vocoder aspect is new.\r\n\r\n",
"Question 1 - Explicit modelling code?\r\n\r\nThe pre-trained checkpoints only use a Wav2Vec2 encoder and mBART decoder (see [enhanced_direct_s2st_discrete_units.md](https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/enhanced_direct_s2st_discrete_units.md#finetuned-model-checkpoints)). There are no other encoder or decoder checkpoints/architectures used. Thus, we have two options:\r\n\r\n1. Explicitly add the modelling code for Wav2Vec2 and the mBART decoder to `modeling_speech_to_speech.py` -> there are no abstractions, all the relevant modelling code for the encoder and decoder is in one file\r\n2. Follow whatβs done in speech encoder-decoder model and add modelling code for a generic encoder and generic decoder -> there is one layer of abstraction, neither the modelling code for the encoder or decoder are in `modeling_speech_to_speech.py`, they are instead called through AutoModel:\r\nhttps://github.com/huggingface/transformers/blob/6dda14dc47d82f0e32df05fea8ba6444ba52b90a/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L213-L217\r\n-> this is quite nasty as it abstracts the encoder and decoder, when really they can be in the same file if we know that the encoder is always Wav2Vec2 and the decoder always mBART.\r\n\r\nOption 1 is more clear to the user -> all the relevant code sits in the relevant modelling file. \r\n\r\nOption 2 is what we have for speech encoder decoder. It facilitates for different combinations of models should people wish to train different encoder-decoder combos themselves, but this is highly unlikely to ever happen! Option 2 gives one layer of abstraction that makes the code much harder to follow -> this is similar to what they do in fariseq, and itβs a struggle to jump around and find the right modelling files. \r\n\r\nWe currently have option 2 in the PR, but my preference would be for 1 unless there are any objections.",
"Question 2 - Where should the vocoder go?\r\n\r\nThe model is trained to predict target tokens (speech βhidden unitsβ). These target tokens are converted to continuous speech by action of a vocoder. This vocoder is not trained. It is loaded standalone to the seq2seq model after training.\r\n\r\n\r\n\r\nShould the vocoder be included in the modelling file as part of the pre-trained model? Or should it operate in a similar way to a tokenizer (an object that isnβt trained, purely used to map generated tokens to the final output)?\r\n\r\nMy preference would be to include it in the modelling file as an `nn.Module`. If we go for option 1 from the previous question (explicitly adding the Wav2Vec2 and mBART code to modling_speech_to_speech), we would then have the following structure:\r\n\r\n- Wav2Vec2 encoder\r\n\r\n- mBART decoder\r\n\r\n- CodeHiFiGAN vocoder\r\n\r\n- SpeechToSpeechTranslationModel (Wav2Vec2 encoder - mBART decoder)\r\n\r\n- SpeechToSpeechTranslationWithCodeHiFiGANVocoder (encoder-decoder-vocoder)\r\n\r\n -> this design treats the vocoder like a head to the base model\r\n",
"Question 3 - What about the configs?\r\n\r\nSpeech encoder decoder partitions its configuration into an encoder config and decoder config (see [speech-encoder-decoder-config](https://huggingface.co/docs/transformers/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)):\r\n\r\n* Encoder config\r\n* Decoder config\r\n\r\nDo we do the same with the encoder-decoder-vocoder model:\r\n\r\n* Encoder config\r\n* Decoder config\r\n* Vocoder config -> note: not used for the SpeechToUnitTranslationModel, only the SpeechToSpeechWithCodeHiFiGANVocoder. Weβll have to handle it differently in each case.\r\n\r\nOr combine them into a single config file for all the modelling components (as is done with [T5Config](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Config) for example, the encoder and decoder parts of the config are prefixed by `encoder_` and `decoder_` respectively).\r\n\r\nMy preference would be for combining them into a single config and prefixing by `encoder_`, `decoder_` and `vocoder_` -> I think this is cleaner than having three different sub-configs on the go.",
"Question 4 - How to make compatible with generation?\r\n\r\nThis model currently isnβt a good fit for Transformers with regards to generation: we want to auto-regressively generate using the decoder and then pass the generation outputs through **another** stage of the model (vocoder) -> this currently isn't possible with `.generate` alone. We either have to add this functionality to generate, or override the generate method for SpeechToSpeechWithCodeHiFiGANVocoder.\r\n\r\nWe're currently doing something very hacky to make this work:\r\nhttps://github.com/huggingface/transformers/blob/43b744283b57d156845af0eda77b806b88957395/src/transformers/models/speech_to_speech/modeling_speech_to_speech.py#L1012-L1022",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the S2ST models from the paper [Enhanced Direct Speech-to-Speech Translation Using Self-supervised Pre-training and Data Augmentation (Popuri et al. 2022)](https://arxiv.org/abs/2204.02967).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18426/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/18426/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18426",
"html_url": "https://github.com/huggingface/transformers/pull/18426",
"diff_url": "https://github.com/huggingface/transformers/pull/18426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18426.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18425
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18425/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18425/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18425/events
|
https://github.com/huggingface/transformers/issues/18425
| 1,325,929,386
|
I_kwDOCUB6oc5PCBOq
| 18,425
|
BartLearnedPositionalEmbedding's forward method signature obstructs private (Opacus) training of BART
|
{
"login": "donebydan",
"id": 15520428,
"node_id": "MDQ6VXNlcjE1NTIwNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15520428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donebydan",
"html_url": "https://github.com/donebydan",
"followers_url": "https://api.github.com/users/donebydan/followers",
"following_url": "https://api.github.com/users/donebydan/following{/other_user}",
"gists_url": "https://api.github.com/users/donebydan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donebydan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donebydan/subscriptions",
"organizations_url": "https://api.github.com/users/donebydan/orgs",
"repos_url": "https://api.github.com/users/donebydan/repos",
"events_url": "https://api.github.com/users/donebydan/events{/privacy}",
"received_events_url": "https://api.github.com/users/donebydan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Tagging @sgugger as you have previously shown support for private training of HF models via Opacus.",
"Happy to look at a PR that fixes the issue!",
"Fantastic, I'll create it now π",
"@donebydan Hi, have you generated the fine-tuned BART with OPACUS? I'm working on it and changed the code to a merged one. But the model generation is weird like repeating `the the..`",
"Hi @SeolhwaLee, we are integrating a BART with Opacus example in our [`dp-transformers`](https://github.com/microsoft/dp-transformers) library. It is [this PR](https://github.com/microsoft/dp-transformers/pull/5), but it is pending some updates to newer Opacus (1.13) and HF versions right now."
] | 1,659
| 1,666
| 1,660
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.9.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (NA)
- Using distributed or parallel set-up in script?: no (NA)
### Who can help?
Tagging @patil-suraj as BART model owner.
Details:
The signature of `BartLearnedPositionalEmbedding`'s forward method takes an input of type `torch.Size`, which breaks in Opacus. The reason is that Opacus makes a (reasonable) assumption that all layers take inputs of type `torch.Tensor`.
In particular, `opacus/grad_sample/grad_sample_module.py` line 190 (the `capture_activations_hook` method) tries to detach the input from the device via:
`module.activations.append(forward_input[0].detach())`
which fails because `torch.Size` has no `detach` method. If we pass the tensor instead, this will allow fine-tuning BART-type summarization models with differential privacy.
Only a few lines of code need to be changed in `modeling_bart.py`: the signature of `BartLearnedPositionalEmbedding.forward()` and the call sites of this method.
I already have a change implemented with BART-related tests passing. More than happy to create a PR which I can tag you in @patil-suraj.
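A rough sketch of the proposed signature change (stdlib-only stand-ins for torch objects; names are illustrative, not the actual implementation):

```python
class FakeTensor:
    """Tiny stand-in for torch.Tensor: just enough for the sketch."""

    def __init__(self, shape):
        self.shape = shape

    def detach(self):  # what Opacus's capture_activations_hook calls
        return self


class LearnedPositionalEmbeddingSketch:
    # Before: forward(self, input_shape: torch.Size) -- the hook breaks,
    # because torch.Size has no .detach().
    # After: forward(self, input_ids: torch.Tensor) -- the shape is derived
    # inside, and the hook's forward_input[0].detach() succeeds.
    def forward(self, input_ids, past_key_values_length=0):
        bsz, seq_len = input_ids.shape[:2]
        # Stand-in for the position lookup; the real layer indexes an
        # nn.Embedding with these positions.
        return list(range(past_key_values_length,
                          past_key_values_length + seq_len))


layer = LearnedPositionalEmbeddingSketch()
out = layer.forward(FakeTensor((2, 5)))
```

Since the tensor itself is now the forward input, Opacus's activation hook can call `.detach()` on it without any changes on the Opacus side.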
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers.models.bart.modeling_bart import BartLearnedPositionalEmbedding
from opacus.tests.grad_samples.common import GradSampleHooks_test


class TestPositionalEmbedding(GradSampleHooks_test):
    def test_grad_sample(self):
        """
        Verify that our custom implementation of the grad sample for huggingface's
        BartLearnedPositionalEmbedding layer works. Built on the test routines in opacus's library.
        """
        register_grad_sampler()
        batch_size = 1
        max_pos_embs = 10
        embed_dim = 3
        x = torch.randint(0, max_pos_embs - 1, (batch_size, embed_dim))
        layer = BartLearnedPositionalEmbedding(max_pos_embs, embed_dim)
        self.run_test(x, layer, batch_first=True)
```
where a custom `register_grad_sampler()` method is called for `BartLearnedPositionalEmbedding` layer.
### Expected behavior
Test above should pass.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18425/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18424
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18424/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18424/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18424/events
|
https://github.com/huggingface/transformers/issues/18424
| 1,325,916,378
|
I_kwDOCUB6oc5PB-Da
| 18,424
|
Cannot replicate T5 performance on WMT14
|
{
"login": "ekurtulus",
"id": 66876436,
"node_id": "MDQ6VXNlcjY2ODc2NDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/66876436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekurtulus",
"html_url": "https://github.com/ekurtulus",
"followers_url": "https://api.github.com/users/ekurtulus/followers",
"following_url": "https://api.github.com/users/ekurtulus/following{/other_user}",
"gists_url": "https://api.github.com/users/ekurtulus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekurtulus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekurtulus/subscriptions",
"organizations_url": "https://api.github.com/users/ekurtulus/orgs",
"repos_url": "https://api.github.com/users/ekurtulus/repos",
"events_url": "https://api.github.com/users/ekurtulus/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekurtulus/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @ekurtulus, such a low BLEU score looks indeed suspicious! Do you have any training stats / logs / graphs to share? ",
"Just a tip: It might be a good idea to save the predictions (here the translation) during evaluation, so we can look into them to see what might goes wrong.\r\n\r\nWhen saving the translation, it's better to save the source text and the label (target text) too. I do this in a manual way though, this is not directly available in the official training scripts.",
"Sorry for being late. I will take a look.",
"> Hey @ekurtulus, such a low BLEU score looks indeed suspicious! Do you have any training stats / logs / graphs to share? \n\nMy experiments are on an HPC system, so since it's been a while, I unfortunately do not have the logs or the graphs.",
"@patrickvonplaten @patil-suraj Do you know if `--dataset_name stas/wmt14-en-de-pre-processed` (which is pre-processed using a script from fairseq) is the good dataset for T5 (En -> German)?\r\n\r\n`T5` is from Google, and in the paper, I can't find any mention of `fairseq`. I think T5 doesn't use this particular pre-processing, but I am not 100% sure.\r\n",
"@ekurtulus I also think the checkpoints `t5-small`, `t5-base` etc. have been trained on WMT / CNN Dailymail datasets, as shown in the code snippet below. So using those checkpoints to replicate the results (by finetuning on those datasets) doesn't really make sense IMO.\r\n\r\n### Code snippet\r\n\r\n```python\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"t5-small\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\r\n\r\ninputs = tokenizer(\r\n \"translate English to German: I am a good student.\",\r\n return_tensors=\"pt\",\r\n)\r\noutputs = model.generate(inputs[\"input_ids\"], max_length=64, num_beams=4, early_stopping=True)\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n\r\ninputs = tokenizer(\r\n \"translate English to French: I am a good student.\",\r\n return_tensors=\"pt\",\r\n)\r\noutputs = model.generate(inputs[\"input_ids\"], max_length=64, num_beams=4, early_stopping=True)\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"t5-base\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-base\")\r\n\r\ninputs = tokenizer(\r\n \"\"\"WASHINGTON (CNN) -- Doctors removed five small polyps from President Bush's colon on Saturday, and \"none appeared worrisome,\" a White House spokesman said. The polyps were removed and sent to the National Naval Medical Center in Bethesda, Maryland, for routine microscopic examination, spokesman Scott Stanzel said. Results are expected in two to three days. All were small, less than a centimeter [half an inch] in diameter, he said. Bush is in good humor, Stanzel said, and will resume his activities at Camp David. During the procedure Vice President Dick Cheney assumed presidential power. Bush reclaimed presidential power at 9:21 a.m. after about two hours. 
Doctors used \"monitored anesthesia care,\" Stanzel said, so the president was asleep, but not as deeply unconscious as with a true general anesthetic. He spoke to first lady Laura Bush -- who is in Midland, Texas, celebrating her mother's birthday -- before and after the procedure, Stanzel said. Afterward, the president played with his Scottish terriers, Barney and Miss Beazley, Stanzel said. He planned to have lunch at Camp David and have briefings with National Security Adviser Stephen Hadley and White House Chief of Staff Josh Bolten, and planned to take a bicycle ride Saturday afternoon. Cheney, meanwhile, spent the morning at his home on Maryland's eastern shore, reading and playing with his dogs, Stanzel said. Nothing occurred that required him to take official action as president before Bush reclaimed presidential power. The procedure was supervised by Dr. Richard Tubb, Bush's physician, and conducted by a multidisciplinary team from the National Naval Medical Center in Bethesda, Maryland, the White House said. Bush's last colonoscopy was in June 2002, and no abnormalities were found, White House spokesman Tony Snow said. The president's doctor had recommended a repeat procedure in about five years. A colonoscopy is the most sensitive test for colon cancer, rectal cancer and polyps, small clumps of cells that can become cancerous, according to the Mayo Clinic. Small polyps may be removed during the procedure. Snow said on Friday that Bush had polyps removed during colonoscopies before becoming president. Snow himself is undergoing chemotherapy for cancer that began in his colon and spread to his liver. Watch Snow talk about Bush's procedure and his own colon cancer Β» . \"The president wants to encourage everybody to use surveillance,\" Snow said. The American Cancer Society recommends that people without high risk factors or symptoms begin getting screened for signs of colorectal cancer at age 50. 
E-mail to a friend .\"\"\",\r\n    return_tensors=\"pt\",\r\n)\r\noutputs = model.generate(inputs[\"input_ids\"], max_length=64, num_beams=4, early_stopping=True)\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n```\r\n\r\n### Outputs\r\n\r\n```bash\r\nIch bin ein guter Student.\r\nJe suis un bon étudiant.\r\n```\r\n\r\n```bash\r\nfive small polyps were removed from president Bush's colon on Saturday. none of the polyps appeared worrisome, a white house spokesman said. During the procedure, vice president Dick Cheney assumed presidential power.\r\n```",
"> @patrickvonplaten @patil-suraj Do you know if `--dataset_name stas/wmt14-en-de-pre-processed` (which is pre-processed using a script from fairseq) is the good dataset for T5 (En -> German)?\r\n> \r\n> `T5` is from Google, and in the paper, I can't find any mention of `fairseq`. I think T5 doesn't use this particular pre-processing, but I am not 100% sure.\r\n\r\nFairseq preprocessed version is suggested [at the official repository](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation).",
"> @ekurtulus I also think the checkpoints `t5-small`, `t5-base` etc. have been trained on WMT / CNN Dailymail datasets, as shown in the code snippet below. So using those checkpoints to replicate the results (by finetuning on those datasets) doesn't really make sense IMO.\r\n> \r\n> ### Code snippet\r\n> ```python\r\n> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n> \r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"t5-small\")\r\n> tokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\r\n> \r\n> inputs = tokenizer(\r\n> \"translate English to German: I am a good student.\",\r\n> return_tensors=\"pt\",\r\n> )\r\n> outputs = model.generate(inputs[\"input_ids\"], max_length=64, num_beams=4, early_stopping=True)\r\n> print(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n> \r\n> inputs = tokenizer(\r\n> \"translate English to French: I am a good student.\",\r\n> return_tensors=\"pt\",\r\n> )\r\n> outputs = model.generate(inputs[\"input_ids\"], max_length=64, num_beams=4, early_stopping=True)\r\n> print(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n> \r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"t5-base\")\r\n> tokenizer = AutoTokenizer.from_pretrained(\"t5-base\")\r\n> \r\n> inputs = tokenizer(\r\n> \"\"\"WASHINGTON (CNN) -- Doctors removed five small polyps from President Bush's colon on Saturday, and \"none appeared worrisome,\" a White House spokesman said. The polyps were removed and sent to the National Naval Medical Center in Bethesda, Maryland, for routine microscopic examination, spokesman Scott Stanzel said. Results are expected in two to three days. All were small, less than a centimeter [half an inch] in diameter, he said. Bush is in good humor, Stanzel said, and will resume his activities at Camp David. During the procedure Vice President Dick Cheney assumed presidential power. Bush reclaimed presidential power at 9:21 a.m. after about two hours. 
Doctors used \"monitored anesthesia care,\" Stanzel said, so the president was asleep, but not as deeply unconscious as with a true general anesthetic. He spoke to first lady Laura Bush -- who is in Midland, Texas, celebrating her mother's birthday -- before and after the procedure, Stanzel said. Afterward, the president played with his Scottish terriers, Barney and Miss Beazley, Stanzel said. He planned to have lunch at Camp David and have briefings with National Security Adviser Stephen Hadley and White House Chief of Staff Josh Bolten, and planned to take a bicycle ride Saturday afternoon. Cheney, meanwhile, spent the morning at his home on Maryland's eastern shore, reading and playing with his dogs, Stanzel said. Nothing occurred that required him to take official action as president before Bush reclaimed presidential power. The procedure was supervised by Dr. Richard Tubb, Bush's physician, and conducted by a multidisciplinary team from the National Naval Medical Center in Bethesda, Maryland, the White House said. Bush's last colonoscopy was in June 2002, and no abnormalities were found, White House spokesman Tony Snow said. The president's doctor had recommended a repeat procedure in about five years. A colonoscopy is the most sensitive test for colon cancer, rectal cancer and polyps, small clumps of cells that can become cancerous, according to the Mayo Clinic. Small polyps may be removed during the procedure. Snow said on Friday that Bush had polyps removed during colonoscopies before becoming president. Snow himself is undergoing chemotherapy for cancer that began in his colon and spread to his liver. Watch Snow talk about Bush's procedure and his own colon cancer » . \"The president wants to encourage everybody to use surveillance,\" Snow said. The American Cancer Society recommends that people without high risk factors or symptoms begin getting screened for signs of colorectal cancer at age 50. 
E-mail to a friend .\"\"\",\r\n>     return_tensors=\"pt\",\r\n> )\r\n> outputs = model.generate(inputs[\"input_ids\"], max_length=64, num_beams=4, early_stopping=True)\r\n> print(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n> ```\r\n> \r\n> ### Outputs\r\n> ```shell\r\n> Ich bin ein guter Student.\r\n> Je suis un bon étudiant.\r\n> ```\r\n> \r\n> ```shell\r\n> five small polyps were removed from president Bush's colon on Saturday. none of the polyps appeared worrisome, a white house spokesman said. During the procedure, vice president Dick Cheney assumed presidential power.\r\n> ```\r\n\r\nWhich checkpoint should I use then?\r\n",
"> > @patrickvonplaten @patil-suraj Do you know if `--dataset_name stas/wmt14-en-de-pre-processed` (which is pre-processed using a script from fairseq) is the good dataset for T5 (En -> German)?\r\n> > `T5` is from Google, and in the paper, I can't find any mention of `fairseq`. I think T5 doesn't use this particular pre-processing, but I am not 100% sure.\r\n> \r\n> Fairseq preprocessed version is suggested [at the official repository](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation).\r\n\r\nI think my colleagues @patil-suraj and @patrickvonplaten are the best persons for this question. The trainer script could work with several models (T5, Bart, etc.). Bart is from facebook/fairseq (so probably used the pre-processed dataset), but T5 is from Google. I am not 100% sure if the combination `stas/wmt14-en-de-pre-processed` + `T5` is the best choice to compare against the original T5 checkpoint performance (which seems to be trained already on the translation task).\r\n\r\nIf you would like to, one thing you could try is to measure the T5 checkpoint performance against the original [WMT14 dataset](https://huggingface.co/datasets/wmt14) without any finetuning. And probably against the preprocessed dataset version too. From there, we might get better ideas.",
"Note that we cannot guarantee perfect replication of all models for every result in their respective paper. Given the extremely low results of your training, though, there is probably a bug. \r\n\r\nHere I'd suggest trying out different learning rates and learning rate schedulers (e.g. --lr_scheduler_type constant looks weird to me, I think a linear decrease makes more sense). Also note that the original model was trained on TPU with Tensorflow in bfloat16 whereas here we're training on GPU with PyTorch. Good that you have an A100 - could you try simply using:\r\n- AdamW (not adafactor as we don't have the official implementation)\r\n- linear warmup + linear descent for learning rate scheduler\r\n\r\ninstead?\r\n\r\n",
"Agree with @patrickvonplaten, especially for the `AdamW` optimizer.\r\n\r\nI think [Hugging Face Forums](https://discuss.huggingface.co/) would be a better place for this question - if you want to post there too. If a bug (say in the model or in the training script) is found, don't hesitate to report here :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\nI am trying to reproduce the performance of transformer-base (from attention is all you need) on WMT14.\r\nI am using FSMT because I cannot find an implementation of the transformer.\r\nI was wondering which dataset and tokenizer are the best choices. \r\n1. `stas/wmt14-en-de-pre-processed` with `facebook/wmt19-en-de`\r\n2. `wmt14` with `facebook/wmt19-en-de`\r\nEspecially, I do not know which tokenizer should be used.\r\n\r\nThanks in advance if you could provide some suggestions!",
"unstale"
] | 1,659
| 1,669
| 1,667
|
NONE
| null |
### System Info
I am trying to replicate T5 finetuning on WMT with the following hyperparameters (as close as possible to the paper https://www.jmlr.org/papers/volume21/20-074/20-074.pdf):
--model_name_or_path t5-small
--source_lang en
--target_lang de
--dataset_name stas/wmt14-en-de-pre-processed
--max_source_length 512
--max_target_length 512
--val_max_target_length 512
--source_prefix="translate English to German: "
--predict_with_generate
--save_steps 5000
--eval_steps 5000
--learning_rate 0.001
--max_steps 262144
--optim adafactor
--lr_scheduler_type constant
--gradient_accumulation_steps 2 --per_device_train_batch_size 64
However, the best model performance I get is around 13 BLEU, whereas the BLEU reported in the paper is around 27. Any comments on how to fix this?
Script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py
Environment:
- `transformers` version: 4.20.1
- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes - A100
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten, @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the script with the hyperparameters above : https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py
### Expected behavior
BLEU score should be around 27.
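For context on what that number measures: BLEU combines clipped n-gram precisions with a brevity penalty. The toy function below is a simplified, self-contained stand-in for illustration only (no smoothing, single reference); real comparisons against the paper should use `sacrebleu`, which the example script computes:

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def simple_bleu(hypothesis, reference, max_n=2):
    """Toy BLEU-like score: geometric mean of clipped n-gram precisions
    times a brevity penalty. Illustrative only; use sacrebleu for real
    evaluations."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        precisions.append(clipped / max(sum(hyp_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Penalize hypotheses shorter than the reference.
    brevity = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity * geo_mean


# "the" appears twice in the hypothesis but once in the reference,
# so the unigram precision is clipped to 2/3.
print(simple_bleu("the the cat", "the cat sat"))
```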
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18424/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18423
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18423/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18423/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18423/events
|
https://github.com/huggingface/transformers/pull/18423
| 1,325,885,532
|
PR_kwDOCUB6oc48gdfU
| 18,423
|
update maskformer docs
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,665
| 1,659
|
CONTRIBUTOR
| null |
- Updates the MaskFormer docs: _is_thing_map_ -> _label_ids_to_fuse_
See this [issue](https://github.com/huggingface/transformers/issues/18157).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18423/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18423",
"html_url": "https://github.com/huggingface/transformers/pull/18423",
"diff_url": "https://github.com/huggingface/transformers/pull/18423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18423.patch",
"merged_at": 1659455038000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18422
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18422/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18422/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18422/events
|
https://github.com/huggingface/transformers/pull/18422
| 1,325,843,007
|
PR_kwDOCUB6oc48gUO6
| 18,422
|
Fix `test_load_default_pipelines_tf` test error
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
My change in #18292 requires adding `tf` under the `default` key (for `image-classification`), otherwise we have
```bash
FAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf
```
with error message
```bash
else:
# normal case - non-translation pipeline
> model_id, revision = task_dict["default"]["model"][framework]
E KeyError: 'tf'
```
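The failing lookup is a plain nested-dict access: each task's `default` entry maps framework keys (`"pt"`, `"tf"`) to a `(model_id, revision)` pair. The sketch below only mimics that shape; it is not the actual task table in `transformers`, and the real fix in this PR is registering the missing `tf` entry rather than catching the error:

```python
# Illustrative stand-in for the default-task table (not the real contents).
TASKS = {
    "image-classification": {
        "default": {"model": {"pt": ("google/vit-base-patch16-224", "main")}}
    }
}


def resolve_default(task, framework):
    """Look up the default (model_id, revision) for a task and framework."""
    defaults = TASKS[task]["default"]["model"]
    if framework not in defaults:
        # Without a registered "tf" entry this is exactly the KeyError above.
        raise KeyError(
            f"no default {framework!r} checkpoint for task {task!r}; "
            f"available: {sorted(defaults)}"
        )
    return defaults[framework]
```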
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18422/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18422",
"html_url": "https://github.com/huggingface/transformers/pull/18422",
"diff_url": "https://github.com/huggingface/transformers/pull/18422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18422.patch",
"merged_at": 1659459070000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18421
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18421/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18421/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18421/events
|
https://github.com/huggingface/transformers/pull/18421
| 1,325,614,649
|
PR_kwDOCUB6oc48fjCC
| 18,421
|
Change audio kwarg to images in TROCR processor
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
Fix a bug in TROCR processor introduced in #18325
Currently, we have [failed job run](https://github.com/huggingface/transformers/runs/7603716998?check_suite_focus=true) with error
```bash
if audio is None and text is None:
> raise ValueError("You need to specify either an `audio` or `text` input to process.")
E ValueError: You need to specify either an `audio` or `text` input to process.
```
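The underlying bug class is a keyword mismatch: the processor's guard checks the parameter name it expects, so an image passed under a mis-named keyword still trips the `None` check even though an input was supplied. A toy sketch with the corrected `images` keyword (a hypothetical class, not the actual `TrOCRProcessor`):

```python
class ToyProcessor:
    """Hypothetical processor illustrating the guard: once the first
    keyword is named `images`, an image input no longer falls through
    the None check as it did when the parameter was named `audio`."""

    def __call__(self, images=None, text=None):
        if images is None and text is None:
            raise ValueError(
                "You need to specify either an `images` or `text` input to process."
            )
        if images is not None:
            return {"pixel_values": images}
        return {"labels": text}
```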
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18421/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18421",
"html_url": "https://github.com/huggingface/transformers/pull/18421",
"diff_url": "https://github.com/huggingface/transformers/pull/18421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18421.patch",
"merged_at": 1659445486000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18420
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18420/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18420/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18420/events
|
https://github.com/huggingface/transformers/pull/18420
| 1,325,577,554
|
PR_kwDOCUB6oc48fbE1
| 18,420
|
add `transformers-cli pt-to-flax`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18420). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,669
| 1,669
|
COLLABORATOR
| null |
# What does this PR do?
Following the addition of the `transformers-cli pt-to-tf` command, this PR uses the same script to convert to `FLAX`. It depends on [another PR](https://github.com/huggingface/transformers/pull/18419)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18420/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18420",
"html_url": "https://github.com/huggingface/transformers/pull/18420",
"diff_url": "https://github.com/huggingface/transformers/pull/18420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18420.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18419
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18419/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18419/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18419/events
|
https://github.com/huggingface/transformers/pull/18419
| 1,325,576,361
|
PR_kwDOCUB6oc48fa0S
| 18,419
|
Load sharded pt to flax
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Okay ππ» Thanks for the review gonna fix that asap ! "
] | 1,659
| 1,660
| 1,660
|
COLLABORATOR
| null |
# What does this PR do?
Add conversion to `flax` from sharded `pytorch` checkpoints. Follows #18026, which was closed to rename the branch (not really necessary, sorry for the inconvenience). Should fix #17537
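Conceptually, supporting sharded checkpoints means reading the shard index (a map from parameter name to shard file), loading each shard's partial state dict, and merging them before the PyTorch-to-Flax weight conversion runs. A framework-free sketch of the merge step, under the assumption of a flat index (the real `pytorch_model.bin.index.json` nests this map under a `weight_map` key):

```python
def merge_shards(index, load_shard):
    """Merge sharded partial state dicts into one flat dict.

    index: {param_name: shard_filename} (illustrative, flat format).
    load_shard: callable mapping a shard filename to its partial state dict.
    """
    merged = {}
    # Load each distinct shard file once and accumulate its parameters.
    for shard_file in sorted(set(index.values())):
        merged.update(load_shard(shard_file))
    # Sanity check: every parameter named in the index must be present.
    missing = set(index) - set(merged)
    if missing:
        raise ValueError(f"parameters missing from shards: {sorted(missing)}")
    return merged
```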
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18419/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18419",
"html_url": "https://github.com/huggingface/transformers/pull/18419",
"diff_url": "https://github.com/huggingface/transformers/pull/18419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18419.patch",
"merged_at": 1660290490000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18418
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18418/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18418/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18418/events
|
https://github.com/huggingface/transformers/pull/18418
| 1,325,555,228
|
PR_kwDOCUB6oc48fWML
| 18,418
|
Fix the hub user name in a longformer doctest checkpoint
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Checked with the last run (May 18th -- sorry I should check doctest status much earlier), it worked with the previous checkpoint string, which suggests that the user renamed since then.\r\n\r\nI agree with you regarding `not the most strategic thing` (I raised the doubt before, but we decided to continue with this approach to see how things go)",
"@LysandreJik just reminded me that the migration to the new cache system on huggingface_hub will magically support repo renames so this won't be a problem in the future!"
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
The user `jpelhaw` does not exist on the Hub, and the test fails at the moment.
Run locally with this PR: the doctest passes for this model now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18418/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18418",
"html_url": "https://github.com/huggingface/transformers/pull/18418",
"diff_url": "https://github.com/huggingface/transformers/pull/18418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18418.patch",
"merged_at": 1659445450000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18417
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18417/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18417/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18417/events
|
https://github.com/huggingface/transformers/issues/18417
| 1,325,470,607
|
I_kwDOCUB6oc5PAROP
| 18,417
|
run_clip.py RuntimeError
|
{
"login": "gongshaojie12",
"id": 6407116,
"node_id": "MDQ6VXNlcjY0MDcxMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6407116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gongshaojie12",
"html_url": "https://github.com/gongshaojie12",
"followers_url": "https://api.github.com/users/gongshaojie12/followers",
"following_url": "https://api.github.com/users/gongshaojie12/following{/other_user}",
"gists_url": "https://api.github.com/users/gongshaojie12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gongshaojie12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gongshaojie12/subscriptions",
"organizations_url": "https://api.github.com/users/gongshaojie12/orgs",
"repos_url": "https://api.github.com/users/gongshaojie12/repos",
"events_url": "https://api.github.com/users/gongshaojie12/events{/privacy}",
"received_events_url": "https://api.github.com/users/gongshaojie12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @gongshaojie12, thanks for the issue! Could you please provide the full command you run? cc @ydshieh; I believe you have worked with this script in the past",
"@gongshaojie12 Could you also provide a bit more information. For example, do you download and use the COCO dataset as in the README?",
"Hey @ydshieh @LysandreJik thank you very much for your replies.\r\n\r\nMy steps are as follows:\r\n\r\n1,Create a `VisionTextDualEncoderModel`\r\n\r\n```\r\nfrom transformers import (\r\n VisionTextDualEncoderModel, \r\n VisionTextDualEncoderProcessor, \r\n AutoTokenizer, \r\n AutoFeatureExtractor\r\n)\r\n\r\nmodel = VisionTextDualEncoderModel.from_vision_text_pretrained(\r\n \"openai/clip-vit-base-patch32\", \"roberta-base\"\r\n)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\r\nfeat_ext = AutoFeatureExtractor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nprocessor = VisionTextDualEncoderProcessor(feat_ext, tokenizer)\r\n\r\nmodel.save_pretrained(\"clip-roberta\")\r\nprocessor.save_pretrained(\"clip-roberta\")\r\n```\r\n\r\n2,Manually download COCO dataset to /home/gsj/data directory\r\n\r\n\r\n\r\n3,Full run command:\r\n```\r\npython run_clip.py \\\r\n--output_dir clip-roberta-finetuned \\ \r\n--model_name_or_path clip-roberta/ \\ \r\n--data_dir /home/gsj/data \\ \r\n--dataset_name ydshieh/coco_dataset_script \\ \r\n--dataset_config_name=2017 \\ \r\n--image_column image_path \\ \r\n--caption_column caption \\ \r\n--remove_unused_columns=False \\ \r\n--do_train \\\r\n--do_eval \\ \r\n--per_device_train_batch_size=\"64\" \\ \r\n--per_device_eval_batch_size=\"64\" \\ \r\n--learning_rate=\"5e-5\" \\\r\n--warmup_steps=\"0\" \\\r\n--weight_decay 0.1 \\ \r\n--overwrite_output_dir\r\n```",
"In addition, I commented the line `image_transformations = torch.jit.script(image_transformations)` and added some `prints`. The complete `run_clip.py` is as follows:\r\n[run_clip.zip](https://github.com/huggingface/transformers/files/9267204/run_clip.zip)\r\n",
"Hi @gongshaojie12 , I have to change\r\n\r\n```python\r\n train_dataset = dataset[\"train\"][:2000]\r\n```\r\nto\r\n```\r\n train_dataset = dataset[\"train\"]\r\n data_args.max_train_samples = 2000\r\n```\r\notherwise get an attribute error. (Ideally, we should specify this limits in the command line). Same for the validation dataset.\r\n\r\nI am wondering if you face the issue when running on the whole dataset. With the limits of `2000` and `500` (that are in your script), I am not able to reproduce.",
"@gongshaojie12 I want to double check if you use multiple GPUs ??",
"> Hi @gongshaojie12 , I have to change\r\n> \r\n> ```python\r\n> train_dataset = dataset[\"train\"][:2000]\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```\r\n> train_dataset = dataset[\"train\"]\r\n> data_args.max_train_samples = 2000\r\n> ```\r\n> \r\n> otherwise get an attribute error. (Ideally, we should specify this limits in the command line). Same for the validation dataset.\r\n> \r\n> I am wondering if you face the issue when running on the whole dataset. With the limits of `2000` and `500` (that are in your script), I am not able to reproduce.\r\n\r\nHi @ydshieh thank you for your reply. Because the GPU machine is in the company, I can't run it on the whole dataset right now, when I come back to the company in two days I will run on the whole dataset,and feedback the results.\r\n\r\nAt the same time, after adding the code `data_args.max_train_samples = 2000`, I will also test whether it is running normally on my GPU machine",
"> @gongshaojie12 I want to double check if you use multiple GPUs ??\r\n\r\nHi @ydshieh ,yes, I used two GPUs for training",
"> Hi @gongshaojie12 , I have to change\r\n> \r\n> ```python\r\n> train_dataset = dataset[\"train\"][:2000]\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```\r\n> train_dataset = dataset[\"train\"]\r\n> data_args.max_train_samples = 2000\r\n> ```\r\n> \r\n> otherwise get an attribute error. (Ideally, we should specify this limits in the command line). Same for the validation dataset.\r\n> \r\n> I am wondering if you face the issue when running on the whole dataset. With the limits of `2000` and `500` (that are in your script), I am not able to reproduce.\r\n\r\nHi, @ydshieh When running on the whole dataset, I still get the following error:\r\n```\r\n\r\n[INFO|configuration_utils.py:446] 2022-08-07 22:36:34,789 >> Configuration saved in clip-roberta-finetuned/checkpoint-1500/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 22:36:36,673 >> Model weights saved in clip-roberta-finetuned/checkpoint-1500/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n{'loss': 0.5521, 'learning_rate': 4.2791234140715114e-05, 'epoch': 0.43}\r\n 14%|ββββββ | 2000/13872 [50:51<4:57:11, 1.50s/it][INFO|trainer.py:2644] 2022-08-07 22:49:16,799 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-2000\r\n[INFO|configuration_utils.py:446] 2022-08-07 22:49:16,800 >> Configuration saved in clip-roberta-finetuned/checkpoint-2000/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 22:49:18,741 >> Model weights saved in clip-roberta-finetuned/checkpoint-2000/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n 
warnings.warn('Was asked to gather along dimension 0, but all '\r\n{'loss': 0.5047, 'learning_rate': 4.098904267589389e-05, 'epoch': 0.54}\r\n 18%|βββββββ | 2500/13872 [1:03:34<4:45:42, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:01:59,622 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-2500\r\n[INFO|configuration_utils.py:446] 2022-08-07 23:01:59,624 >> Configuration saved in clip-roberta-finetuned/checkpoint-2500/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 23:02:01,520 >> Model weights saved in clip-roberta-finetuned/checkpoint-2500/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n{'loss': 0.4655, 'learning_rate': 3.9186851211072664e-05, 'epoch': 0.65} ^[[B^[[B^[[B\r\n 22%|ββββββββ | 3000/13872 [1:16:13<4:34:09, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:14:38,286 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-3000\r\n[INFO|configuration_utils.py:446] 2022-08-07 23:14:38,287 >> Configuration saved in clip-roberta-finetuned/checkpoint-3000/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 23:14:40,239 >> Model weights saved in clip-roberta-finetuned/checkpoint-3000/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n{'loss': 0.4323, 'learning_rate': 3.7384659746251445e-05, 'epoch': 0.76} ^[[B^[[B^[[B\r\n 25%|βββββββββ | 3500/13872 [1:28:56<4:20:36, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:27:21,056 >> Saving model checkpoint to 
clip-roberta-finetuned/checkpoint-3500\r\n[INFO|configuration_utils.py:446] 2022-08-07 23:27:21,057 >> Configuration saved in clip-roberta-finetuned/checkpoint-3500/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 23:27:22,967 >> Model weights saved in clip-roberta-finetuned/checkpoint-3500/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n{'loss': 0.4047, 'learning_rate': 3.558246828143022e-05, 'epoch': 0.87}\r\n 29%|ββββββββββ | 4000/13872 [1:41:37<4:07:33, 1.50s/it][INFO|trainer.py:2644] 2022-08-07 23:40:02,408 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-4000\r\n[INFO|configuration_utils.py:446] 2022-08-07 23:40:02,409 >> Configuration saved in clip-roberta-finetuned/checkpoint-4000/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 23:40:04,339 >> Model weights saved in clip-roberta-finetuned/checkpoint-4000/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n{'loss': 0.3859, 'learning_rate': 3.3780276816608994e-05, 'epoch': 0.97}\r\n 32%|βββββββββββ | 4500/13872 [1:54:19<3:55:46, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:52:44,544 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-4500\r\n[INFO|configuration_utils.py:446] 2022-08-07 23:52:44,546 >> Configuration saved in clip-roberta-finetuned/checkpoint-4500/config.json\r\n[INFO|modeling_utils.py:1567] 2022-08-07 23:52:46,431 >> Model weights saved in 
clip-roberta-finetuned/checkpoint-4500/pytorch_model.bin\r\n/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n 33%|ββββββββββββ | 4623/13872 [1:57:33<3:51:51, 1.50s/it]Traceback (most recent call last):\r\n File \"/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py\", line 539, in <module>\r\n main()\r\n File \"/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py\", line 510, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py\", line 1502, in train\r\n return inner_training_loop(\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py\", line 1744, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py\", line 2474, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py\", line 2506, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py\", line 169, in forward\r\n return self.gather(outputs, self.output_device)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py\", line 181, in gather\r\n return gather(outputs, output_device, dim=self.dim)\r\n File 
\"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py\", line 78, in gather\r\n res = gather_map(outputs)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py\", line 69, in gather_map\r\n return type(out)((k, gather_map([d[k] for d in outputs]))\r\n File \"<string>\", line 10, in __init__\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/utils/generic.py\", line 188, in __post_init__\r\n for element in iterator:\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py\", line 69, in <genexpr>\r\n return type(out)((k, gather_map([d[k] for d in outputs]))\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n return Gather.apply(target_device, dim, *outputs)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py\", line 75, in forward\r\n return comm.gather(inputs, ctx.dim, ctx.target_device)\r\n File \"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/comm.py\", line 235, in gather\r\n return torch._C._gather(tensors, dim, destination)\r\nRuntimeError: Input tensor at index 1 has invalid shape [4, 4], but expected [4, 5]\r\n 33%|ββββββββββββ | 4623/13872 [1:57:34<3:55:12, 1.53s/it]\r\nYou have new mail in /var/spool/mail/root\r\n\r\n```\r\n\r\nAlso, when adding the code `data_args.max_train_samples = 2000`, it works fine\r\n",
"Hi, it turns out that the last batch has only 9 examples, and it is splitted to a batch of `4` and another `5` elements (as we use 2 GPUs). This causes some issue for CLIP model. You can actually get the same issue very quickly by specifying\r\n```python\r\n--max_train_samples=137 --max_eval_samples=137\r\n```\r\n(remember to **remove the places of `2000` and `500` in your code first**)\r\nHere `137 = 128 + 9 = 2 * 64 + 9` (so we have a complete batch and a remaining batch)\r\n\r\nA quick solution is to add \r\n```\r\n--dataloader_drop_last True\r\n```\r\n\r\n",
"Hi, @ydshieh I got it, thanks a lot!"
] | 1,659
| 1,659
| 1,659
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Hi @patil-suraj, when I run `run_clip.py` following the steps in the [README](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/README.md), I get an error like the following:
```
[INFO|trainer.py:2644] 2022-08-02 04:07:15,699 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-4500
[INFO|configuration_utils.py:446] 2022-08-02 04:07:15,701 >> Configuration saved in clip-roberta-finetuned/checkpoint-4500/config.json
[INFO|modeling_utils.py:1567] 2022-08-02 04:07:17,602 >> Model weights saved in clip-roberta-finetuned/checkpoint-4500/pytorch_model.bin
/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
33%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 4623/13872 [1:56:27<3:50:22, 1.49s/it]Traceback (most recent call last):
File "/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 537, in <module>
main()
File "/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 508, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1502, in train
return inner_training_loop(
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1744, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 2474, in training_step
loss = self.compute_loss(model, inputs)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 2506, in compute_loss
outputs = model(**inputs)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 169, in forward
return self.gather(outputs, self.output_device)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 181, in gather
return gather(outputs, output_device, dim=self.dim)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in gather
res = gather_map(outputs)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in gather_map
return type(out)((k, gather_map([d[k] for d in outputs]))
File "<string>", line 10, in __init__
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/utils/generic.py", line 188, in __post_init__
for element in iterator:
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in <genexpr>
return type(out)((k, gather_map([d[k] for d in outputs]))
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py", line 75, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/comm.py", line 235, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: Input tensor at index 1 has invalid shape [4, 4], but expected [4, 5]
```
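For context, the shape mismatch in the traceback can be reproduced with simple arithmetic. This is only a sketch assuming two GPUs and CLIP's square `logits_per_image` output; `split_batch` is a hypothetical helper mimicking how `torch.nn.DataParallel` scatters a batch across devices:

```python
def split_batch(n, num_gpus):
    # DataParallel scatters a global batch into near-even chunks, so an
    # uneven final batch produces different per-replica batch sizes.
    base, rem = divmod(n, num_gpus)
    return [base + (1 if i < rem else 0) for i in range(num_gpus)]

# With per-device batch size 64 on 2 GPUs, a final global batch of 9 samples
# is split into chunks of 5 and 4.
chunks = split_batch(9, 2)            # [5, 4]

# CLIP returns a square logits_per_image matrix of shape [batch, batch]
# per replica, so the replicas produce (5, 5) and (4, 4).
shapes = [(b, b) for b in chunks]

# Gathering along dim 0 requires identical trailing dims, but 5 != 4,
# hence: "Input tensor at index 1 has invalid shape [4, 4], but expected [4, 5]"
```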
How can I solve this error? Thanks!
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_clip.py
### Expected behavior
`run_clip.py` runs successfully
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18417/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18416
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18416/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18416/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18416/events
|
https://github.com/huggingface/transformers/pull/18416
| 1,325,426,325
|
PR_kwDOCUB6oc48e6ZR
| 18,416
|
Add missing lang tokens in M2M100Tokenizer.get_vocab
|
{
"login": "guillaumekln",
"id": 4805513,
"node_id": "MDQ6VXNlcjQ4MDU1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4805513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaumekln",
"html_url": "https://github.com/guillaumekln",
"followers_url": "https://api.github.com/users/guillaumekln/followers",
"following_url": "https://api.github.com/users/guillaumekln/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaumekln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaumekln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaumekln/subscriptions",
"organizations_url": "https://api.github.com/users/guillaumekln/orgs",
"repos_url": "https://api.github.com/users/guillaumekln/repos",
"events_url": "https://api.github.com/users/guillaumekln/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaumekln/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"A friendly re-ping to @patil-suraj :hugs: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Maybe of interest to @ArthurZucker :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Re-ping of @ArthurZucker "
] | 1,659
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
The lang tokens were missing from `M2M100Tokenizer.get_vocab`. The `get_vocab` method is updated to match other multilingual tokenizers such as `NllbTokenizer` and `MBart50Tokenizer`.
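For illustration, a minimal sketch of the pattern now shared with those tokenizers; the `base_vocab` and `lang_token_to_id` names here are hypothetical stand-ins for the tokenizer's internal mappings, not the actual attribute names:

```python
def get_vocab(base_vocab, lang_token_to_id):
    # Start from the underlying subword vocabulary, then overlay the
    # additional language tokens so they are reported as well.
    vocab = dict(base_vocab)
    vocab.update(lang_token_to_id)
    return vocab

# Language tokens such as "__en__" now appear in the returned vocab
full = get_vocab({"hello": 0, "world": 1}, {"__en__": 128022})
```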
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@n1t0, @LysandreJik, @SaulLu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18416/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18416",
"html_url": "https://github.com/huggingface/transformers/pull/18416",
"diff_url": "https://github.com/huggingface/transformers/pull/18416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18416.patch",
"merged_at": 1666703904000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18415
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18415/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18415/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18415/events
|
https://github.com/huggingface/transformers/pull/18415
| 1,325,332,406
|
PR_kwDOCUB6oc48emsQ
| 18,415
|
Add Spanish translation of run_scripts.mdx
|
{
"login": "donelianc",
"id": 7807897,
"node_id": "MDQ6VXNlcjc4MDc4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donelianc",
"html_url": "https://github.com/donelianc",
"followers_url": "https://api.github.com/users/donelianc/followers",
"following_url": "https://api.github.com/users/donelianc/following{/other_user}",
"gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donelianc/subscriptions",
"organizations_url": "https://api.github.com/users/donelianc/orgs",
"repos_url": "https://api.github.com/users/donelianc/repos",
"events_url": "https://api.github.com/users/donelianc/events{/privacy}",
"received_events_url": "https://api.github.com/users/donelianc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@omarespejel, can you help me review this PR, please?",
"Hi @donelianc! Amazing translation. I only found a few nits in my review.",
"@omarespejel, thanks for your great review! I submitted the suggested changes in my previous commit.\r\n\r\nI'll keep my translator streak if you assign me `converting_tensorflow_models.mdx` π ",
"Thanks, @donelianc for the translation! @sgugger LGTM :)\r\n\r\n@donelianc thanks, I will add you for `converting_tensorflow_models.mdx` π"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add the Spanish translation for `run_scripts.mdx` as part of the #15947 issue.
Changes include the Spanish version of the original document and the updated `_toctree.yml` file.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Task assignment [here](https://github.com/huggingface/transformers/issues/15947#issuecomment-1196245514).
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18415/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18415",
"html_url": "https://github.com/huggingface/transformers/pull/18415",
"diff_url": "https://github.com/huggingface/transformers/pull/18415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18415.patch",
"merged_at": 1659526340000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18414
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18414/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18414/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18414/events
|
https://github.com/huggingface/transformers/pull/18414
| 1,325,312,818
|
PR_kwDOCUB6oc48einN
| 18,414
|
Add DocumentQuestionAnswering pipeline
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Narsil this is basically a skeleton implementation that I thought I'd send out sooner than later to start getting your input.\r\n\r\nI've left a few questions throughout tagged with \"TODO\" in the comments. The big question is how much/whether to reuse the code in QuestionAnsweringPipeline, which has a lot of overlap (notably preparing the spans and post-processing the output). For example, I could refactor out methods like `QuestionAnsweringPipeline.decode` to share the implementation, inherit from `QuestionAnsweringPipeline`, etc.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@Narsil thank you for the review! Before I go in and apply the comments, I thought it might be worth discussing the required (or not) `image` argument at a high level.\r\n\r\nThe reason (I think) It's important to allow users to pass in words/boxes _instead of_ an image is that users often either want to run their own OCR (e.g. using a proprietary system like Google/Microsoft) OR are extracting data from documents that have embedded text (e.g. many PDF documents, Word, Excel, etc.). Furthermore, there are a lot of pre-processing tricks that are relevant to certain OCR implementations (e.g. some try to order words by line, others by block, etc.) that have a very significant impact on BERT-inspired models like LayoutLM (because of the attention mechanism, position ids, etc.). tl;dr, having some control over words/boxes is very important if you're trying to use the pipeline in a production scenario.\r\n\r\nNow, you could argue that if they want to do this, they could use the question answering pipeline. In fact, when I started exploring HuggingFace/transformers, I did just that! The problem is that if you join everything together (into `context`), you actually lose some valuable information about how the words are separated in the document (including the distance between them). In other words -- it's very important that you retain information about which words correspond to which coordinates.\r\n\r\nI could also see an argument that if a user wants this level of control, they shouldn't use a pipeline in the first place, but the implementation of QA preprocessing and postprocessing are really compelling -- which kind of drew us to really wanting to take advantage of them vs. try to reinvent them elsewhere.\r\n\r\nHopefully that makes sense and adds some context for why I proposed making `image` optional. I'm very open to alternate solutions too, but just wanted to clarify the use case a bit. 
For example, another option could be to add a new pipeline called `DocumentQuestionAnswering` (or similar) that handles inputs of this shape. Let me know your thoughts.",
 @Narsil thank you for the review!">
"> @Narsil thank you for the review! Before I go in and apply the comments, I thought it might be worth discussing the required (or not) `image` argument at a high level.\r\n\r\nYes very much so !\r\nI think it's great to have a clear conversation about this.\r\n\r\nI will try and convey this part of the library's perspective, but having your view is great too since I probably know less about OCRs and overall document processing than you.\r\n\r\n> \r\n> The reason (I think) It's important to allow users to pass in words/boxes _instead of_ an image is that users often either want to run their own OCR (e.g. using a proprietary system like Google/Microsoft) OR are extracting data from documents that have embedded text (e.g. many PDF documents, Word, Excel, etc.). Furthermore, there are a lot of pre-processing tricks that are relevant to certain OCR implementations (e.g. some try to order words by line, others by block, etc.) that have a very significant impact on BERT-inspired models like LayoutLM (because of the attention mechanism, position ids, etc.). tl;dr, having some control over words/boxes is very important if you're trying to use the pipeline in a production scenario.\r\n\r\nThis is very interesting to know !\r\nI was under the impression that using an OCR could be streamlined much more.\r\n\r\nThe fact that the OCR has much impact on the quality of the results doesn't surprise me (and the `is_split_into_words` might play a non negligible role here)\r\n\r\n> \r\n> Now, you could argue that if they want to do this, they could use the question answering pipeline. In fact, when I started exploring HuggingFace/transformers, I did just that! The problem is that if you join everything together (into `context`), you actually lose some valuable information about how the words are separated in the document (including the distance between them). In other words -- it's very important that you retain information about which words correspond to which coordinates.\r\n\r\nYes I felt the same thing when reading your code and pondering whether it should actually belong or not in `QA` instead of `VQA`. I think you are right, the image information is too valuable to be thrown away.\r\n\r\n> \r\n> I could also see an argument that if a user wants this level of control, they shouldn't use a pipeline in the first place, but the implementation of QA preprocessing and postprocessing are really compelling -- which kind of drew us to really wanting to take advantage of them vs. try to reinvent them elsewhere.\r\n\r\nVery reasonable ! :)\r\n\r\n> \r\n> Hopefully that makes sense and adds some context for why I proposed making `image` optional. I'm very open to alternate solutions too, but just wanted to clarify the use case a bit. Let me know your thoughts.\r\n\r\nOk, I am going to recap the main goal of the pipeline:\r\n\r\n```\r\nA pipeline is a tool to make ML accessible to non ML practitioners.\r\n```\r\nThat's the first goal, and in doing that, we don't want to hide any ML details that might hurt users unknowingly (like chunking things that can hurt output quality without being opt-in). So hide as many things as possible, when the defaults are correct, but don't hide any magic that could be use-case dependent. For instance, truncating without asking for explicit user consent (via a parameter) means the user will try and send large chunks of texts, and get an output that will correspond only to a tiny chunk of it, without him realizing it.\r\n\r\nAnother secondary goal is to make them as reusable/extendable as possible, but only when it doesn't contradict any of the previous goals. \r\n\r\nWith that in mind, you see why having inputs/outputs that depend on the actual model type, forces non ML practitioners to know about model types, where the goal is to try and lift that burden. If we can ensure that sending the same input, will receive the same output, it means users can jump very easily between models. So when AwesomeModelA comes out, you can just swap its name and make it work. Same goes for iterations/fine-tuning of the same model or different models and so on.\r\n\r\nHere I can see I think two solutions:\r\n\r\n1/ We create a new pipeline (`DocumentQuestionAnsweringPipeline` ?). The set of I/O is different so we should have different pipelines for these. For this pipeline it seems the input is `boxes` + `words` (which I would call `texts` personally as OCRs probably extract full string and don't necessarily reason about words). It's easy, but puts all the burden of the OCR on the user upfront. (If OCR choice is super tricky and we cannot realistically make that choice in a general fashion for users, it's probably the way to go).\r\n\r\n2/ We keep using `VisualQuestionAnswering` but we enable a very easy way to use a custom `OCR`:\r\n - Most users will trigger an initial error that `pytesseract` (or something else) is not present and get suggested to install it to get an easy impression about results (mention all the caveats/link to some docs on how to choose the OCR for advanced users).\r\n - When those sane defaults are present, the pipelines will use those.\r\n - For experienced users that know about how OCR can impact deeply the results we can enable easy overriding like:\r\n\r\n```python\r\npipe = pipeline(\"mymodel-id\", ocr=MyOCR())\r\n\r\nclass MyOCR:\r\n def forward(self, image):\r\n ...\r\n return texts, boxes\r\n```\r\n\r\nWhat do you think ? Which solution makes most sense from your perspective ?\r\n\r\nAlso regardless of choice here, we can extract whatever makes sense as an individual function within `qa` so you can reuse it, in a pipeline or anywhere else.\r\n\r\n\r\n",
"For Option 1, to clarify, would you be open to allowing the user to pass in (optional) words and boxes? I think this is conceptually similar to your point about audio pipelines using ffmpeg but I may be misunderstanding something. Essentially, we'd run OCR on the image if the words/boxes are not passed in. And either way, depending on the model, pass the image into the model as well. If we made the words/boxes an optional input, then users could basically assert control where they'd like to, but the simple use cases will just work out of the box.\r\n\r\nPersonally, I think option 1 is the way to go. I can at least sketch out the code as a next step and we can take a look and reevaluate.",
"> would you be open to allowing the user to pass in (optional) words and boxes\r\n\r\nI have the same uneasiness with **any** optional inputs. Either the pipeline needs the data or it doesn't. IMO the incoming data should be as strongly typed as possible, and definitely the computation should not depend on what the user actually sent (because then it becomes really hard to reason about what actually happened on a piece of data, which OCR was used ? Were the boxes correct ? etc...).\r\n\r\nI feel like I am missing a piece of the puzzle here, so maybe we can do the other way around, let's try to devise what we would actually like to write as a user for this document processing.\r\n\r\nIMO the simplest is something like:\r\n\r\n```python\r\npipe = pipeline(task=\"visual-question-answering\", model=\"layoutlmv3-xxx\")\r\n\r\nout = pipe(image=Image.load(\"id_card.jpg\"), question=\"What is this person's address ?\")\r\n# out == [{answer: \"24 nerdy street\", score:0.8}, {\"answer\": \"John travolta\", \"score\": 0.1}]\r\n```\r\n\r\nOr maybe be a little more strict:\r\n```python\r\npipe = pipeline(task=\"visual-question-answering\", model=\"layoutlmv3-xxx\")\r\n# ValueError : This model is a document processing model, and requires an OCR to be able to use this pipeline,\r\n# please pass an OCR. For demos, you can use `from transformers.pipelines import DefaultOCR`\r\n\r\npipe = pipeline(task=\"visual-question-answering\", model=\"layoutlmv3-xxx\", ocr=DefaultOCR())\r\n\r\nout = pipe(image=Image.load(\"id_card.jpg\"), question=\"What is this person's address ?\")\r\n# out == [{answer: \"24 nerdy street\", score:0.8}, {\"answer\": \"John travolta\", \"score\": 0.1}, ...]\r\n```",
"Ahh, okay, yes I agree that working from these examples is really helpful. Let me first be precise about what is required vs. not:\r\n\r\n- In all LayoutLM models, words and bounding boxes are technically required. The model itself requires them to be formatted a certain way (e.g. box coordinates are axis aligned and normalized between 0->1000), but it _does not_ impose where they came from. The inspiration is something like \"BERT + bounding boxes\".\r\n- In LayoutLMv2 and v3, the models additionally accept an image (normalized to 224x224) as input. Theoretically, the model is able to use information from the image alongside the encoded words and boxes. Notably, in LayoutLMv1, you do not need to provide the image. And furthermore, you _can_ fine tune v2 and v3 for many use cases _without_ the additional image and achieve similar or in some cases better results.\r\n- The `LayoutLMv2` and `LayoutLMv3` processor classes in `transformers` optionally accept an `apply_ocr` argument. If set to `True`, while doing feature extraction from the image, they'll also use the tesseract library to run OCR and return them back out to caller, so you can pass them into the model. There is some tricky control flow throughout these classes that branches based on whether the user provides their own OCR or not.\r\n\r\nI think part of why it's structured this way, or at least one of the advantages, is that in practice, since OCR can be costly (time and $$), many document processing practitioners will run OCR as a pre-processing step, so you can reuse its results across many invocations of extractions/questions/etc. E.g. imagine you were building an app that lets you point at a folder on your computer and then ask the files questions interactively. 
You'd probably implement this app by first running OCR on each file, and then re-using the OCR each time a user provides a new question as input.\r\n\r\nI think with this in mind, there are probably a few different use cases that would be ideal to capture in the pipeline. I fully recognize that some of these may qualify as \"more advanced\" than the scope of a pipeline, so I'm open to and appreciate your push back on where that may be the case.\r\n\r\n### Scenario 1: Single file, single question\r\n\r\n(your example above)\r\n\r\n```python\r\npipe = pipeline(task=\"visual-question-answering\", model=\"layoutlmv3-xxx\")\r\n\r\nout = pipe(image=Image.load(\"id_card.jpg\"), question=\"What is this person's address ?\")\r\n# out == [{answer: \"24 nerdy street\", score:0.8}, {\"answer\": \"John travolta\", \"score\": 0.1}]\r\n```\r\n\r\n### Scenario 2: Interactive REPL\r\n\r\n(this is an approximation of a real-world use case)\r\n\r\n```python\r\npipe = pipeline(task=\"visual-question-answering\", model=\"layoutlmv3-xxx\")\r\n\r\nimg = Image.load(\"id_card.jpg\")\r\nwords, boxes = my_favorite_ocr(img)\r\nwhile True:\r\n question = input(\"Ask a question of the image: \")\r\n print(pipe(image=img, question=question, words=words, boxes=boxes)\r\n```\r\n\r\n### Scenario 3: Mixed Media Types\r\n\r\n```python\r\nimg = rasterize(\"my_tax_form.pdf\")\r\nwords, boxes = text_extract(\"my_tax_form.pdf\")\r\n\r\n# NOTE: in certain models, e.g. LayoutLMv1, you do not even need to rasterize/pass in the image as input in this case\r\nout = pipe(image=img, question=\"What is the person's income?\", words=words, boxes=boxes)\r\n# out == [{answer: \"$10\", score:0.8}, {\"answer\": \"$1000\", \"score\": 0.1}]\r\n```\r\n\r\nI can certainly imagine some alternatives:\r\n\r\n- Words/boxes could be required inputs, and we could simply enforce that the user run OCR (or alternative) before using the pipeline. 
I think in this case, the image _should_ be considered optional input, simply because certain document processing models take it as input, and others don't.\r\n- Another would be to allow the user to provide a more advanced \"OCR\" input that could accept things like PDFs, spreadsheets, etc. and let it call out to OCR or use something else depending on the media type. I would say from experience, handling various document types is a can of worms and it prevents you from reusing pre-processed results across calls to the pipeline. (I believe this is your second suggestion).\r\n- My original suggestion: words/boxes could be optional, and when not provided, we use a default OCR implementation. One more advantage of this approach is that it's consistent with the LayoutLMv2 processor classes. So if a user starts with this pipeline, and then wants to dig one level deeper to the processor, they'll have a familiar pattern.\r\n\r\nLet me know your thoughts. In certain options (namely the first), I think it'd be a requirement for it to be a `DocumentQuestionAnsweringPipeline` since the _required_ inputs are different than the `VisualQuestionAnsweringPipeline`. In options 2 or 3, that might not be the case. I don't have a strong opinion about this but just wanted to clarify my understanding/thinking.",
"Ok, thanks for all the explanation !\r\n\r\nNow I think I am starting to understand it and all the use cases you displayed really make sense !\r\n\r\nI think we can ignore layoutlmv1 not requiring the image so we can keep the number of cases rather small. (If you really know what you're doing you could always send an empty image, or we could just make the code in such a way that sending `None` doesn't break anything without actively trying to sanitize it)\r\n\r\nSince the OCR is indeed quite costly (or can come from a non image !) I can really understand why we would need those optional `boxes` and `texts`. So let's support them. (We can make the docs extremely clear on that front)\r\n\r\nI think `example 1` should really be the focus for newcoming users, and we need to be able to support `example 2` and `example 3` to be usable in prod.\r\n\r\nAnd if a user sends `boxes + texts` then we can simply skip the OCR part. \r\n\r\n_Actually, wdyt about having a list of tuples instead of two lists ? Two lists enables having different sized lists which would silently break things, I usually tend to prefer arguments that cannot by design be inconsistent, and lists of tuples cannot have different sizes and will necessarily raise errors when the tuple is unwrapped, so less room for error_\r\n\r\n\r\nI think all 3 examples could become tests so that we make sure that those cases are maintained through time.\r\n\r\n\r\nI will ping @NielsRogge which is also really involved in vision and might have other insights.",
"Awesome, I really appreciate you taking the time to dig into this with me. I'll sketch this all out as a next step. And I agree that we can leave the empty image (or None image) as a performance optimization for advanced users. The one thing we'll need to be careful of is that the LayoutLMv1 model gets upset if you _do_ pass in the image (i.e. it's optional for v2/v3 but not for v1 -- v1 does not accept images at all). So if the workaround is to pass in an empty image, we'll just need to figure out a way to cleverly avoid handing it to the model (e.g. implement a no-op feature extractor that takes the image as input and returns an empty dict).\r\n\r\nWith all of this context in mind, do you have a preference for whether we extend the existing `VisualQuestionAnsweringPipeline` or isolate this logic into a `DocumentQuestionAnsweringPipeline`? I'm okay with either, although I am leaning a bit towards the latter so that we can be very clear with the examples/documentation about the use cases (and not muddy the waters with the `VisualQuestionAnsweringPipeline` which operates directly on the image each time). But of course, I'm open either way.\r\n\r\n> Actually, wdyt about having a list of tuples instead of two lists ? Two lists enables having different sized lists which would silently break things, I usually tend to prefer arguments that cannot by design be inconsistent, and lists of tuples cannot have different sizes and will necessarily raise errors when the tuple is unwrapped, so less room for error_\r\n\r\nI have no concerns with this. The runtime \"perf hit\" of converting one format to the other is trivial compared to the other operations involved. I think it's a smart way to prevent an accidental length mismatch.\r\n\r\n> I think all 3 examples could become tests so that we make sure that those cases are maintained through time.\r\n\r\nGreat point. I'm happy to contribute these.\r\n\r\n\r\n\r\n\r\n",
"> With all of this context in mind, do you have a preference for whether we extend the existing VisualQuestionAnsweringPipeline or isolate this logic into a DocumentQuestionAnsweringPipeline? I'm okay with either,\r\n\r\nGo with `DocumentQuestionAnsweringPipeline` for now then. In general we try to avoid adding pipelines when we can and when the set of input/output is the same as it makes discoverability and usability on hf.co easier/more consistent.\r\n\r\nBut you made great points explaining core differences (especially the pdf example for instance), IMO.\r\nIf we decide to revisit later or other members have different opinions, we might revisit later (we would do the lifting, and since we're committed to zero breaking change you would still be able to use your code regardless of internal decisions)",
"Okay great, as a next step I'll rework this PR to sketch out `DocumentQuestionAnsweringPipeline` and address some of your comments on the original change (but may not do all in the first pass, just to optimize for getting feedback sooner). Thanks again for the back and forth and look forward to the next iteration!",
"I just pushed an update that moves the logic into a new `DocumentQuestionAnsweringPipeline`. I still need to do a few major things:\r\n\r\n- Integrate OCR\r\n- Figure out padding (specifically -- using \"return_tensors\" basically requires padding, so I could either enforce it or do the `unsqueeze` trick used in the qa pipeline)\r\n- Integrate the post-processing from the QA pipeline.\r\n\r\nI did some sanity testing with a model we've trained and can confirm that it is starting to work! I think we're headed in the right direction.",
"> Figure out padding (specifically -- using \"return_tensors\" basically requires padding, so I could either enforce it or do the unsqueeze trick used in the qa pipeline)\r\n\r\nNot sure I understand, in the pipelines the padding should be done by the pipeline itself, not by the `preprocess` (It just allows for more flexible control over how things are executed). `preprocess` only processes 1 input at a time, so padding shouldn't be necessary (it might be activable, like truncating, but I don't think it should be the default)",
"> Not sure I understand, in the pipelines the padding should be done by the pipeline itself, not by the preprocess (It just allows for more flexible control over how things are executed). preprocess only processes 1 input at a time, so padding shouldn't be necessary (it might be activable, like truncating, but I don't think it should be the default)\r\n\r\nIf I'm understanding the QA pipeline code correctly, the reason padding is relevant is that if you stride a document (e.g. one with more than 512 words), then one item that you preprocess might result multiple inputs to the model that get concatenated together in one big tensor. The question answering pipeline has to solve for this too, and it seems to do that by (a) _not_ returning tensors from `tokenize()`, and then (b) while constructing the final output, using `tensor.unsqueeze(0)` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L355)) to effectively pad each element to the same size.\r\n\r\nI'm happy to do it that way if you prefer -- my working assumption was that the \"padding\" argument to the tokenizer accomplishes the same thing (but certainly may be missing some interesting implementation detail).\r\n\r\n",
 If I'm understanding the QA pipeline code correctly">
"> If I'm understanding the QA pipeline code correctly, the reason padding is relevant is that if you stride a document (e.g. one with more than 512 words), then one item that you preprocess might result multiple inputs to the model that get concatenated together in one big tensor. The question answering pipeline has to solve for this too, and it seems to do that by (a) _not_ returning tensors from `tokenize()`, and then (b) while constructing the final output, using `tensor.unsqueeze(0)` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L355)) to effectively pad each element to the same size.\r\n\r\nOk, this is what I alluded to, QA solves this by using `return_overflowing_tokens` (and the padding is set to `do_no_pad` by default).\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L283\r\n\r\nQA solves this by using `ChunkPipeline`. \r\nIf you want to tackle this, you're more than welcome to it, but IMO it's going to be easier to do it in two steps, and two separate PRs.\r\n\r\nAs a first step I would recommend simply not treating padding, and let sequences too long go to the model, which will then crash. It's aligned with the \"don't hide\" anything policy. Some models can handle long range, most cannot, so not trying to hide that fact is IMO a good thing. We can add an auto chunking in a follow UP PR.\r\n\r\n\r\n",
"> As a first step I would recommend simply not treating padding, and let sequences too long go to to the model, which will then crash. It's aligned with the \"don't hide\" anything policy. Some models can handle long range, most cannot, so not trying to hide that fact is IMO a good thing. We can add an auto chunking in a follow UP PR.\r\n\r\nThat plan works for me! I'll provide a new update shortly.",
"@Narsil I just updated the PR with a few changes that remove the padding/striding stuff (for now) and add some docs. The next steps are to integrate OCR and then refactor/borrow the post-processing code from the QA pipeline. I'll keep working on that but wanted to post an intermediate update in case you had a chance to take a quick look.",
"@Narsil another thought / question I had while working on the OCR stuff... Currently, both LayoutLMv2 and v3 have a feature extractor which _by default_ applies OCR. By incorporating OCR into the pipeline itself (which I'm doing by just borrowing their code), we essentially take over that functionality. So, a user may have to do something like this:\r\n\r\n```python\r\npipe = pipeline(task=\"visual-question-answering\", model=\"layoutlmv3-xxx\", tokenizer=\"layoutlmv3-xxx\", feature_extractor=AutoFeatureExtractor.from_pretrained(\"layoutlmv3-xxx\", apply_ocr=False))\r\n\r\nout = pipe(image=Image.load(\"id_card.jpg\"), question=\"What is this person's address ?\")\r\n# out == [{answer: \"24 nerdy street\", score:0.8}, {\"answer\": \"John travolta\", \"score\": 0.1}]\r\n```\r\n\r\nEssentially, we'll want to rely on the pipeline's OCR, not the feature extractor's. However as a result, we make the user experience a bit awkward (since they have to provide \"apply_ocr\" `False` in one place). I can think of a few solutions to this:\r\n\r\n1. We could rely on the user providing a feature extractor as input, and then invoke the feature extractor in `preprocess()`, essentially following the conventions that `LayoutLMv2Processor`/`LayoutLMv3Processor` do (call the feature extractor and then the tokenizer). If they provide neither a feature extractor nor words, we can provide a helpful error message that they must provide a feature extractor that returns words. One major downside to this approach is that users of models like LayoutLMv1 will _not_ ever get OCR run for them by the pipeline, but I'm open to implementing a feature extractor for LayoutLMv1 to solve this.\r\n2. If they provide a feature extractor, we could try to check whether it'll run OCR (e.g. by checking whether its \"apply_ocr\" attribute is `True`). If it will, then we can rely on the feature extractor to provide words/boxes. If not, and they haven't passed in words to the pipeline, then we can run OCR. 
I think the major downside is depending on a non-standard flag (`apply_ocr`) in the generic pipeline code. I'm not sure how you all think about this tradeoff -- it may be fine to do. A slight variant of this is to test whether _after_ running the feature extractor, we have `words` and `boxes` available in its output.\r\n3. We could just ignore this altogether and let the user be the expert. I.e. if they pass in a feature extractor and have not specified `apply_ocr=False`, it will run OCR twice (once in the pipeline and once in the feature extractor), which is an unnecessary perf hit, but makes no assumptions about the feature extractor itself.\r\n\r\nLet me know your thoughts.",
"@Narsil I think I've implemented all of what we talked about (and apologies in advance if I missed anything). To summarize:\r\n\r\n- Padding/truncation are gone. I've left them commented out, because we plan to address them as a follow up (in this or a fast-follow PR), but I'm happy to remove those comments too if you prefer.\r\n- OCR is integrated. Regarding my question just above, I went down the route of option 2, and check whether the feature extractor returned words/boxes before trying to run OCR, which the pipeline natively supports.\r\n- I refactored the tricky postprocessing parts of the QA pipeline into helper functions which I call from the document question answering pipeline.\r\n- I've copied the relevant subsets of the code (including PR #18407) and published it [here](https://huggingface.co/impira/layoutlm-document-qa) with some examples. Feel free to play around with it!\r\n \r\nAs a next step, I'd appreciate a quick review from you on these major points to verify whether we're on the right track. I'd like to add the tests and more documentation next (pendingΒ your feedback on if we are in a good place with the interface/overall design). I also have a few questions regarding the tests/docs:\r\n\r\n- The existing tests for both question-answering and visual-question-answering use models published on HF. There aren't (currently that I could find) any reliablen doc qa models. I have published one [here](https://huggingface.co/impira/layoutlm-document-qa), but there's a bit of a soft dependency on PR #18407 because the model we're publishing uses LayoutLMv1. You can [access the model w/ remote code enabled](https://huggingface.co/docs/transformers/main/en/custom_models#using-a-model-with-custom-code), but I'm not sure that's advisable for a test in the repo. It'd also be good to have tests that span multiple models (e.g. v1-v3) because there are some differences in their tokenizers.\r\n- Is there any way to use a processor in a pipeline? 
The reason I ask is that LayoutLMv2 and v3 have some interesting logic encapsulated in their processors (e.g. LayoutLMv2 renames the input to the model from `image` to `pixel_values` and v3 to `image_features`). It'd be great to reuse the logic in those classes within the pipeline. Alternatively, I could just support LayoutLMv1 to start with and we can work on adding support for the other versions in a follow up PR.\r\n- Should I add docs anywhere other than the code itself (which I assume would show up [here](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.QuestionAnsweringPipeline))? For example a place like [here](https://huggingface.co/docs/transformers/main/en/task_summary#question-answering)) as a tutorial for how document question answering works?\r\n",
"@Narsil gentle nudge in case this slipped from your queue :)",
"> So rather than extending the VQA pipeline, it seems that the design has been updated to create a separate DocumentQuestionAnswering pipeline?\r\n\r\nYes that's correct.\r\n\r\n> Also, I'd like to note that there's a new model I'm working on called Donut which solved DocVQA in a generative manner. Donut is generative T5-like model, which simply generates the answer given a question. Would this pipeline be able to support that model as well?\r\n\r\nThe interface should support it. As input, you provide an image+question (and optional word/box pairs if you've pre-run OCR) and as output you receive an answer + start/end words. For a generative model, I could imagine either omitting the start/end or the pipeline doing its best to find it in the document if it exists.\r\n\r\nCode-wise, there may be some refactoring _within_ the pipeline implementation to best support a model like Donut. Very happy to collaborate with you on that.",
"@NielsRogge congrats on pushing Donut -- I just saw it come through. I've integrated it into the pipeline, and it works! The code gets a bit splintered _inside_ the pipeline which now handles the `VisionEncoderDecoderModel` case a bit differently. I would definitely appreciate feedback on how the control flow handles both cases (CC @Narsil too). I think one thing that would help is if pipelines could accept processors as input. We could potentially capture some of the LayoutLM-specific tokenization logic into a `LayoutLMProcessor` (similar to `LayoutLMv2Processor`), and then simply invoke processor-specific commands for each type of model within the pipeline.\r\n\r\nLet me know your thoughts. And feel free to take it for a spin! For example, the following commands work:\r\n\r\n```python\r\n\r\nfrom transformers import AutoTokenizer, pipeline\r\nnlp = pipeline('document-question-answering', model='naver-clova-ix/donut-base-finetuned-docvqa', tokenizer=AutoTokenizer.from_pretrained(\"naver-clova-ix/donut-base-finetuned-docvqa\"), feature_extractor='naver-clova-ix/donut-base-finetuned-docvqa')\r\n\r\nnlp(\"https://templates.invoicehome.com/invoice-template-us-neat-750px.png\", \"What is the invoice total?\")\r\n```",
"Hi @Narsil, thanks for the feedback. I will address your comments. I appreciate your willingness to pull down the code and get your hands dirty. Please let me know if I can help at all with that. I need to quickly rebase and fix one or two bugs which I will do ASAP (I broke a couple things while adding support for Donut).\r\n\r\nLet me roll up a couple of high level questions that are open. I would greatly appreciate your feedback on these:\r\n\r\n1 - Is it possible to use `Processor`s in pipelines? I think _some_ (but not a whole lot) of the logic for Donut, and a whole lot of the logic for LayoutLMv2-3 is present in their processor class and would need to be duplicated here otherwise. Likewise, we could probably create a processor for LayoutLMv1 and place some of the logic there.\r\n\r\n2 - While working some more on this in real-world scenarios, I realized that for models like Donut, which operate _only_ on the images, grouping things together by page is actually really important (so you can pass in one input per page). I think it might be useful to change the input format to either be a list of images, or something like `[(image, [(word, box)])]`, where each tuple has an image and a list of word/boxes. WDYT?",
"@ankrgyl Here are some tests we can integrate If you're ok (feel free to modify, the important part is to have exact values in the asserts everywhere except `run_pipeline_test`.\r\n\r\nhttps://github.com/huggingface/transformers/pull/18732/commits/2e8b01cd5e65aa64956e0f5e56e29ea8391c3955",
 1 - Is it possible to use Processors in pipelines?">
"> 1 - Is it possible to use Processors in pipelines? I think some (but not a whole lot) of the logic for Donut, and a whole lot of the logic for LayoutLMv2-3 is present in their processor class and would need to be duplicated here otherwise. Likewise, we could probably create a processor for LayoutLMv1 and place some of the logic there.\r\n\r\nIn general, `processor` should be extremely shallow, and the real logic should actually be in `feature_extractor`. Leveraging it is not only encouraged but extremely welcome, as they can contain model-specific details that the pipeline then doesn't have to care about.\r\n\r\n> 2 - While working some more on this in real-world scenarios, I realized that for models like Donut, which operate only on the images, grouping things together by page is actually really important (so you can pass in one input per page). I think it might be useful to change the input format to either be a list of images, or something like [(image, [(word, box)])], where each tuple has an image and a list of word/boxes. WDYT?\r\n\r\nYou mean sharing the question throughout the pipeline?\r\nI don't know, I think we should focus on the simple solution first, see later for different use cases.\r\nSending it at every step is not too hard, and the optimizations should be dwarfed compared to other issues that might occur (like feeding the GPU fast enough and image processing). Happy to be proven wrong; I haven't checked (but in my experience the tokenizer is rarely a bottleneck).\r\n\r\nAny list, Dataset, or generator should probably be handled by the base class, not by the pipeline directly. ",
"@ankrgyl could you rename the PR to better describe what it does (as it doesn't seem to extend the existing VQA pipeline)?\r\n\r\nI'll do a second round of review soon. ",
" > In general, `processor` should be extremely shallow, and the real logic should actually be in `feature_extractor`. Leveraging it is not only encouraged but extremely welcome as they can contain model specific details that the pipeline then doesn't have to care aobut.\r\n\r\nOkay got it. That makes sense\r\n\r\n> > 2 - While working some more on this in real-world scenarios, I realized that for models like Donut, which operate only on the images, grouping things together by page is actually really important (so you can pass in one input per page). I think it might be useful to change the input format to either be a list of images, or something like [(image, [(word, box)])], where each tuple has an image and a list of word/boxes. WDYT?\r\n> \r\n> You mean sharing the question throughout the pipeline ? I don't know, I think we should focus on the simple solution first, see later for different use cases. Sending it at every step is not too hard, and the optimizations should be dwarfed compared to other issues that might occur (like feeding the GPU fast enough and image processing). Happy to be proven wrong I haven't checked (but in my experience tokenizer is rarely a bottleneck)\r\n> \r\n> Anything list, Dataset or generator should probably be handled by the base class, not by the pipeline directly.\r\n\r\nNo, I'm talking about the case where you're working with a document that has multiple pages. Each page consists of an image and potentially word/box pairs (if OCR is pre-run). In document processing, it's a common request to try to find an answer from more than one page (e.g. find the total from a 2 page invoice). Right now, as constructed, you can only pass one page at a time, since you can pass in at most one image. That means as a user, you'd have to run the pipeline on each page, and then pick the highest confidence answer. 
Ideally, this logic should live in the pipeline, because the pipeline can have some logic that picks the best answer across pages.\r\n\r\nThe main reason I'm wondering about it now is that it affects the input shape. For example, if you have a 3-page document, the code could look like:\r\n\r\n```python\r\npages = []\r\nfor page in my_pdf.pages():\r\n pages.append({\"image\": Image.load(page.image()), \"word_boxes\": tesseract(page.image())})\r\n\r\npipe(image=pages, question=\"What is this person's address?\")\r\n```\r\n\r\nI'm ok with addressing this in a follow-up too, where we can extend `images` to also be an array (and expect it to be this shape). I just wanted to flag the scenario sooner rather than later.",
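Until the pipeline accepts a list of pages directly, the per-page workaround described above (run the pipeline on each page, then keep the highest-confidence answer) can be sketched in a few lines of plain Python. The result dicts and their `score`/`page` keys are assumptions about the eventual output shape for illustration, not the pipeline's final API:

```python
# Hypothetical per-page outputs, shaped like the pipeline's answer dicts.
# The "score"/"answer"/"page" keys are illustrative, not the final API.
page_results = [
    [{"answer": "123 Main St", "score": 0.42, "page": 0}],
    [{"answer": "$154.06", "score": 0.91, "page": 1}],
    [],  # a page may yield no candidate at all
]

def best_answer(page_results):
    """Flatten per-page candidates and keep the highest-confidence one."""
    candidates = [ans for page in page_results for ans in page]
    if not candidates:
        return None
    return max(candidates, key=lambda ans: ans["score"])

print(best_answer(page_results))  # the "$154.06" candidate from page 1
```

A pipeline-level implementation could perform the same aggregation internally once multi-page input is supported.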
"> @ankrgyl Here are some tests we can integrate If you're ok (feel free to modify, the important part is to have exact values in the asserts everywhere except `run_pipeline_test`.\r\n> \r\n> [2e8b01c](https://github.com/huggingface/transformers/commit/2e8b01cd5e65aa64956e0f5e56e29ea8391c3955)\r\n\r\nThanks @Narsil. I've incorporated these tests into a new test suite in the change and am working through them. I will work on expanding the tests next. It would be really helpful to land PR #18407 so I can include tests for LayoutLMv1 too.\r\n\r\nA couple things came up while I was integrating the tests:\r\n\r\n- I think there's something wrong with `hf-internal-testing/tiny-random-layoutlmv2`. Specifically, if you run the following (w/out any of these changes), you should see an error:\r\n\r\n```python\r\n\r\nfrom transformers import AutoModel, AutoProcessor\r\nfrom PIL import Image\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"hf-internal-testing/tiny-random-layoutlmv2\")\r\nmodel = AutoModel.from_pretrained(\"hf-internal-testing/tiny-random-layoutlmv2\")\r\nencoding = processor(Image.open('tests/fixtures/tests_samples/COCO/000000039769.png').convert(\"RGB\"), \"What is the meaning of life?\", return_tensors=\"pt\")\r\no = model(**encoding)\r\n\r\n# ValueError: 'p5' is not in list\r\n```\r\n\r\nHowever, if you run with `microsoft/layoutlmv2-base-uncased` instead of `hf-internal-testing/tiny-random-layoutlmv2`, the above code works. Could there be something incorrectly configured with this model?\r\n\r\n- I was able to get the `test_large_model_pt_layoutlmv2` model to structurally work, however, the model's outputs are so low confidence that the results are inconsistent from run to run (there are several answers with the minimum possible score). I think it might be worth using a fine-tuned one like `tiennvcs/layoutlmv2-base-uncased-finetuned-docvqa` with a pinned revision. Is that ok?",
 # ValueError: 'p5' is not in list">
"> # ValueError: 'p5' is not in list\r\n\r\nThe small model is just configured with far fewer layers, and `p5` is not expected to be there. (The goal is to have a tiny random model.)\r\n\r\nI don't know what `Processor` does, but the feature_extractor was working properly if I recall. There still was an error in the test, but further down in the forward pass, because some keys were missing.\r\n\r\nFeel free to modify the tiny model as you see fit locally and propose a PR on it (or use a small tiny random model you own and we'll port it back into `hf-internal-testing`).\r\n\r\nI did remove some vision layers (including `p5`); if something is failing I would consider it a bug, but I am not super familiar with this model's internals.",
"> I did remove some vision layers (including `p5`) if something is failing I would consider it a bug, but I am not super familiar with this model's internals.\r\n\r\nYes it was failing inside of the forward pass, not in the processor. I only used the processor to demonstrate that the repro did not have to do with the new code in the PR (it's not running in the test, either).\r\n\r\nI can explore the mini model but am unfortunately not very familiar with that stuff myself, either. I will take a look but may need some help if I get stuck."
] | 1,659
| 1,691
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR extends VisualQuestionAnsweringPipeline to accept `words` and `boxes` as input, passes them into the tokenizer/model (along with the question), and post-processes their `QuestionAnsweringModelOutput` response.
Fixes #18380
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18414/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18414",
"html_url": "https://github.com/huggingface/transformers/pull/18414",
"diff_url": "https://github.com/huggingface/transformers/pull/18414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18414.patch",
"merged_at": 1662572329000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18413
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18413/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18413/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18413/events
|
https://github.com/huggingface/transformers/issues/18413
| 1,325,310,941
|
I_kwDOCUB6oc5O_qPd
| 18,413
|
Tranformers documentation translation to Japanese π―π΅
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @younesbelkada as we talked about that last week",
"That's great π₯ !\r\nLinking this issue to the one on HF course: https://github.com/huggingface/course/issues/114 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, I would like to translate the 3 mdx files in the Get Started section. (index.mdx, quicktour.mdx, installation.mdx)\r\nIf there is no person in charge yet.",
"Thanks @kambehmw for your interest in this!\r\nSure, feel free to start working on it as no one is in charge of that yet! ",
"@younesbelkada \r\nThanks for the reply. Then I will be in charge of translating those three documents. Once the translation document is ready, I will make a pull request.",
"Thank you very much, looking forward to it!!",
"Hi @younesbelkada! Still working on [this PR request](https://github.com/huggingface/optimum/pull/542) but I would like to work on this issue too because I'm Japanese π―π΅ \r\n\r\nCan I work on [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)?\r\n\r\nBy the way, the link for [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) is broken. I think [this](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.mdx) is the correct one π ",
"I would like to work on this as well if it is OK.",
"Sure yes @rustinwelter , that would be great,\r\nlet us know what topic would you like to pick up for translation!",
"Thank you @younesbelkada! Let me just try [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) as I'm a still a bit nervous. But if it goes well and I feel comfortable with it, will you let me do others as well?",
"Of course yes! Don't worry all will go very well πͺ And looking forward to your contributions!",
"Thank you! I have sent my pull request! :)",
"Awesome! Thanks a lot for your contribution!",
"Hey all! As some people were interested in a place to discuss about translations, we opened a category in the [HF Discord server](http://hf.co/join/discord) with a category for internationalization and translation efforts, including a Japanese channel!"
] | 1,659
| 1,697
| 1,697
|
CONTRIBUTOR
| null |
Hi!
Let's bring the documentation to the whole Japanese-speaking community :)
Who would want to translate? Please follow the π€ [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
- Please translate using an informal tone (imagine you are talking with a friend about transformers π€).
- Please translate in a gender-neutral way.
- Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
- Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
- Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @omarespejel and @sgugger for review.
- π If you'd like others to help you with the translation, you can also post in the π€ [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) | https://github.com/huggingface/transformers/pull/21186
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx).
- [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx) | https://github.com/huggingface/transformers/pull/21241
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [x] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) | https://github.com/huggingface/transformers/pull/21084
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18413/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18413/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18412
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18412/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18412/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18412/events
|
https://github.com/huggingface/transformers/pull/18412
| 1,325,239,834
|
PR_kwDOCUB6oc48eTFk
| 18,412
|
fix: keras fit tests for segformer tf and minor refactors.
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Pinging @gante as this week's TF reviewer!",
 (question: why is test_keras_fit">
"> (question: why is test_keras_fit entirely overwritten?)\r\n\r\n1. The `TFSegFormerModel` class doesn't support the fit test since we can't compute loss on embeddings. \r\n2. The labels for the other two classes (semantic segmentation and image classification) have different label shapes. \r\n\r\nSo, it made sense to test them in isolation. ",
 > (question: why is test_keras_fit">
"> > (question: why is test_keras_fit entirely overwritten?)\r\n> \r\n> 1. The `TFSegFormerModel` class doesn't support the fit test since we can't compute loss on embeddings.\r\n\r\nThe line `if getattr(model, \"hf_compute_loss\", None):` should already take care of this case, I think.\r\n\r\n> 2. The labels for the rest of the two classes (semantic segmentation and image classification) have different label shapes.\r\n\r\nDoes the main issue come from the fact that `_prepare_for_class` in `tests/test_modeling_tf_common.py` lacks the label preparation for `segmentation`?\r\n",
"> Does the main issue come from the fact that _prepare_for_class in tests/test_modeling_tf_common.py lack the label preparation for segmentation?\r\n\r\nI think so, yes. ",
"Looks like the new `test_keras_fit()` in the base `test_modeling_tf_common` takes care of the nuances I faced when I was overriding `test_keras_fit()` (at the time of writing `modeling_tf_segformer.py`. \r\n\r\nSo, I incorporated the latest changes, bypassing the complete rewrite. \r\n\r\n@ydshieh @amyeroberts @gante up for another review. ",
"Thanks for flagging this to me!\r\n\r\n@ydshieh @gante okay to merge? ",
"Let gante push the final approval button π ",
"@sayakpaul our CI failed in the reworked test -- can you confirm that it runs correctly? :) \r\n\r\nhttps://github.com/huggingface/transformers/runs/7655675934?check_suite_focus=true",
"@gante taking a quick look [here](https://github.com/huggingface/transformers/runs/7655675934?check_suite_focus=true#step:9:139), seems like it's happening because of the second point [here](https://github.com/huggingface/transformers/pull/18412#issuecomment-1202957292). If this is the case, I will sync with @ydshieh to add support for segmentation labels in the necessary places. \r\n\r\nSounds good? "
] | 1,659
| 1,659
| 1,659
|
MEMBER
| null |
Fixes the issues as noticed in: https://github.com/huggingface/transformers/runs/7485048615?check_suite_focus=true.
I don't have access to an instance with multiple GPUs at the moment, but I figured out the root cause of the issue.
https://github.com/huggingface/transformers/blob/df5e4232f59e6fea08911eddd0adc965d1b59c15/tests/models/segformer/test_modeling_tf_segformer.py#L346
^ I wasn't calling the model on some sample inputs, which is why the weights retrieved from `get_weights()` were zero. That has been fixed in this PR.
I tested it locally in isolation with the following snippet (I acknowledge that it's not super clean):
```py
from transformers import TFSegformerForImageClassification, TFSegformerForSemanticSegmentation, SegformerConfig
import tensorflow as tf
from tests.test_modeling_tf_common import floats_tensor, ids_tensor
import numpy as np
batch_size = 13
image_size = 64
num_channels = 3
num_encoder_blocks = 4
depths = [2, 2, 2, 2]
sr_ratios = [8, 4, 2, 1]
hidden_sizes = [16, 32, 64, 128]
downsampling_rates = [1, 4, 8, 16]
num_attention_heads = [1, 2, 4, 8]
is_training = True
use_labels = True
hidden_act = "gelu"
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
initializer_range = 0.02
num_labels = 3
def get_config():
return SegformerConfig(
image_size=image_size,
num_channels=num_channels,
num_encoder_blocks=num_encoder_blocks,
depths=depths,
hidden_sizes=hidden_sizes,
num_attention_heads=num_attention_heads,
hidden_act=hidden_act,
hidden_dropout_prob=hidden_dropout_prob,
attention_probs_dropout_prob=attention_probs_dropout_prob,
initializer_range=initializer_range,
num_labels=num_labels
)
def prepare_config_and_inputs(for_semseg=True):
pixel_values = floats_tensor([batch_size, num_channels, image_size, image_size])
if for_semseg:
labels = ids_tensor([batch_size, image_size, image_size], num_labels)
else:
labels = tf.zeros((batch_size))
config = get_config()
return config, pixel_values, labels
model_classes = (TFSegformerForImageClassification, TFSegformerForSemanticSegmentation)
for model_class in model_classes:
if model_class == TFSegformerForSemanticSegmentation:
config, pixel_values, labels = prepare_config_and_inputs(for_semseg=True)
else:
config, pixel_values, labels = prepare_config_and_inputs(for_semseg=False)
input_for_model_fit = {"pixel_values": pixel_values, "labels": labels}
model = model_class(config)
model(model.dummy_inputs)
model_weights = model.get_weights()
model.compile(optimizer=tf.keras.optimizers.SGD(0.0), run_eagerly=True)
history1 = model.fit(
input_for_model_fit,
validation_data=input_for_model_fit,
steps_per_epoch=1,
validation_steps=1,
shuffle=False,
)
val_loss1 = history1.history["val_loss"][0]
label_names = {"labels"}
labels = {key: val for key, val in input_for_model_fit.items() if key in label_names}
inputs_minus_labels = {key: val for key, val in input_for_model_fit.items() if key not in label_names}
# We reinitialize the model here even though our learning rate was zero
# because BatchNorm updates weights by means other than gradient descent.
model.set_weights(model_weights)
history2 = model.fit(
inputs_minus_labels,
labels,
validation_data=(inputs_minus_labels, labels),
steps_per_epoch=1,
validation_steps=1,
shuffle=False,
)
val_loss2 = history2.history["val_loss"][0]
print(np.allclose(val_loss1, val_loss2, atol=1e-2, rtol=1e-3))
```
@amyeroberts @Rocketknight1 @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18412/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18412",
"html_url": "https://github.com/huggingface/transformers/pull/18412",
"diff_url": "https://github.com/huggingface/transformers/pull/18412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18412.patch",
"merged_at": 1659541195000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18411
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18411/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18411/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18411/events
|
https://github.com/huggingface/transformers/issues/18411
| 1,325,147,182
|
I_kwDOCUB6oc5O_CQu
| 18,411
|
`assertion failed: stride < max_len` when using tokenizer with text_pair
|
{
"login": "shakabash",
"id": 110434758,
"node_id": "U_kgDOBpUZxg",
"avatar_url": "https://avatars.githubusercontent.com/u/110434758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shakabash",
"html_url": "https://github.com/shakabash",
"followers_url": "https://api.github.com/users/shakabash/followers",
"following_url": "https://api.github.com/users/shakabash/following{/other_user}",
"gists_url": "https://api.github.com/users/shakabash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shakabash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shakabash/subscriptions",
"organizations_url": "https://api.github.com/users/shakabash/orgs",
"repos_url": "https://api.github.com/users/shakabash/repos",
"events_url": "https://api.github.com/users/shakabash/events{/privacy}",
"received_events_url": "https://api.github.com/users/shakabash/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I'd like to provide an update on my own ticket. Although I still have no idea what causes this behavior and despite probably not being relevant for most practical applications, it seems like the truncation is enforcing that the shortest of the two texts after the truncation is at least 3 tokens long. The resulting error message (see title of this ticket) is only adding to this confusion.\r\nIn the following, I'll describe the experiments I ran and the outcomes that led me to the aforementioned conclusion.\r\n\r\nFirst, I determined the lengths of the above sentences after tokenization.\r\n```\r\nlen(tokenizer(sentences[0])['input_ids']) # 17 when `add_special_tokens=True`, 15 otherwise\r\nlen(tokenizer(sentences[1])['input_ids']) # 12 when `add_special_tokens=True`, 10 otherwise\r\n```\r\n\r\nThen I played around with the `max_length` parameter to the tokenizer's `__call__` method to determine if there's a setting which lets it complete the tokenization.\r\nFor\r\n```\r\ninputs = tokenizer(\r\n sentences[0], sentences[1], truncation=\"longest_first\", return_overflowing_tokens=True, max_length=6, stride=2,\r\n add_special_tokens=True\r\n)\r\n```\r\nit was `max_length = 6` and for\r\n```\r\ninputs = tokenizer(\r\n sentences[0], sentences[1], truncation=\"longest_first\", return_overflowing_tokens=True, max_length=9, stride=2,\r\n add_special_tokens=True\r\n)\r\n```\r\nit was `max_length = 9`, showing the impact of the 3 special tokens added before, between, and after the two sequences.\r\n\r\nThen I tried the same experiment for `truncation=\"only_second\"` which has different `max_length` thresholds, as the first sequence needs to fit completely. 
This resulted in settings of `max_length = 18` and `max_length = 21` for `add_special_tokens=False` and `add_special_tokens=True`, respectively.\r\n\r\nBased on these observations, I concluded, as mentioned in the introduction, that the shorter sequence must have at least 3 tokens after the truncation for it to be successful. I'm wondering about 2 things now:\r\n1. Should the documentation be adjusted to make this explicit?\r\n2. Should the error message be improved, as it was misleading, at least to me.\r\n\r\nThanks for your support!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
transformers 4.20.1, 4.21.0 (tested with both), MacOS 12.4, Python 3.8.12
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The above occurs when using truncation and overflowing tokens with a sentence pair. Maybe I'm doing something stupid but here is the code that reproduces it:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-dot-v5')
sentences = [
"This sentence is not too long but we are going to split it anyway.",
"This sentence is shorter but will still get split.",
]
inputs = tokenizer(
sentences[0], sentences[1], truncation=True, return_overflowing_tokens=True, max_length=6, stride=2
)
```
When using the same sentences like this
```
inputs = tokenizer(
sentences, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2
)
```
all is fine.
### Expected behavior
I would expect that the truncation succeeds as max_length > stride.
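As a side note, the thresholds found by experimentation (max_length of 6/9 for `longest_first` and 18/21 for `only_second`, without/with special tokens) can be checked with plain arithmetic — a minimal pure-Python sketch, assuming the apparent rule that the shorter text must keep at least 3 tokens after truncation (the `min_max_length` helper and the 3-token floor are illustrative assumptions, not transformers' actual logic):

```python
# Sketch (no transformers needed) of the minimum max_length for which
# truncation appears to succeed, given the apparent 3-token floor on the
# shorter sequence. MIN_SHORTER = 3 is an observed assumption, not documented.
MIN_SHORTER = 3

def min_max_length(len_a, len_b, truncation, num_special_tokens=0):
    """Smallest max_length that seems to avoid the truncation error."""
    if truncation == "longest_first":
        base = 2 * MIN_SHORTER          # both sequences truncated toward the floor
    elif truncation == "only_second":
        base = len_a + MIN_SHORTER      # first text must fit in full
    else:
        raise ValueError(f"unsupported strategy: {truncation}")
    return base + num_special_tokens

# Token counts of the two example sentences without special tokens: 15 and 10.
print(min_max_length(15, 10, "longest_first"))                        # 6
print(min_max_length(15, 10, "longest_first", num_special_tokens=3))  # 9
print(min_max_length(15, 10, "only_second", num_special_tokens=3))    # 21
```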
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18411/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18410
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18410/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18410/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18410/events
|
https://github.com/huggingface/transformers/issues/18410
| 1,325,091,007
|
I_kwDOCUB6oc5O-0i_
| 18,410
|
Sharded Multi-GPU MT5 training with the Seq2SeqTrainer fails (4.21.0)
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
] |
[
"It still fails when I install `transformers` directly from the GitHub repository (as of today).\r\n\r\nHere's the traceback:\r\n```\r\nTraceback (most recent call last):\r\n File \"script.py\", line 102, in <module>\r\n main()\r\n File \"script.py\", line 98, in main\r\n trainer.train()\r\n File \"/mnt/task_runtime/transformers/src/transformers/trainer.py\", line 1506, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/mnt/task_runtime/transformers/src/transformers/trainer.py\", line 1744, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/mnt/task_runtime/transformers/src/transformers/trainer.py\", line 2492, in training_step\r\n loss.backward()\r\n File \"/miniconda/lib/python3.7/site-packages/torch/_tensor.py\", line 307, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/autograd/__init__.py\", line 156, in backward\r\n allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag\r\nRuntimeError: grad.numel() == bucket_view.numel()INTERNAL ASSERT FAILED at \"/opt/conda/conda-bld/pytorch_1640811797118/work/torch/csrc/distributed/c10d/reducer.cpp\":328, please report a bug to PyTorch. 
\r\n 0%| | 0/10000 [00:00<?, ?it/s]\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 48181) of binary: /miniconda/bin/python\r\nTraceback (most recent call last):\r\n File \"/miniconda/bin/torchrun\", line 33, in <module>\r\n sys.exit(load_entry_point('torch==1.10.2', 'console_scripts', 'torchrun')())\r\n File \"/miniconda/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 345, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/distributed/run.py\", line 719, in main\r\n run(args)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/distributed/run.py\", line 713, in run\r\n )(*cmd_args)\r\n File \"/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py\", line 131, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py\", line 261, in launch_agent\r\n failures=result.failures,\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n============================================================\r\nscript.py FAILED\r\n------------------------------------------------------------\r\nFailures:\r\n[1]:\r\n time : 2022-08-02_15:26:28\r\n host : bolt-imq45r3c3y-8dfzr73qqa.bolt-pods.turi-bolt.svc.int.usmsc39.applecloud.io\r\n rank : 1 (local_rank: 1)\r\n exitcode : 1 (pid: 48182)\r\n error_file: <N/A>\r\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\n------------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2022-08-02_15:26:28\r\n host : bolt-imq45r3c3y-8dfzr73qqa.bolt-pods.turi-bolt.svc.int.usmsc39.applecloud.io\r\n rank : 0 (local_rank: 0)\r\n exitcode : 1 (pid: 48181)\r\n error_file: <N/A>\r\n traceback : To enable traceback see: 
https://pytorch.org/docs/stable/elastic/errors.html\r\n============================================================\r\n```",
"Related issue: https://discuss.pytorch.org/t/multi-gpu-model-parallelism-device-error/117854/9\r\n\r\nThis issue seems to be related to how DDP is set up in a constructor somewhere, probably in the trainer's constructor when adding DDP.",
"Hello @shermansiu , I am unable to reproduce the error with transformers==4.22.0.dev0 main branch and fairscale==0.4.6. `sharded_ddp` has nothing to do with DeepSpeed. I get another error and it is unrelate with the integration. Therefore, please open the issue with `Fairscale` and follow it there. The issue I face is below which is different from the one you face:\r\n\r\n```bash\r\nTraceback (most recent call last): \r\n File \"script.py\", line 109, in <module> \r\n main()\r\n File \"script.py\", line 103, in main\r\n trainer.train()\r\n File \"/home/sourab/transformers/src/transformers/trainer.py\", line 1502, in train\r\n return inner_training_loop(\r\n File \"/home/sourab/transformers/src/transformers/trainer.py\", line 1744, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/home/sourab/transformers/src/transformers/trainer.py\", line 2492, in training_step\r\n loss.backward()\r\n File \"/home/sourab/dev/lib/python3.8/site-packages/torch/_tensor.py\", line 396, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"/home/sourab/dev/lib/python3.8/site-packages/torch/autograd/__init__.py\", line 173, in backw\r\nard\r\n Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\nRuntimeError: Function SplitWithSizesBackward0 returned an invalid gradient at index 0 - got [582401\r\n280] but expected shape compatible with [291200640]\r\n```\r\n\r\nAlso, if you want to leverage Fully Sharded Data Parallelism, you can use the production focused PyTorch FSDP integration in transformers by having following args:\r\n```diff\r\nargs = Seq2SeqTrainingArguments(\r\n \"script_debug\",\r\n per_device_train_batch_size=4,\r\n per_device_eval_batch_size=4,\r\n fp16=False,\r\n- sharded_ddp=[\"zero_dp_3\"],\r\n+ fsdp=[\"full_shard\", \"auto_wrap\"],\r\n+ fsdp_transformer_layer_cls_to_wrap=\"T5Block\",\r\n max_steps=100,\r\n logging_steps=5000,\r\n 
save_steps=5000\r\n )\r\n```\r\n\r\nwhich gives below output:\r\n```bash\r\n***** Running training *****\r\n Num examples = 500\r\n Num Epochs = 2\r\n Instantaneous batch size per device = 4\r\n Total train batch size (w. parallel, distributed & accumulation) = 8\r\n Gradient Accumulation steps = 1\r\n Total optimization steps = 100\r\nAutomatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\n\r\n...\r\n\r\n\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 100/100 [00:26<00:00, 3.72it/s]\r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\nFullyShardedDataParallel( \r\n (_fsdp_wrapped_module): FlattenParamsWrapper(\r\n (_fpw_module): MT5ForConditionalGeneration(\r\n (shared): Embedding(250112, 768)\r\n (encoder): T5Stack( \r\n (embed_tokens): Embedding(250112, 768)\r\n (block): ModuleList(\r\n (0): FullyShardedDataParallel(\r\n (_fsdp_wrapped_module): FlattenParamsWrapper(\r\n (_fpw_module): T5Block(\r\n (layer): ModuleList(\r\n (0): T5LayerSelfAttention(\r\n (SelfAttention): T5Attention(\r\n (q): Linear(in_features=768, out_features=768, bias=False)\r\n (k): Linear(in_features=768, out_features=768, bias=False)\r\n (v): Linear(in_features=768, out_features=768, bias=False)\r\n (o): Linear(in_features=768, out_features=768, bias=False)\r\n (relative_attention_bias): Embedding(32, 12)\r\n )\r\n (layer_norm): T5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (1): T5LayerFF(\r\n (DenseReluDense): T5DenseGatedActDense(\r\n\r\n...\r\n\r\n```\r\n\r\nOn transformers[deepspeed]==4.20.1, I don't the issue as you mentioned. I will look into it further by this week or next.",
"Thanks! The weird thing is that changing the fairscale version doesn't affect whether the bug appears.\r\n\r\nAs you just said, I can make the bug appear by first running `pip install transformers==4.21.0` and disappear by running `pip install transformers==4.20.1`. I'll file a bug report in the FairScale repository anyway.",
"I was able to reproduce your `RuntimeError: Function SplitWithSizesBackward0 returned an invalid gradient at index 0 - got [582401280] but expected shape compatible with [145600320]` error by upgrading PyTorch (cudatoolkit=11.3) from `1.10.2` to `1.12.0`.\r\n\r\nI think it's still the same bug because running `torchrun --nproc_per_node=1 script.py` with `pytorch==1.12.0` works.\r\n\r\nAfter upgrading PyTorch to 1.12.0, I applied your FSDP patch and the code started to work. Thanks!",
"(FSDP is only available for PyTorch versions 1.12 and later)",
"Hello @shermansiu , I found the bug and raised above PR which should fix it. Can you try the above PR and confirm?",
"> (FSDP is only available for PyTorch versions 1.12 and later)\r\n\r\nYes",
"Post applying PR, the output logs for `sharded_ddp`:\r\n\r\n```bash\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 100/100 [00:25<00:00, 3.93it/s]\r\n \r\nTraining completed. Do not forget to share your model on huggingface.co/models =) \r\n \r\n \r\n \r\n{'train_runtime': 26.4257, 'train_samples_per_second': 30.274, 'train_steps_per_second': 3.784, 'tra\r\nin_loss': 17.26375, 'epoch': 1.59} \r\nFullyShardedDataParallel( \r\n world_size=2, flatten_parameters=True, mixed_precision=False, \r\n (_fsdp_wrapped_module): FlattenParamsWrapper( \r\n (_fpw_module): MT5ForConditionalGeneration( \r\n (shared): Embedding(250112, 768)\r\n (encoder): T5Stack( \r\n (embed_tokens): Embedding(250112, 768)\r\n (block): ModuleList(\r\n (0): T5Block(\r\n (layer): ModuleList(\r\n (0): T5LayerSelfAttention(\r\n\r\n...\r\n```\r\n",
"Yes, I can confirm that it works!\r\n\r\n```\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'train_runtime': 48.4985, 'train_samples_per_second': 32.991, 'train_steps_per_second': 2.062, 'train_loss': 18.418689575195312, 'epoch': 3.12}\r\n100%|ββββββββββββββββββββββ| 100/100 [00:48<00:00, 2.06it/s]\r\n```\r\n\r\nI guess I don't need to file a FairScale issue after all!",
"Wait... am I supposed to keep the issue open until the PR is merged?",
"Probably, I suppose.\r\n> [pacman100](https://github.com/pacman100) linked a pull request [1 hour ago ](https://github.com/huggingface/transformers/issues/18410#ref-pullrequest-1326351330)that will close this issue"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
### System Info
transformers version: 4.21.0
Platform: Linux
Python version: 3.7.6
Huggingface_hub version: 0.8.1
PyTorch version (GPU?): 1.10.2 (Yes)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes (2+ Tesla V100)
Using distributed or parallel set-up in script?: Yes
When trying to fine-tune an MT5ForConditionalGeneration model using a Seq2SeqTrainer while using multiple GPUs, I get an InternalAssert error. I am running the script using `torchrun --nproc_per_node=$NUM_GPUS script.py`. The issue appears when `$NUM_GPUS` is greater than 1, and only when the argument `sharded_ddp: ["zero_dp_3"]` is passed to the trainer.
```
Traceback (most recent call last):
File "script.py", line 475, in <module>
fire.Fire(main)
File "/miniconda/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/miniconda/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
target=component.__name__)
File "/miniconda/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "script.py", line 447, in main
train_model(model, tokenizer, cli_arguments)
File "script.py", line 357, in train_model
trainer.train()
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1502, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1740, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2488, in training_step
loss.backward()
File "/miniconda/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/miniconda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: grad.numel() == bucket_view.numel()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1640811797118/work/torch/csrc/distributed/c10d/reducer.cpp":328, please report a bug to PyTorch.
0%| | 0/100000 [00:06<?, ?it/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 660 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 662 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 663 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 661) of binary: /miniconda/bin/python
Traceback (most recent call last):
File "/miniconda/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==1.10.2', 'console_scripts', 'torchrun')())
File "/miniconda/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/miniconda/lib/python3.7/site-packages/torch/distributed/run.py", line 719, in main
run(args)
File "/miniconda/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
script.py FAILED
------------------------------------------------------------
```
The script fails on `transformers[deepspeed]==4.21.0` but there are no issues on `transformers[deepspeed]==4.20.1`. The versions used are `deepspeed==0.6.5` or `deepspeed==0.6.7` and `fairscale==0.4.6`, and this code was run on a Linux machine.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
# The simplified contents of script.py
# Running torchrun --nproc_per_node=1 script.py should work
# Running torchrun --nproc_per_node=4 script.py should fail with a RuntimeError: grad.numel() == bucket_view.numel()INTERNAL ASSERT FAILED error.
from __future__ import annotations
import functools
import typing as tp
import datasets
import transformers
from transformers import (
DataCollatorForSeq2Seq,
PreTrainedTokenizer,
Seq2SeqTrainingArguments,
Seq2SeqTrainer,
)
increment_en = [
{"input": "One", "target": "Two"},
{"input": "Three", "target": "Four"},
{"input": "Five", "target": "Six"},
{"input": "Seven", "target": "Eight"},
{"input": "Nine", "target": "Ten"},
]
increment_en = increment_en * 100
def lod_to_dol(list_of_dicts: tp.List[tp.Dict[str, tp.Any]]) -> tp.Dict[str, list]:
dict_of_lists = {
key: [dct[key] for dct in list_of_dicts] for key in list_of_dicts[0]
}
return dict_of_lists
increment_en = lod_to_dol(increment_en)
def preprocess_function_(
examples,
tokenizer: PreTrainedTokenizer,
max_input_length: int,
max_target_length: int,
):
inputs = examples["input"]
targets = examples["target"]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(targets, max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
def main():
tokenizer = transformers.MT5Tokenizer.from_pretrained("google/mt5-base")
model = transformers.MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
args = Seq2SeqTrainingArguments(
"script_debug",
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
fp16=False,
push_to_hub=False,
sharded_ddp=["zero_dp_3"],
max_steps=10000,
logging_steps=5000,
save_steps=5000
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding=True)
dataset = datasets.DatasetDict(
{
"train": datasets.Dataset.from_dict(increment_en),
"test": datasets.Dataset.from_dict(increment_en),
}
)
preprocess_function = functools.partial(
preprocess_function_,
tokenizer=tokenizer,
max_input_length=512,
max_target_length=512
)
processed_ds = dataset.map(preprocess_function, batched=True)
processed_ds.set_format(
type="torch", columns=["input_ids", "attention_mask", "labels"]
)
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=processed_ds["train"],
eval_dataset=processed_ds["test"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
if __name__ == "__main__":
main()
```
### Expected behavior
The training code should not crash.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18410/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18409
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18409/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18409/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18409/events
|
https://github.com/huggingface/transformers/issues/18409
| 1,324,976,233
|
I_kwDOCUB6oc5O-Yhp
| 18,409
|
Fine-tuning a pretrained model did not follow as expected from the blog posting
|
{
"login": "changyeli",
"id": 9058204,
"node_id": "MDQ6VXNlcjkwNTgyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changyeli",
"html_url": "https://github.com/changyeli",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changyeli/subscriptions",
"organizations_url": "https://api.github.com/users/changyeli/orgs",
"repos_url": "https://api.github.com/users/changyeli/repos",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"received_events_url": "https://api.github.com/users/changyeli/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for feature requests and bugs (in the library) only. It's hard to know what went wrong just from your code since we don't have access to the files you use, but my guess would be that your labels are floats instead of ints, so by default the model thinks you have one-hot encoded labels instead of numbers.",
"Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-5.13.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik, @sgugger, @stevhliu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was following the [blog](https://huggingface.co/docs/transformers/training) with BERT for sequence classification on my own dataset. Here is the snippet I used for the fine-tuning:
```python
def tokenize_function(element):
"""
batchify the tokenization
:param element: a fragment of the dataset
:type element: transformers.Dataset
"""
return tokenizer(
element["tran"],
return_attention_mask=True,
add_special_tokens=True,
truncation=True,
max_length=CONTEXT_LENGTH,
padding=True)
def prepare_dataset(file_path):
"""
tokenize the dataset
:param file_path: the location to the file
:type file_path: str
"""
dt = pd.read_csv(file_path)
# preprocessing omitted here
dt = dt[["file", "tran", "label"]]
dt = dt.groupby(["file", "label"])["tran"].apply(". ".join).reset_index()
dt["tran"] = dt["tran"].str.lower()
dt = Dataset.from_pandas(dt)
tokenized_dt = dt.map(tokenize_function, batched=True)
return tokenized_dt
def compute_metrics(eval_pred):
"""
compute accuracy for the fine-tuned BERT model
:param eval_pred: _description_
:type eval_pred: _type_
:return: _description_
:rtype: _type_
"""
metric = load_metric("accuracy")
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenized_train = prepare_dataset(
"training_set.csv")
tokenized_test = prepare_dataset(
"test.csv")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
print(tokenized_train[4].keys())
model = AutoModelForSequenceClassification.from_pretrained(
"bert-base-uncased", num_labels=2)
training_args = TrainingArguments(
output_dir="../outputs/",
num_train_epochs=EPOCHS,
warmup_steps=500,
fp16=True,
learning_rate=1e-4,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
evaluation_strategy='epoch',
save_strategy="epoch",
logging_strategy="epoch",
# prediction_loss_only=True,
do_train=True,
do_eval=True,
max_grad_norm=1.0,
seed=RANDOM_SEED,
data_seed=RANDOM_SEED,
save_total_limit=1,
load_best_model_at_end=True,
report_to="none"
)
if training_args.do_train:
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
tokenizer=tokenizer,
train_dataset=tokenized_train,
eval_dataset=tokenized_test,
compute_metrics=compute_metrics
)
trainer.train()
```
The output of `print(tokenized_train[4].keys())` is:
```python
dict_keys(['tran', 'label', 'input_ids', 'token_type_ids', 'attention_mask'])
```
### Expected behavior
The script closely follows what the blog shows, so I expected it to begin the fine-tuning process. Instead, I got this error: `ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 2]))`.
I checked some online resources; they suggested something like `torch.unsqueeze()`, but I wonder how to make that happen inside the `trainer`. Also, I'm a little confused - did I miss something from the blog?
Thanks in advance!
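For what it's worth, one common cause of this exact shape mismatch is float-typed labels, which make the model infer a multi-label (one-hot) problem type instead of single-label classification — a minimal stdlib sketch of that idea, where `inferred_problem_type` is an illustrative approximation and not transformers' actual inference code:

```python
# Sketch of how a float label column can flip the inferred problem type.
# This branch is an illustrative approximation, not transformers' code.
def inferred_problem_type(labels):
    if all(isinstance(label, int) for label in labels):
        # CrossEntropyLoss path: targets have shape [batch]
        return "single_label_classification"
    # BCEWithLogitsLoss path: targets expected with shape [batch, num_labels]
    return "multi_label_classification"

float_labels = [0.0, 1.0, 1.0, 0.0]
print(inferred_problem_type(float_labels))   # multi_label_classification

int_labels = [int(label) for label in float_labels]  # the fix: cast to int
print(inferred_problem_type(int_labels))     # single_label_classification
```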
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18409/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18408
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18408/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18408/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18408/events
|
https://github.com/huggingface/transformers/pull/18408
| 1,324,831,639
|
PR_kwDOCUB6oc48c8gp
| 18,408
|
fix: create a copy for tokenizer object
|
{
"login": "YBooks",
"id": 14153578,
"node_id": "MDQ6VXNlcjE0MTUzNTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/14153578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YBooks",
"html_url": "https://github.com/YBooks",
"followers_url": "https://api.github.com/users/YBooks/followers",
"following_url": "https://api.github.com/users/YBooks/following{/other_user}",
"gists_url": "https://api.github.com/users/YBooks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YBooks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YBooks/subscriptions",
"organizations_url": "https://api.github.com/users/YBooks/orgs",
"repos_url": "https://api.github.com/users/YBooks/repos",
"events_url": "https://api.github.com/users/YBooks/events{/privacy}",
"received_events_url": "https://api.github.com/users/YBooks/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@YBooks , this PR creates another issue since deepcopy of tokenizer object is not possible when a Custom Pretokenizer is used. \r\n\r\n```python \r\ntokenizer.pre_tokenizer = Custom()\r\n```\r\n\r\nSee open issue in tokenizers here: https://github.com/huggingface/tokenizers/issues/581\r\n\r\nI suggest using another serialization/loading scheme instead of copy.",
"Would like to open a PR with a fix?",
"Yes, I will give it a try. "
] | 1,659
| 1,663
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
To not have the object in the PreTrainedTokenizerFast and not impact its padding/truncating attribute we can just have a deep copy of the object
Fixes # ([18406](https://github.com/huggingface/transformers/issues/18406))
## Who can review?
@LysandreJik @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18408/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18408",
"html_url": "https://github.com/huggingface/transformers/pull/18408",
"diff_url": "https://github.com/huggingface/transformers/pull/18408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18408.patch",
"merged_at": 1659382332000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18407
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18407/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18407/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18407/events
|
https://github.com/huggingface/transformers/pull/18407
| 1,324,828,421
|
PR_kwDOCUB6oc48c70K
| 18,407
|
Add LayoutLMForQuestionAnswering model
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Narsil I've left a few TODOs -- (1) supporting tensorflow, (2) filling in docs, (3) filling in tests -- which I'll gladly do. I just wanted to post sooner than later to start getting feedback on the approach.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok, for this part I will let @NielsRogge comment as I am not the best person to answer how it should be done.",
"@NielsRogge @Narsil gentle nudge on this PR. I plan to fix the tests + write docs as a next step but wanted to get some quick feedback about whether this approach is acceptable for including `LayoutLMForQuestionAnswering`. Appreciate your consideration!",
"Thanks @NielsRogge!\r\n\r\nWe're discussing the pipeline part in [pull request 18414](https://github.com/huggingface/transformers/pull/18414). Would love your feedback there too!",
"@NielsRogge @Narsil I just updated it to include tests+documentation. If it's okay, I'd like to defer the tensorflow implementation for now (due to some personal lack of familiarity). I am failing a consistency check, however, as a result:\r\n\r\n```\r\n File \"/Users/ankur/projects/transformers/transformers/utils/check_inits.py\", line 298, in <module>\r\n check_all_inits()\r\n File \"/Users/ankur/projects/transformers/transformers/utils/check_inits.py\", line 238, in check_all_inits\r\n raise ValueError(\"\\n\\n\".join(failures))\r\nValueError: Problem in src/transformers/models/layoutlm/__init__.py, both halves do not define the same objects.\r\nDifferences for tf backend:\r\n LayoutLMForQuestionAnswering in _import_structure but not in TYPE_HINT.\r\n ```\r\n \r\n Could you help me resolve this?",
"@NielsRogge @Narsil, I went ahead and implemented support for TensorFlow and the checks are now passing. Would appreciate a re-review.",
"@NielsRogge gentle nudge on this PR :)",
"> \r\n\r\nThanks @NielsRogge! I just updated with your comments, added to the list of doc tests, and verified locally that they are (now) passing.",
"Up to you guys on that one! ",
"@NielsRogge @Narsil I did some thinking over the weekend and think it makes sense to include them in `AutoModelForQuestionAnswering` to be consistent with `LayoutLMv2` and `v3`. We can move around the auto mapping in PR #18414.\r\n\r\nLet me know if you have any concerns with that thinking. If not, I'll proceed with merging the change in.",
"@Narsil @NielsRogge did you have any further questions on this PR, or is it ready to merge in?",
"Also happy to hold off, since we have some traction with PR #18414, and just wait to include it in the `AutoModelForDocumentQuestionAnswering` there?",
"Hi @Narsil @NielsRogge just wanted to bump on this -- based on the most recent round of comments on PR #18414, we removed `LayoutLMv2ForQuestionAnswering` and `LayoutLMv3ForQuestionAnswering` from `AutoModelForQuestionAnswering`, so I think it makes sense to not add `LayoutLMForQuestionAnswering` to the auto mapping, if we are about to remove it.\r\n\r\nI will go ahead and remove it and update the PR. Please let me know if it's ready to move forward. It would be very helpful to rebase PR #18414 against it for testing purposes.",
"This PR seems almost ready, I'd just update:\r\n* all code examples to use either `LayoutLMTokenizer` or `AutoTokenizer`\r\n* add a working code example of `LayoutLMForQuestionAnswering`/`TFLayoutLMForQuestionAnswering`, with an expected output",
"I actually don't have a pre-trained `TFLayoutLMForQuestionAnswering` (i.e. one with tensorflow weights), but I could use the same code and just reference the base model?\r\n\r\nI'll make the other updates now.",
"> I actually don't have a pre-trained TFLayoutLMForQuestionAnswering (i.e. one with tensorflow weights), but I could use the same code and just reference the base model?\r\n\r\nThe Transformers library makes sure that any PyTorch model also works in the other framework, and vice versa, due to the same variable names being used. So you can just do:\r\n```\r\nfrom transformers import TFLayoutLMForQuestionAnswering\r\n\r\nmodel = TFLayoutLMForQuestionAnswering.from_pretrained(\"impira/layoutlm-document-qa\", from_pt=True)\r\n```\r\n\r\nand it should work (this should also normally be tested with the PT-TF cross equivalence test). You can then perhaps do `model.push_to_hub(\"impira/layoutlm-document-qa\")` to upload the TF weights to the same repo. This way, you can remove the `from_pt` statement.",
"Wow, that is super cool! Okay let me give it a try.",
"Ok @NielsRogge I've made all of these changes. It was a really nice idea to put a fully working example in there. I've also pushed the TF weights to the hub.",
"@NielsRogge @Narsil the test failures are now occurring because LayoutLMForQuestionAnswering is not in any sort of auto mapping (for example, `tests/test_modeling_tf_common.py:_prepare_for_class` uses the auto mapping to determine what the expected output labels are. I'm not sure what the best way to proceed with this is. Perhaps we include it in the QuestionAnswering mapping just to keep the commit (a) consistent with LayoutLMv2-3 and (b) passing tests, and then solve the auto mapping issue properly in PR #18414?",
"@ankrgyl normally if you run `make fixup` and it complains about a model not being in any auto mapping, you can add it to utils/check_repo.py in the IGNORE_NON_AUTO_CONFIGURED mapping.\r\n\r\nThen, in #18414, you can remove it from this mapping and add it to the auto mapping instead.",
"> @ankrgyl normally if you run `make fixup` and it complains about a model not being in any auto mapping, you can add it to utils/check_repo.py in the IGNORE_NON_AUTO_CONFIGURED mapping.\r\n\r\n@NielsRogge I actually have already added it here, and it still fails the tests :(. The reason is that I've included it in `tests/models/layoutlm/test_modeling_layoutlm.py:LayoutLMModelTest.all_model_classes`. I feel like there's a tradeoff here: I can either exclude it from all tests, or put it into the QuestionAnswering auto class and then remove it shortly in PR #18414. Let me know what you think is best.",
"Following up on this @NielsRogge @Narsil @sgugger, could you please advise on how to proceed? It seems that if something _has_ tests then it _must_ be in an Auto model list (the failing tests are the due to `LayoutLMForQuestionAnswering` not being part of any Auto model). \r\n\r\nPlease correct me if I'm wrong, but my understanding is that we have the following options for how to proceed:\r\n\r\n1. Add `LayoutLMForQuestionAnswering` to the `AutoModelForQuestionAnswering` pipeline, which will make the tests pass. I'll remove it shortly after in https://github.com/huggingface/transformers/pull/18414.\r\n2. Remove all tests about `LayoutLMForQuestionAnswering` and add them in https://github.com/huggingface/transformers/pull/18414.\r\n3. Add `AutoModelForDocumentQuestionAnswering` in this PR, and then simply extend/use it in PR #18414.\r\n\r\n\r\n",
"To make all tests pass, you need to overwrite the `_prepare_for_class` method defined in `test_modeling_common.py`, to make sure the targets are prepared correctly for `LayoutLMForQuestionAnswering`. It can be defined as follows in `test_modeling_layoutlm.py`:\r\n```\r\ndef _prepare_for_class(self, inputs_dict, model_class, return_labels=False):\r\n inputs_dict = copy.deepcopy(inputs_dict)\r\n if return_labels:\r\n if model_class in get_values(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING):\r\n inputs_dict[\"labels\"] = torch.zeros(\r\n self.model_tester.batch_size, dtype=torch.long, device=torch_device\r\n )\r\n elif model_class in [\r\n *get_values(MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING),\r\n *get_values(MODEL_FOR_MASKED_LM_MAPPING),\r\n ]:\r\n inputs_dict[\"labels\"] = torch.zeros(\r\n (self.model_tester.batch_size, self.model_tester.seq_length), dtype=torch.long, device=torch_device\r\n )\r\n elif model_class.__name__ == \"LayoutLMForQuestionAnswering\":\r\n inputs_dict[\"start_positions\"] = torch.zeros(\r\n self.model_tester.batch_size, dtype=torch.long, device=torch_device\r\n )\r\n inputs_dict[\"end_positions\"] = torch.zeros(\r\n self.model_tester.batch_size, dtype=torch.long, device=torch_device\r\n )\r\n \r\n return inputs_dict\r\n```\r\n\r\nThis can then be removed once the model is added to an Auto mapping.\r\n\r\nThe same needs to happen for the TF model.",
"Regarding the failing tests - you might need to rebase with the main branch. \r\n\r\nAlso note that sometimes, tests which are totally unrelated to your PR fail, in which case you can ignore them.",
"Thanks @NielsRogge just rebased ",
"@NielsRogge I believe all outstanding comments have been addressed. Are we ready to merge this in?",
"I've pinged @sgugger for a final review, however he's off this week so will be merged next week :)",
"Thank you for merging it in! @LysandreJik or @NielsRogge are you planning to do any sort of announcement? I'm asking because we're going to publicly announce the project we've been working on (https://github.com/impira/docquery) in the next few days, and it would be great to collaborate.",
"I'd like to communicate on that once the pipeline is merged, because the Space above is using that right?\r\n\r\nAlso, the doc tests don't seem to pass:\r\n\r\n```\r\n_ [doctest] transformers.models.layoutlm.modeling_layoutlm.LayoutLMForQuestionAnswering.forward _\r\n1328 ... bbox.append([0] * 4)\r\n1329 >>> encoding[\"bbox\"] = torch.tensor([bbox])\r\n1330 \r\n1331 >>> word_ids = encoding.word_ids(0)\r\n1332 >>> outputs = model(**encoding)\r\n1333 >>> loss = outputs.loss\r\n1334 >>> start_scores = outputs.start_logits\r\n1335 >>> end_scores = outputs.end_logits\r\n1336 >>> start, end = word_ids[start_scores.argmax(-1)], word_ids[end_scores.argmax(-1)]\r\n1337 >>> print(\" \".join(words[start : end + 1]))\r\nExpected:\r\n M. Hamann P. Harper, P. Martinez\r\nGot:\r\n J. S. Wigand\r\n\r\n/__w/transformers/transformers/src/transformers/models/layoutlm/modeling_layoutlm.py:1337: DocTestFailure\r\n_ [doctest] transformers.models.layoutlm.modeling_tf_layoutlm.TFLayoutLMForQuestionAnswering.call _\r\n[15](https://github.com/huggingface/transformers/runs/8125145111?check_suite_focus=true#step:9:16)53 ... bbox.append([0] * 4)\r\n1554 >>> encoding[\"bbox\"] = tf.convert_to_tensor([bbox])\r\n1555 \r\n1556 >>> word_ids = encoding.word_ids(0)\r\n1557 >>> outputs = model(**encoding)\r\n1558 >>> loss = outputs.loss\r\n1559 >>> start_scores = outputs.start_logits\r\n1560 >>> end_scores = outputs.end_logits\r\n1561 >>> start, end = word_ids[tf.math.argmax(start_scores, -1)[0]], word_ids[tf.math.argmax(end_scores, -1)[0]]\r\n1562 >>> print(\" \".join(words[start : end + 1]))\r\nExpected:\r\n M. Hamann P. Harper, P. Martinez\r\nGot:\r\n <BLANKLINE>\r\n```"
] | 1,659
| 1,662
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a `LayoutLMForQuestionAnswering` class that follows the implementations of `LayoutLMv2ForQuestionAnswering` and `LayoutLMv3ForQuestionAnswering`, so that `LayoutLM` can be fine-tuned for the question answering task.
Fixes #18380
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: https://github.com/huggingface/transformers/issues/18380
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18407/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18407",
"html_url": "https://github.com/huggingface/transformers/pull/18407",
"diff_url": "https://github.com/huggingface/transformers/pull/18407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18407.patch",
"merged_at": 1661933133000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18406
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18406/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18406/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18406/events
|
https://github.com/huggingface/transformers/issues/18406
| 1,324,820,590
|
I_kwDOCUB6oc5O9yhu
| 18,406
|
PreTrainedTokenizerFast with tokenizer object is acting on original tokenizer object
|
{
"login": "YBooks",
"id": 14153578,
"node_id": "MDQ6VXNlcjE0MTUzNTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/14153578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YBooks",
"html_url": "https://github.com/YBooks",
"followers_url": "https://api.github.com/users/YBooks/followers",
"following_url": "https://api.github.com/users/YBooks/following{/other_user}",
"gists_url": "https://api.github.com/users/YBooks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YBooks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YBooks/subscriptions",
"organizations_url": "https://api.github.com/users/YBooks/orgs",
"repos_url": "https://api.github.com/users/YBooks/repos",
"events_url": "https://api.github.com/users/YBooks/events{/privacy}",
"received_events_url": "https://api.github.com/users/YBooks/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @SaulLu ",
"Hi @YBooks\r\n\r\nThank you very much for the detailed issue :hugs: ! \r\n\r\nI see that you have already proposed a fix that has been merged and that solves the problem you are pointing out. If you are happy with it, is it ok if we close this issue?",
"Hey @SaulLu \r\nYes sure. My pleasure",
"@YBooks , @SaulLu , @sgugger can we reopen this issue, since https://github.com/huggingface/transformers/pull/18408 creates another one ?\r\n"
] | 1,659
| 1,663
| 1,660
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
- To reproduce this error, we can create a tokenizer and try to wrap it in the PreTrainedTokenizerFast
```
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers
data = [
"My first sentence",
"My second sentence",
"My third sentence is a bit longer",
"My fourth sentence is longer than the third one"
]
tokenizer = Tokenizer(models.WordLevel(unk_token="<unk>"))
trainer = trainers.WordLevelTrainer(vocab_size=10, special_tokens=["<unk>", "<pad>"])
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
tokenizer.train_from_iterator(data, trainer=trainer)
tokenizer.enable_padding(pad_token="<pad>", pad_id=tokenizer.token_to_id("<pad>"))
tokenizer.enable_truncation(max_length=5)
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```
This gives an output with len 5 and an explicit padding object
- In the other hand if we load our tokenizer in the PreTrainedTokenizerFast class and print the same thing like before.
```
from transformers import PreTrainedTokenizerFast
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
fast_tokenizer(data)
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```
This gives an output with len > 5 and None in padding
### Expected behavior
The expected behavior should be the same with tokenizer before loading it in the PreTrainedTokenizerFast wrapper. It should not impact the padding and the truncation part
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18406/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18405
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18405/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18405/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18405/events
|
https://github.com/huggingface/transformers/issues/18405
| 1,324,716,678
|
I_kwDOCUB6oc5O9ZKG
| 18,405
|
Incorrect assertion in pipeline test test_dbmdz_english()
|
{
"login": "davidbenton",
"id": 1603279,
"node_id": "MDQ6VXNlcjE2MDMyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1603279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidbenton",
"html_url": "https://github.com/davidbenton",
"followers_url": "https://api.github.com/users/davidbenton/followers",
"following_url": "https://api.github.com/users/davidbenton/following{/other_user}",
"gists_url": "https://api.github.com/users/davidbenton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidbenton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidbenton/subscriptions",
"organizations_url": "https://api.github.com/users/davidbenton/orgs",
"repos_url": "https://api.github.com/users/davidbenton/repos",
"events_url": "https://api.github.com/users/davidbenton/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidbenton/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@davidbenton \r\n~~Thanks for the info, I was able to reproduce on torch `1.11.0` but this is fixed on `1.12.0`.~~\r\n\r\n~~We're aware of some modifications within the `nn.Linear` between those two versions. The errors are relatively minor actually, but since this is a random model (meaning not trained on real data) it's much more sensitive to those tiny fluctuations. \r\nThat's why the test fails on `1.11.0` while it works on `1.12.0`.~~\r\n\r\n\r\nEDIT: \r\nI though I had tested the test and it worked on `1.12.0` but if my explanation was correct, then a diff should have had occurred on the test itself.\r\nI found out that this commit touch the file: 95113d136508dfef192a29d23344e941735d1a1d\r\n\r\nThis commit actually changed the string, and makes this slow test fail.\r\nI am guessing this is an automated change. Looked at the diff, it seems only this slow test was affected.\r\nWe can either rollback the string, or update the values.\r\n\r\nSo the `1.11.0` vs `1.12.0` seems like it doesn't explain the difference here.\r\n\r\n@ydshieh Maybe can you confirm ? ",
"~~Hmm, it fails for me on 1.12.0 also (CPU, env otherwise same as above). How could `end: 24` be correct for a string with length 20? Those offsets should be in `sentence` indices, right?~~\r\n\r\n I see you're on the track now.",
"Pinging @sgugger too here.\r\n\r\nThanks for reporting @davidbenton !",
"Since that commit, we do have this failure in Slack CI report. From the changes, I think it try to fix all `the the`. So reverting the change on the expected value is good to me.\r\n\r\nThank you, @davidbenton ",
"Yes, it slipped through the crack as the contributor was trying to fix typos and I didn't pay attention this one was intentional."
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: macOS-12.4-x86_64-i386-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: n
- Using distributed or parallel set-up in script?: n
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`RUN_SLOW=1 RUN_PIPELINE_TESTS=yes pytest tests/pipelines/test_pipelines_token_classification.py::TokenClassificationPipelineTests::test_dbmdz_english`
Fails with two notable diffs: the "UN" entity offsets in the assertion don't match the offsets in the input string itself (off by two characters), and the `index` doesn't match. Output:
```
======================================= FAILURES ========================================
__________________ TokenClassificationPipelineTests.test_dbmdz_english __________________
self = <tests.pipelines.test_pipelines_token_classification.TokenClassificationPipelineTests testMethod=test_dbmdz_english>
@require_torch
@slow
def test_dbmdz_english(self):
# Other sentence
NER_MODEL = "dbmdz/bert-large-cased-finetuned-conll03-english"
model = AutoModelForTokenClassification.from_pretrained(NER_MODEL)
tokenizer = AutoTokenizer.from_pretrained(NER_MODEL, use_fast=True)
sentence = """Enzo works at the UN"""
token_classifier = pipeline("ner", model=model, tokenizer=tokenizer)
output = token_classifier(sentence)
> self.assertEqual(
nested_simplify(output),
[
{"entity": "I-PER", "score": 0.997, "word": "En", "start": 0, "end": 2, "index": 1},
{"entity": "I-PER", "score": 0.996, "word": "##zo", "start": 2, "end": 4, "index": 2},
{"entity": "I-ORG", "score": 0.999, "word": "UN", "start": 22, "end": 24, "index": 7},
],
)
E AssertionError: Lists differ: [{'en[24 chars] 0.998, 'index': 1, 'word': 'En', 'start': 0, [179 chars] 20}] != [{'en[24 chars] 0.997, 'word': 'En', 'start': 0, 'end': 2, 'i[179 chars]: 7}]
E
E First differing element 0:
E {'ent[15 chars]'score': 0.998, 'index': 1, 'word': 'En', 'start': 0, 'end': 2}
E {'ent[15 chars]'score': 0.997, 'word': 'En', 'start': 0, 'end': 2, 'index': 1}
E
E [{'end': 2,
E 'entity': 'I-PER',
E 'index': 1,
E - 'score': 0.998,
E ? ^
E
E + 'score': 0.997,
E ? ^
E
E 'start': 0,
E 'word': 'En'},
E {'end': 4,
E 'entity': 'I-PER',
E 'index': 2,
E - 'score': 0.997,
E ? ^
E
E + 'score': 0.996,
E ? ^
E
E 'start': 2,
E 'word': '##zo'},
E - {'end': 20,
E ? ^
E
E + {'end': 24,
E ? ^
E
E 'entity': 'I-ORG',
E - 'index': 6,
E ? ^
E
E + 'index': 7,
E ? ^
E
E 'score': 0.999,
E - 'start': 18,
E ? ^^
E
E + 'start': 22,
E ? ^^
E
E 'word': 'UN'}]
tests/pipelines/test_pipelines_token_classification.py:284: AssertionError
```
### Expected behavior
[a green dot]
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18405/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18405/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18404
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18404/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18404/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18404/events
|
https://github.com/huggingface/transformers/issues/18404
| 1,324,698,226
|
I_kwDOCUB6oc5O9Upy
| 18,404
|
GPT-J evaluation with multiple GPUs crashes
|
{
"login": "manuelciosici",
"id": 51477,
"node_id": "MDQ6VXNlcjUxNDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/51477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manuelciosici",
"html_url": "https://github.com/manuelciosici",
"followers_url": "https://api.github.com/users/manuelciosici/followers",
"following_url": "https://api.github.com/users/manuelciosici/following{/other_user}",
"gists_url": "https://api.github.com/users/manuelciosici/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manuelciosici/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manuelciosici/subscriptions",
"organizations_url": "https://api.github.com/users/manuelciosici/orgs",
"repos_url": "https://api.github.com/users/manuelciosici/repos",
"events_url": "https://api.github.com/users/manuelciosici/events{/privacy}",
"received_events_url": "https://api.github.com/users/manuelciosici/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"The issue is probably in the modeling code missing some `.contiguous()` calls.",
"I can reproduce the error with GPT-J. This also happens with Salesforce/codegen-16B-nl and EleutherAI/gpt-neox-20b. In all cases the error is RuntimeError: Tensors must be contiguous.\r\n\r\nThe problem doesn't occur with gpt2-xl and facebook/opt-13b.\r\n\r\nThis on Transformers 4.21.1 and also using 2x RTX a6000 GPUs.\r\n\r\nThe problem was also reproduced by another dev training gpt-neoX-20b on 2x a6000.\r\n\r\nCould this be a6000 related?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"+1 for this issue, still having problems with `Tensors must be contiguous` error in evaluation.",
"I have same problem\r\n"
] | 1,659
| 1,674
| 1,664
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (2+ RTX A6000)
- Using distributed or parallel set-up in script?: Yes
The issue appears when parallelizing with `python -m torch.distributed.launch --nproc_per_node=2` and also when parallelizing with `deepspeed`
### Who can help?
I hope @patil-suraj, @stas00, or @sgugger.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run the `run_clm.py` script from the examples directory: `python -m torch.distributed.launch --nproc_per_node=4 /path/to/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path "EleutherAI/gpt-j-6B" --do_eval --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --output_dir "${output_dir}/output_fine_tune" --eval_steps 1 --evaluation_strategy steps --per_device_eval_batch_size 4 --block_size 2048`
2. The script crashes with the following error:
```
08/01/2022 08:51:08 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2891] 2022-08-01 08:51:22,867 >> ***** Running Evaluation *****
[INFO|trainer.py:2893] 2022-08-01 08:51:22,868 >> Num examples = 119
[INFO|trainer.py:2896] 2022-08-01 08:51:22,868 >> Batch size = 4
Traceback (most recent call last):
File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 579, in <module>
Traceback (most recent call last):
File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 579, in <module>
main()
File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 545, in main
main()
File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 545, in main
metrics = trainer.evaluate()
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2758, in evaluate
metrics = trainer.evaluate()
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2758, in evaluate
output = eval_loop(
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2960, in evaluation_loop
output = eval_loop(
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2960, in evaluation_loop
logits = self._nested_gather(logits)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 3072, in _nested_gather
logits = self._nested_gather(logits)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 3072, in _nested_gather
tensors = distributed_concat(tensors)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat
tensors = distributed_concat(tensors)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 181, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 181, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2068, in all_gather
dist.all_gather(output_tensors, tensor)
File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2068, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
```
## Some debugging
* The crash only appears when the `compute_metrics` argument to `Trainer` is not `None`. In other words, replacing the line `compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,` with `compute_metrics=None` prevents the script from crashing.
* It looks like the logits on Trainer line 3181 https://github.com/huggingface/transformers/blob/a9eee2ffecc874df7dd635b2c6abb246fdb318cc/src/transformers/trainer.py#L3181 are not contiguous.
* If I force the tensors to be contiguous with the patch below, run_clm no longer crashes. I do not think the issue is in `Trainer`, so the patch below is not a fix. I include it only to help with debugging.
## Patch to make tensors contiguous
```diff
2850,2875d2849
< def check_contiguous(self, tensor) -> Tuple[int, int]:
< if tensor is None:
< return 0, 0
< if isinstance(tensor, (list, tuple)):
< first = 0
< total = 0
< for t in tensor:
< f, t = self.check_contiguous(t)
< first += f
< total += t
< return first, total
< else:
< f = 0
< t = 1
< if tensor.is_contiguous():
< f = 1
< return f, t
<
< def make_contiguous(self, tensor):
< if tensor is None:
< return None
< if isinstance(tensor, (list, tuple)):
< return tuple(self.make_contiguous(t) for t in tensor)
< else:
< return tensor.contiguous()
<
3208,3216d3181
< cont, total = self.check_contiguous(logits)
< if cont != total:
< print(
< f"[DebugTrainer] prediction_step, no sm, outputs dict logits (cont, total)"
< f"{(cont, total)}")
< logits= self.make_contiguous(logits)
< print(
< f"[DebugTrainer] prediction_step, no sm, outputs dict, after contiguous, logits (cont, total)"
< f"{self.check_contiguous(logits)}")
```
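For reference, the traversal used in the debug patch above boils down to the following pattern. This is a self-contained sketch, not the actual `Trainer` code: `FakeTensor` is a hypothetical stand-in for `torch.Tensor` so the example runs without PyTorch, but the recursion over lists/tuples with `.contiguous()` on the leaves is the same idea.

```python
# Minimal sketch of the nested-structure traversal from the debug patch above.
# FakeTensor is a hypothetical stand-in for torch.Tensor; its .contiguous()
# returns a contiguous copy, mirroring PyTorch's behavior.

class FakeTensor:
    def __init__(self, contiguous=False):
        self._contiguous = contiguous

    def is_contiguous(self):
        return self._contiguous

    def contiguous(self):
        return FakeTensor(contiguous=True)

def make_contiguous(obj):
    """Recursively walk lists/tuples and make every leaf tensor contiguous."""
    if obj is None:
        return None
    if isinstance(obj, (list, tuple)):
        return tuple(make_contiguous(t) for t in obj)
    return obj.contiguous()

nested = (FakeTensor(), (FakeTensor(contiguous=True), None))
result = make_contiguous(nested)
print(result[0].is_contiguous())  # True
```

Applying this kind of traversal to `logits` before the gather is exactly what makes the collective call stop failing, which points at the gathered tensors (not the gather itself) as the source of the non-contiguity.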
### Expected behavior
The script should finish running and report the evaluation results (loss and accuracy).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18404/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18404/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18403
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18403/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18403/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18403/events
|
https://github.com/huggingface/transformers/issues/18403
| 1,324,677,678
|
I_kwDOCUB6oc5O9Pou
| 18,403
|
Cannot restore `sequences_scores` from `scores` and `beam_indices` returned by `t5-base`
|
{
"login": "namespace-Pt",
"id": 61188463,
"node_id": "MDQ6VXNlcjYxMTg4NDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namespace-Pt",
"html_url": "https://github.com/namespace-Pt",
"followers_url": "https://api.github.com/users/namespace-Pt/followers",
"following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}",
"gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions",
"organizations_url": "https://api.github.com/users/namespace-Pt/orgs",
"repos_url": "https://api.github.com/users/namespace-Pt/repos",
"events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}",
"received_events_url": "https://api.github.com/users/namespace-Pt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @namespace-Pt π \r\n\r\nDoes [this function](https://github.com/huggingface/transformers/blob/dbd9641c8c0e146c078cbee11cdefcf556f6c817/src/transformers/generation_utils.py#L804) solve your issue?\r\n\r\nI noticed that it is undocumented, so it is hard to find π¬ ",
"@gante Thanks, the function worked. However, in the above example, how can I get the `sequences_scores` given the returned transition scores?",
"@namespace-Pt check these two threads:\r\n1. https://github.com/huggingface/transformers/issues/16413\r\n2. https://github.com/huggingface/transformers/issues/15869\r\n\r\nTL;DR you can't directly atm unless you specify a length penalty of 0",
"Got that. Thank you."
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.18.0
- Platform: Linux-5.10.25-nvidia-gpu-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <No>
- Using distributed or parallel set-up in script?: <No>
### Who can help?
I was trying to recover the logit of each generated token from `t5-base` with `beam_search`. However, I found that the `sequences_scores` cannot be computed from the generated token indices, the `beam_indices`, and the `scores` returned by `model.generate()`.
Here is my script with annotations:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

# load model
model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
inputs = tokenizer(["I love hugging face", "I love deep learning"], return_tensors="pt", truncation=True, padding="max_length", max_length=10)
max_length = 10
min_length = 10
num_beams = 10
num_return_seq = 2
outputs = model.generate(
input_ids=inputs.input_ids,
attention_mask=inputs.attention_mask,
do_sample=False,
max_length=max_length,
num_beams=num_beams,
num_return_sequences=num_return_seq,
return_dict_in_generate=True,
output_scores=True
)
sequences = outputs.sequences.transpose(0, 1) # num_step + 1, batch_size * num_return_seq
beam_indices = torch.tensor(outputs.beam_indices).view(-1, max_length - 1) # batch_size * num_return_seq, num_step
scores = torch.stack(outputs.scores, dim=0) # num_step, batch_size * num_beams, vocab_size
beam_indices = beam_indices.transpose(-1, -2) # num_step, batch_size * num_return_seq
# get the associated logits over the vocabulary at each step
selected_distribution = scores.gather(dim=-2, index=beam_indices.unsqueeze(-1).expand(*beam_indices.shape, scores.shape[-1])) # num_step, batch_size * num_return_seq, vocab_size
# get the associated logit of the selected token at each step
selected_score = selected_distribution.gather(dim=-1, index=sequences[1:].unsqueeze(-1))
# output cumulative scores and sequences_scores
>>> selected_score.squeeze().mean(0)
<<< tensor([-0.2241, -0.3926, -0.1859, -0.4169])
>>> outputs.sequences_scores
<<< tensor([-0.2017, -0.3534, -0.1674, -0.3752])
```
Why are these two scores unequal? Are there any specific notes on computing sequence scores? Also, I think returning the token score along with the `Generate Outputs` would be **useful**. Thanks in advance for your reply: @patrickvonplaten, @Narsil, @gante.
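For reference, the length normalization that relates per-token transition scores to `sequences_scores` can be sketched in plain Python. This mirrors the scoring formula discussed in the threads linked in this issue's comments; `sequence_score` is an illustrative helper, not a `transformers` API.

```python
# Hedged sketch: beam search scores a finished hypothesis as the sum of its
# per-token log-probabilities divided by length ** length_penalty. With the
# default length_penalty of 1.0 the score is length-normalized, which is one
# reason a plain mean/sum over gathered token scores need not match
# sequences_scores exactly.

def sequence_score(token_logprobs, length_penalty=1.0):
    return sum(token_logprobs) / (len(token_logprobs) ** length_penalty)

logprobs = [-0.1, -0.2, -0.3]
print(round(sequence_score(logprobs, length_penalty=0.0), 6))  # -0.6 (plain sum)
print(round(sequence_score(logprobs, length_penalty=1.0), 6))  # -0.2 (length-normalized)
```

With a length penalty of 0 the sequence score reduces to the plain sum of transition scores, which is why the linked threads suggest that setting as the only way to recover it directly.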
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just use the above code.
### Expected behavior
I would like to know how to get the logit of each generated token in `T5`, from which I can properly recover the `sequences_scores` returned by the model. Further, I think returning the logit of each generated token would be useful for systems that rely on those scores.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18403/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18402
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18402/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18402/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18402/events
|
https://github.com/huggingface/transformers/pull/18402
| 1,324,655,375
|
PR_kwDOCUB6oc48cYRy
| 18,402
|
Update pipeline word heuristic to work with whitespace in token offsets
|
{
"login": "davidbenton",
"id": 1603279,
"node_id": "MDQ6VXNlcjE2MDMyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1603279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidbenton",
"html_url": "https://github.com/davidbenton",
"followers_url": "https://api.github.com/users/davidbenton/followers",
"following_url": "https://api.github.com/users/davidbenton/following{/other_user}",
"gists_url": "https://api.github.com/users/davidbenton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidbenton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidbenton/subscriptions",
"organizations_url": "https://api.github.com/users/davidbenton/orgs",
"repos_url": "https://api.github.com/users/davidbenton/repos",
"events_url": "https://api.github.com/users/davidbenton/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidbenton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The tests failures seem to have nothing to do with this PR.\r\n\r\n@LysandreJik @ydshieh maybe ?",
"@LysandreJik @ydshieh @Narsil I think the test failures are because I set up CircleCI to track my fork, just to see if the tests would pass there. I didn't expect it to show here on the upstream project PR, but I think that might be what we're seeing. I've disabled that for any future commits.\r\n\r\nI'm guessing failures on a hosted, free CircleCI project are expected, right? Sorry for the CI spam.",
"\r\n> I'm guessing failures on a hosted, free CircleCI project are expected, right? Sorry for the CI spam.\r\n\r\nProbably yes: I saw you have `Docker / [Docker Medium]` while `transformers` uses `Docker / [Docker X-Large]`.\r\n\r\nNow we have to make the tests run under Hugging Face's CircleCI plan π \r\n",
"Ugh, so that stopped the HF circleci from firing? I can add a \"maybe this time\" commit, as is traditional with CI workflows...",
"Ok all that's left before merging is a final approval from a core maintainer.\r\n\r\n@sgugger maybe ?"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This change checks for whitespace in the input string at either the
character preceding the token or the first character of the token.
This works both with tokenizers whose offsets exclude the whitespace
between words and with tokenizers whose offsets include it.
Fixes #18111
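The heuristic described above can be sketched as follows. `is_word_start` is an illustrative helper, not the actual pipeline code; `text` is the original input string and `start` a token's character offset.

```python
# Sketch of the word-boundary heuristic: a token begins a new word if the
# character just before its offset, or the first character inside its offset,
# is whitespace. This covers tokenizers whose offsets exclude the whitespace
# between words as well as those whose offsets include it.

def is_word_start(text, start):
    if start == 0:
        return True  # the first token always starts a word
    return text[start - 1].isspace() or text[start].isspace()

text = "hugging face"
print(is_word_start(text, 0))  # True  (start of the string)
print(is_word_start(text, 3))  # False ("ging" continues "hug")
print(is_word_start(text, 8))  # True  (offset excludes the space before "face")
print(is_word_start(text, 7))  # True  (offset includes the space: " face")
```

Checking both positions is what makes the heuristic tokenizer-agnostic: only one of the two characters is whitespace depending on whether the tokenizer's offsets absorb inter-word spaces.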
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18402/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18402",
"html_url": "https://github.com/huggingface/transformers/pull/18402",
"diff_url": "https://github.com/huggingface/transformers/pull/18402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18402.patch",
"merged_at": 1659468662000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18401
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18401/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18401/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18401/events
|
https://github.com/huggingface/transformers/issues/18401
| 1,324,601,055
|
I_kwDOCUB6oc5O887f
| 18,401
|
Can't run the example in https://huggingface.co/transformers/v4.9.2/model_doc/blenderbot.html#transformers.BlenderbotModel
|
{
"login": "OliverZijia",
"id": 43709858,
"node_id": "MDQ6VXNlcjQzNzA5ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/43709858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OliverZijia",
"html_url": "https://github.com/OliverZijia",
"followers_url": "https://api.github.com/users/OliverZijia/followers",
"following_url": "https://api.github.com/users/OliverZijia/following{/other_user}",
"gists_url": "https://api.github.com/users/OliverZijia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OliverZijia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OliverZijia/subscriptions",
"organizations_url": "https://api.github.com/users/OliverZijia/orgs",
"repos_url": "https://api.github.com/users/OliverZijia/repos",
"events_url": "https://api.github.com/users/OliverZijia/events{/privacy}",
"received_events_url": "https://api.github.com/users/OliverZijia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Remove this line:\r\n`inputs.pop(\"token_type_ids\")`\r\nAs for the `inputs.keys()`, the keys aren't assigned to any variable or used for any form of auxiliary calculation, so it effectively does nothing. \r\n\r\nIn other words, just run this:\r\n```python\r\nfrom transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration\r\n\r\nmname = 'facebook/blenderbot_small-90M'\r\nmodel = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)\r\ntokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)\r\nUTTERANCE = \"My friends are cool but they eat too many carbs.\"\r\nprint(\"Human: \", UTTERANCE)\r\ninputs = tokenizer([UTTERANCE], max_length=512, truncation=True, return_tensors='pt')\r\nreply_ids = model.generate(**inputs)\r\nprint(\"Bot: \", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])\r\n```",
"Thanks @shermansiu, the problem is solved, thanks for your prompt reply :-)",
"You're welcome! :smile:"
] | 1,659
| 1,659
| 1,659
|
NONE
| null |
Hi,
I just encountered an issue while running this example code:
```python
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration
mname = 'facebook/blenderbot_small-90M'
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)
inputs = tokenizer([UTTERANCE], return_tensors='pt')
inputs.keys()
inputs.pop("token_type_ids")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
REPLY = "I'm not sure"
print("Human: ", REPLY)
NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors='pt')
inputs.pop("token_type_ids")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
```
Then I got this error:
```
KeyError Traceback (most recent call last)
Input In [1], in <cell line: 9>()
7 inputs = tokenizer([UTTERANCE], return_tensors='pt')
8 inputs.keys()
9 inputs.pop("token_type_ids")
10 # inputs.pop("input_ids")
11 reply_ids = model.generate(**inputs)
File /opt/conda/lib/python3.8/_collections_abc.py:795, in MutableMapping.pop(self, key, default)
791 '''D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
792 If key is not found, d is returned if given, otherwise KeyError is raised.
793 '''
794 try:
--> 795 value = self[key]
796 except KeyError:
797 if default is self.__marker:
File /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:236, in BatchEncoding.__getitem__(self, item)
229 """
230 If the key is a string, returns the value of the dict associated to `key` ('input_ids', 'attention_mask',
231 etc.).
232
233 If the key is an integer, get the `tokenizers.Encoding` for batch item with index `key`.
234 """
235 if isinstance(item, str):
--> 236 return self.data[item]
237 elif self._encodings is not None:
238 return self._encodings[item]
KeyError: 'token_type_ids'
```
I found that the keys in `inputs` are `dict_keys(['input_ids', 'attention_mask'])`. Then I changed `'token_type_ids'` to `'input_ids'` and got this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [2], in <cell line: 11>()
9 # inputs.pop("token_type_ids")
10 inputs.pop("input_ids")
---> 11 reply_ids = model.generate(**inputs)
12 print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
14 REPLY = "I'm not sure"
File /opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File /opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py:1182, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs)
1175 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
1176 inputs_tensor, pad_token_id, eos_token_id
1177 )
1179 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
1180 # if model is encoder decoder encoder_outputs are created
1181 # and added to `model_kwargs`
-> 1182 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
1183 inputs_tensor, model_kwargs, model_input_name
1184 )
1186 # 4. Prepare `input_ids` which will be used for auto-regressive generation
1187 if self.config.is_encoder_decoder:
File /opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py:525, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
523 encoder_kwargs["return_dict"] = True
524 encoder_kwargs[model_input_name] = inputs_tensor
--> 525 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
527 return model_kwargs
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.8/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py:780, in BlenderbotSmallEncoder.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
773 layer_outputs = torch.utils.checkpoint.checkpoint(
774 create_custom_forward(encoder_layer),
775 hidden_states,
776 attention_mask,
777 (head_mask[idx] if head_mask is not None else None),
778 )
779 else:
--> 780 layer_outputs = encoder_layer(
781 hidden_states,
782 attention_mask,
783 layer_head_mask=(head_mask[idx] if head_mask is not None else None),
784 output_attentions=output_attentions,
785 )
787 hidden_states = layer_outputs[0]
789 if output_attentions:
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.8/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py:311, in BlenderbotSmallEncoderLayer.forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions)
299 """
300 Args:
301 hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
(...)
308 returned tensors for more detail.
309 """
310 residual = hidden_states
--> 311 hidden_states, attn_weights, _ = self.self_attn(
312 hidden_states=hidden_states,
313 attention_mask=attention_mask,
314 layer_head_mask=layer_head_mask,
315 output_attentions=output_attentions,
316 )
317 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
318 hidden_states = residual + hidden_states
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.8/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py:225, in BlenderbotSmallAttention.forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
223 if attention_mask is not None:
224 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 225 raise ValueError(
226 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
227 )
228 attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
229 attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
ValueError: Attention mask should be of size (1, 1, 1, 1), but is torch.Size([1, 1, 12, 12])
```
I guess this is triggered by the version of torch or transformers? Could you give me the versions that run this code properly?
Best regards,
Zijia
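For context, the shape check that raises the error above expects a 4-D attention mask of shape `(bsz, 1, tgt_len, src_len)`. A minimal sketch of that shape rule (the `expected_mask_shape` helper below is hypothetical, for illustration only — it is not part of transformers):

```python
def expected_mask_shape(bsz, src_len, tgt_len=None):
    # BART-style attention layers validate attention_mask against
    # (bsz, 1, tgt_len, src_len); self-attention uses tgt_len == src_len.
    tgt_len = src_len if tgt_len is None else tgt_len
    return (bsz, 1, tgt_len, src_len)

# The traceback above compares an expected (1, 1, 1, 1)
# against an actual mask of size (1, 1, 12, 12):
print(expected_mask_shape(1, 1))   # (1, 1, 1, 1)
print(expected_mask_shape(1, 12))  # (1, 1, 12, 12)
```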
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18401/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18400
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18400/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18400/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18400/events
|
https://github.com/huggingface/transformers/pull/18400
| 1,324,552,793
|
PR_kwDOCUB6oc48cClr
| 18,400
|
pytest with --forked
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18400). All of your documentation changes will be reflected on that endpoint."
] | 1,659
| 1,662
| 1,661
|
COLLABORATOR
| null |
# What does this PR do?
pytest with --forked
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18400/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18400",
"html_url": "https://github.com/huggingface/transformers/pull/18400",
"diff_url": "https://github.com/huggingface/transformers/pull/18400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18400.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18399
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18399/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18399/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18399/events
|
https://github.com/huggingface/transformers/pull/18399
| 1,324,469,847
|
PR_kwDOCUB6oc48bw8b
| 18,399
|
[LayoutLMv3] Fix docs
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a non-working link and adds a link to the fine-tuning scripts.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18399/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18399",
"html_url": "https://github.com/huggingface/transformers/pull/18399",
"diff_url": "https://github.com/huggingface/transformers/pull/18399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18399.patch",
"merged_at": 1659366171000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18398
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18398/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18398/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18398/events
|
https://github.com/huggingface/transformers/pull/18398
| 1,324,469,525
|
PR_kwDOCUB6oc48bw4G
| 18,398
|
Fix ROUGE add example check and update README
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failures are flaky, so merging."
] | 1,659
| 1,659
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
This PR contains a few fixes for the examples all linked to #18381
- it adds the version check of Transformers to the `no_trainer` examples, so that the user is not surprised when the example script that uses main fails if they use a different version
- it expands the table of examples for each version to get to the current version
- it fixes the ROUGE metric return type since that metric was broken in https://github.com/huggingface/evaluate/pull/158
Fixes #18381
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18398/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18398",
"html_url": "https://github.com/huggingface/transformers/pull/18398",
"diff_url": "https://github.com/huggingface/transformers/pull/18398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18398.patch",
"merged_at": 1659366890000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18397
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18397/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18397/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18397/events
|
https://github.com/huggingface/transformers/pull/18397
| 1,324,452,527
|
PR_kwDOCUB6oc48btNI
| 18,397
|
Fix DETR doc test
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the doc test of `DetrForObjectDetection`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18397/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18397",
"html_url": "https://github.com/huggingface/transformers/pull/18397",
"diff_url": "https://github.com/huggingface/transformers/pull/18397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18397.patch",
"merged_at": 1659362171000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18396
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18396/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18396/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18396/events
|
https://github.com/huggingface/transformers/pull/18396
| 1,324,364,421
|
PR_kwDOCUB6oc48bZ46
| 18,396
|
Add evaluate to test dependencies
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Seem to fix most tests so merging.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
This PR should fix all current failures on main coming from the examples being updated to use evaluate for the metrics. The problem is that some tests use those example scripts without installing the test requirements.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18396/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18396",
"html_url": "https://github.com/huggingface/transformers/pull/18396",
"diff_url": "https://github.com/huggingface/transformers/pull/18396.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18396.patch",
"merged_at": 1659358544000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18395
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18395/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18395/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18395/events
|
https://github.com/huggingface/transformers/issues/18395
| 1,324,142,117
|
I_kwDOCUB6oc5O7M4l
| 18,395
|
GFT: Generative Fundamental Training
|
{
"login": "wangyi-fudan",
"id": 17891453,
"node_id": "MDQ6VXNlcjE3ODkxNDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/17891453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangyi-fudan",
"html_url": "https://github.com/wangyi-fudan",
"followers_url": "https://api.github.com/users/wangyi-fudan/followers",
"following_url": "https://api.github.com/users/wangyi-fudan/following{/other_user}",
"gists_url": "https://api.github.com/users/wangyi-fudan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangyi-fudan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangyi-fudan/subscriptions",
"organizations_url": "https://api.github.com/users/wangyi-fudan/orgs",
"repos_url": "https://api.github.com/users/wangyi-fudan/repos",
"events_url": "https://api.github.com/users/wangyi-fudan/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangyi-fudan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[] | 1,659
| 1,659
| null |
NONE
| null |
### Model description
Hi,
This is just an announcement of our GFT.
It has a novel attention head that fuses the attention and MLP layers. It has an ultra-thin, ultra-deep architecture that maximizes model performance with minimal parameters. More interestingly, it comes with a novel decoding algorithm called top-E (entropy).
Our model has 81 layers but only 300M parameters.
Both Chinese and English (biomedical) models are available.
It is coded from scratch in CUDA and C++; no DL framework is needed.
Enjoy!
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/wangyi-fudan/GFT
Chinese Model Download
Link: https://pan.baidu.com/s/1HKw83YWttCIPdnZCZ-hdOA
Password: 0r1j
English Medical Model Download
Link: https://pan.baidu.com/s/1yazmdB8xFzLwXbKmXWslRA
Password: rqdc
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18395/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18394
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18394/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18394/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18394/events
|
https://github.com/huggingface/transformers/pull/18394
| 1,324,135,781
|
PR_kwDOCUB6oc48aoDS
| 18,394
|
Add mt5 onnx config
|
{
"login": "chainyo",
"id": 50595514,
"node_id": "MDQ6VXNlcjUwNTk1NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chainyo",
"html_url": "https://github.com/chainyo",
"followers_url": "https://api.github.com/users/chainyo/followers",
"following_url": "https://api.github.com/users/chainyo/following{/other_user}",
"gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chainyo/subscriptions",
"organizations_url": "https://api.github.com/users/chainyo/orgs",
"repos_url": "https://api.github.com/users/chainyo/repos",
"events_url": "https://api.github.com/users/chainyo/events{/privacy}",
"received_events_url": "https://api.github.com/users/chainyo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I tried to convert the `google/mt5-base` model to Onnx on my local machine and it worked.\r\n\r\n```bash\r\npython -m transformers.onnx --model=google/mt5-base convert-onnx/\r\n\r\n2022-08-01 12:50:24.992612: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2022-08-01 12:50:24.992654: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n/home/workstation/miniconda3/envs/transformers/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py:434: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.\r\n warnings.warn(\r\nSome weights of the model checkpoint at google/mt5-base were not used when initializing MT5Model: ['lm_head.weight']\r\n- This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nUsing framework PyTorch: 1.11.0+cu102\r\nOverriding 1 configuration item(s)\r\n\t- use_cache -> False\r\n/home/workstation/miniconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_utils.py:679: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_mask.shape[1] < attention_mask.shape[1]:\r\nValidating ONNX model...\r\n\t-[β] ONNX model output names match reference model ({'last_hidden_state'})\r\n\t- Validating ONNX Model output \"last_hidden_state\":\r\n\t\t-[β] (2, 8, 768) matches (2, 8, 768)\r\n\t\t-[β] all values close (atol: 1e-05)\r\nAll good, model saved at: convert-onnx/model.onnx\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"**[Update]**\r\n\r\nSorry, I just realized that MT5 is newly added to the ONNX test. So updating the expected values (somewhere) should work fine.\r\n\r\nPR opened: https://github.com/huggingface/transformers/pull/18560\r\n\r\n----\r\n\r\nHi @ChainYo. First of all, thank you for the contribution. It looks like this PR is (probably) the cause of 2 new test failures in our CI\r\n```\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_seq2seq_with_past_52_mt5_seq2seq_lm\r\nFAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_seq2seq_with_past_53_mt5_seq2seq_lm_with_past\r\n```\r\nsee [job run page](https://github.com/huggingface/transformers/runs/7759166084?check_suite_focus=true). The difference of the outputs and expected outputs are small, but it worked before. So I am wondering if this PR could have any impact on the outputs.\r\n\r\nWe also have a few other test failures, so I am not 100% sure yet. But if you are interested and have some time, it would be great if you could take a look π . Otherwise, no problem! We could work on it internally :-) Thank you."
] | 1,659
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
- Added MT5 Onnx Config copied from T5 Onnx Config
- Updated docs and tests files
Related to #16308 , and [optimum#321](https://github.com/huggingface/optimum/issues/321)
@patrickvonplaten, @patil-suraj, and @lewtun
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18394/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18394",
"html_url": "https://github.com/huggingface/transformers/pull/18394",
"diff_url": "https://github.com/huggingface/transformers/pull/18394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18394.patch",
"merged_at": 1660031213000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18393
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18393/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18393/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18393/events
|
https://github.com/huggingface/transformers/pull/18393
| 1,324,121,438
|
PR_kwDOCUB6oc48ak_d
| 18,393
|
Fix DeiT doc tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"From the description & the change, it seems to me that Deit has some random OP(s) used in the inference. Is this the case (and if yes, could you share which line that random op is π ) Thank you.",
"Sure :) `DeiTForImageClassification` loaded from the official facebook hub weights will have its head randomly initialized. You can't see it directly in this diff, but there's a comment above on line L928 mentioning this. "
] | 1,659
| 1,659
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
Fixes the failing doctest for DeiT, caused by copy-pasting from the PT model.
The predicted class is deterministic with the set seed. However, as the RNGs for PyTorch and TensorFlow are different, it is a different class than in the PyTorch DeiT doctest.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18393/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18393",
"html_url": "https://github.com/huggingface/transformers/pull/18393",
"diff_url": "https://github.com/huggingface/transformers/pull/18393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18393.patch",
"merged_at": 1659355590000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18392
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18392/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18392/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18392/events
|
https://github.com/huggingface/transformers/pull/18392
| 1,324,089,266
|
PR_kwDOCUB6oc48ad7L
| 18,392
|
Fixing issue where generic model types wouldn't load properly with the pipeline
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger \r\n\r\nIt seems I had completely misunderstood what was going on there. I thought it was misconfiguration while it's more of a normal state of things (Wasn't aware we had added those generic models for vision too).\r\n\r\nMy new proposed PR then actually fixes the underlying issue initially created #17929 . \r\n\r\nThe way I did it is keep some manual bookkeeping for these \"multi model\" configurations (is the name right) ?\r\n\r\nThen if we are actually using one of these models, attempt to load the `Tokenizer`/`feature_Extractor` looking exclusively at whether or not the task requires one.\r\nThis should fix the original issue, and it so happens we had a test that could easily be updated to support those use cases.\r\n\r\nWhat do you think of this approach ? \r\n\r\nIf a user created a model and forgot to upload either one of the necessary components, the pipeline will simply fail to load attempting to load one of them. I think that sort of failure mode should be OK to understand and users should be able to recover on their own. So no need for error messages now.\r\n\r\nI am still keeping the regular way to detect if we need the tokenizer for other types of configs, but then we will still fail if the AutoTokenizer/FeatureExtractor is not correctly configured. I think maybe switching entirely to `NO_TOKENIZER_TASKS` detection seems easier in the long run but I didn't want to do such a change in a small PR. (`feature-extraction` will still be a corner case)"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
When this occurs https://github.com/huggingface/transformers/issues/17929
we can provide a better error message since this is detectable at load time
and the fix should happen within `transformers`.
Found out 3 odd cases which have been dealt with differently:
- `translation` actually uses `translation_XX_to_YY` and also relies on `task_specific_params` for some model configs.
I tried cleaning that up and using `task_specific_params` only once, but the rabbit hole is deep, and it would have meant more
code changes than this PR should hold. Waiting for a subsequent PR.
The issue is that `translation_XX_to_YY` is not a normalized task name and is not within `NO_TOKENIZER_TASKS` nor `NO_FEATURE_EXTRACTION_TASKS`, so the configuration on whether we should load or not doesn't work.
- `feature-extraction`. That one is extremely special, since ALL models could in theory use that pipeline, and so we cannot enforce or detect anything statically on what should be loaded or not.
- `automatic-speech-recognition` has this `speech-encoder-decoder` type of model, which does not define any `tokenizer` class, so `type(config)` is NOT within `TOKENIZER_MAPPING` (correctly), but the first version of the check would fail when deciding statically whether we should load the tokenizer or not. The fix was to check if the user passed a tokenizer (if a tokenizer is passed, we should never try to do anything anyway)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18392/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18392",
"html_url": "https://github.com/huggingface/transformers/pull/18392",
"diff_url": "https://github.com/huggingface/transformers/pull/18392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18392.patch",
"merged_at": 1659681907000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18391
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18391/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18391/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18391/events
|
https://github.com/huggingface/transformers/issues/18391
| 1,324,088,186
|
I_kwDOCUB6oc5O6_t6
| 18,391
|
Is run_clip.py an example of fine-tuning or an example of training a vision-text model from scratch?
|
{
"login": "gongshaojie12",
"id": 6407116,
"node_id": "MDQ6VXNlcjY0MDcxMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6407116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gongshaojie12",
"html_url": "https://github.com/gongshaojie12",
"followers_url": "https://api.github.com/users/gongshaojie12/followers",
"following_url": "https://api.github.com/users/gongshaojie12/following{/other_user}",
"gists_url": "https://api.github.com/users/gongshaojie12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gongshaojie12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gongshaojie12/subscriptions",
"organizations_url": "https://api.github.com/users/gongshaojie12/orgs",
"repos_url": "https://api.github.com/users/gongshaojie12/repos",
"events_url": "https://api.github.com/users/gongshaojie12/events{/privacy}",
"received_events_url": "https://api.github.com/users/gongshaojie12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @gongshaojie12, is the README available [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) answering your questions?\r\n\r\nIn the [code sample](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model), you'll see it's using @ydshieh's \"coco_dataset_script\" as a dataset; feel free to replace the dataset here by a similar dataset of yours to use the script on your own data.",
"Hi!\r\n\r\nThe script is mostly for finetuning. But it's also possible to train from scratch - you just need to create a model, save it, and use it for the argument `model_name_or_path` - if this is what you like to do.",
"Hi, @LysandreJik @ydshieh Thank you very much for your reply, maybe the first sentence in the README (`The following example showcases how to train a CLIP-like vision-text dual encoder model using a pre-trained vision and text encoder.`) bothered me. This sentence makes me think that `run_clip.py` is training a CLIP-like model from scratch, rather than fine-tuning the existing CLIP model. If there is something wrong with my understanding, please criticize and correct me, thank you!",
"The doc mentions `using a pre-trained vision and text encoder.`, so I think there is no ambiguity here. It doesn't necessarily finetune the original CLIP checkpoints, though. You can use any text and image encoders.\r\n\r\nClosing this issue now. Don't hesitate if you have further questions, @gongshaojie12 "
] | 1,659
| 1,660
| 1,660
|
NONE
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Hi @patil-suraj I am new to CLIP, and while browsing [run_clip.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py), I have a question: is `run_clip.py` an example of fine-tuning or an example of training a vision-text model from scratch? Is it possible to fine-tune CLIP on my own dataset using `run_clip.py`?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_clip.py
### Expected behavior
fine-tune
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18391/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18390
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18390/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18390/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18390/events
|
https://github.com/huggingface/transformers/issues/18390
| 1,324,073,865
|
I_kwDOCUB6oc5O68OJ
| 18,390
|
Incorrect learning rate when using 'cosine_with_restarts' scheduler type
|
{
"login": "tijsg",
"id": 5707158,
"node_id": "MDQ6VXNlcjU3MDcxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5707158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tijsg",
"html_url": "https://github.com/tijsg",
"followers_url": "https://api.github.com/users/tijsg/followers",
"following_url": "https://api.github.com/users/tijsg/following{/other_user}",
"gists_url": "https://api.github.com/users/tijsg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tijsg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tijsg/subscriptions",
"organizations_url": "https://api.github.com/users/tijsg/orgs",
"repos_url": "https://api.github.com/users/tijsg/repos",
"events_url": "https://api.github.com/users/tijsg/events{/privacy}",
"received_events_url": "https://api.github.com/users/tijsg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Could you explain to us how you are seeing the learning rate being constant? Just tried your code and adding a print statement for the learning rate at every step, and I see something that changes.",
"I'm using Weights & Biases at wandb.ai to keep track of my metrics\r\n\r\n",
"training_args = TrainingArguments(\r\n output_dir='./roberta',\r\n overwrite_output_dir=True,\r\n evaluation_strategy = 'steps',\r\n num_train_epochs=100,\r\n learning_rate=1e-4,\r\n weight_decay=0.01,\r\n per_device_train_batch_size=20,\r\n per_device_eval_batch_size=20,\r\n save_steps=2048,\r\n eval_steps=2048,\r\n save_total_limit=3,\r\n report_to=\"wandb\",\r\n ignore_data_skip=True,\r\n gradient_accumulation_steps=4, \r\n gradient_checkpointing=True,\r\n fp16=True\r\n)\r\n\r\nResults in \r\n\r\n",
"So this only shows the learning rates at steps that are multiples of 500 since that is the default logging step. I wouldn't trust a curve with 6 points on it.",
"could you please share how you add print statements for each step?",
"By changing the source code of the Trainer. You can also log more frequently by changing the `logging_steps` argument.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Start training a Roberta model for masked LM using the following parameters.
All parameters are shared for the sake of context, but the parameter that fails is "lr_scheduler_type".
The learning rate remains fixed at a default value.
```py
training_args = TrainingArguments(
output_dir='./roberta',
overwrite_output_dir=True,
evaluation_strategy = 'steps',
num_train_epochs=100,
lr_scheduler_type='cosine_with_restarts',
warmup_steps=100,
weight_decay=0.01,
per_device_train_batch_size=20,
per_device_eval_batch_size=20,
save_steps=2048,
eval_steps=2048,
save_total_limit=3,
report_to="wandb",
ignore_data_skip=True,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
fp16=True
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
#prediction_loss_only=True,
)
```
### Expected behavior
The learning rate should behave like configured and like documented for 'cosine_with_restarts', instead of remaining fixed through the epochs.
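For reference, the expected shape of the schedule can be inspected without a training run. The sketch below is a hypothetical pure-Python re-implementation of the cosine-with-hard-restarts decay formula (linear warmup, then a cosine decay that restarts `num_cycles` times), not the transformers code itself:

```python
import math

def cosine_with_restarts_factor(step, warmup_steps, total_steps, num_cycles=1):
    """Multiplicative LR factor: linear warmup, then cosine decay with hard restarts."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    if progress >= 1.0:
        return 0.0
    # The modulo restarts the cosine from its peak num_cycles times.
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((num_cycles * progress) % 1.0))))

# With 100 warmup steps out of 1000 and 2 cycles, the factor ramps up,
# decays, jumps back to 1.0 at the halfway restart, and decays again.
for step in (0, 50, 100, 325, 550, 999):
    print(step, round(cosine_with_restarts_factor(step, 100, 1000, num_cycles=2), 4))
```

Plotting this factor against the curve logged to Weights & Biases (with a small `logging_steps`) makes it easy to tell whether the configured scheduler is actually being applied.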
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18390/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18389
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18389/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18389/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18389/events
|
https://github.com/huggingface/transformers/pull/18389
| 1,324,047,338
|
PR_kwDOCUB6oc48aU_G
| 18,389
|
Add a check regarding the number of occurrences of ```
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,662
| 1,659
|
COLLABORATOR
| null |
# What does this PR do?
We had a `TFOPTForCausalLM` doctest fail due to a wrong expected value. The file `prepare_for_doc_test.py` didn't change that model file; the failure came from the existence of **``decoder_input_ids```**.
This PR adds a check, and also fixes all the problematic places found with this new check.
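A minimal sketch (not the actual `prepare_for_doc_test.py` code) of the kind of check this PR adds: a docstring is only safe to rewrite if its triple-backtick fences pair up. The fence string is built programmatically here to keep the snippet readable:

```python
FENCE = "`" * 3  # a triple backtick

def has_balanced_code_fences(text: str) -> bool:
    # An odd number of triple-backtick runs means a stray or unclosed fence.
    return text.count(FENCE) % 2 == 0

good = f"Example:\n{FENCE}python\nx = 1\n{FENCE}\n"
# The stray extra backtick after decoder_input_ids leaves an odd fence count,
# mirroring the kind of typo that broke the TFOPTForCausalLM doctest.
bad = f"Use ``decoder_input_ids{FENCE} here\n{FENCE}python\nx = 1\n{FENCE}\n"

print(has_balanced_code_fences(good))  # True
print(has_balanced_code_fences(bad))   # False
```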
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18389/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18389",
"html_url": "https://github.com/huggingface/transformers/pull/18389",
"diff_url": "https://github.com/huggingface/transformers/pull/18389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18389.patch",
"merged_at": 1659356582000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18388
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18388/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18388/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18388/events
|
https://github.com/huggingface/transformers/issues/18388
| 1,324,041,759
|
I_kwDOCUB6oc5O60Yf
| 18,388
|
Mismatch between logits from generate and forward with an attention mask for most GPT models
|
{
"login": "LaurenceA",
"id": 4278594,
"node_id": "MDQ6VXNlcjQyNzg1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4278594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LaurenceA",
"html_url": "https://github.com/LaurenceA",
"followers_url": "https://api.github.com/users/LaurenceA/followers",
"following_url": "https://api.github.com/users/LaurenceA/following{/other_user}",
"gists_url": "https://api.github.com/users/LaurenceA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LaurenceA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LaurenceA/subscriptions",
"organizations_url": "https://api.github.com/users/LaurenceA/orgs",
"repos_url": "https://api.github.com/users/LaurenceA/repos",
"events_url": "https://api.github.com/users/LaurenceA/events{/privacy}",
"received_events_url": "https://api.github.com/users/LaurenceA/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante for generate :)",
"Hi @LaurenceA π\r\n\r\nWith decoder-only models, such as the ones you mentioned, padding should be done on the left. This is because the output is a continuation of the input prompt -- there would be gaps in the output without left padding. Our code to automatically prepare the position IDs for a given attention mask in decoder-only models has left-sided padding in mind and differs from the one you wrote in your example, hence the output mismatch :)\r\n\r\nNot being aware that left-sided padding should be used for these models is a common issue. I'm leaving this issue open as a reminder that we should add some form of warning for users.\r\n\r\n___________________________\r\nπ [example of code to prepare the position IDs](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L1014)\r\n\r\nHere's your example, with left padding and the same position IDs creation method:\r\n\r\n```python\r\n\"\"\"\r\nMWE showing that logits from generate match those from forward, except for the first token?\r\n\"\"\"\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom torch.distributions import Categorical\r\nimport torch as t\r\n\r\n#Broken:\r\nmodel_name = \"distilgpt2\"\r\n#model_name = \"gpt2\"\r\n#model_name = \"EleutherAI/gpt-neo-125M\"\r\n#model_name = \"EleutherAI/gpt-neo-1.3B\"\r\n#Working:\r\n#model_name = \"EleutherAI/gpt-j-6B\"\r\nlm = AutoModelForCausalLM.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, padding_side=\"left\")\r\n\r\n\r\ntokenizer.pad_token = tokenizer.eos_token\r\nprompt = tokenizer([\"big unpadded five token prompt \", \"padded three token \"], return_tensors='pt', padding=True, add_special_tokens=True)\r\n\r\n#generate with plain sampling (https://huggingface.co/blog/how-to-generate)\r\n\r\nresult = lm.generate(prompt[\"input_ids\"], attention_mask=prompt[\"attention_mask\"], do_sample=True, output_scores=True, return_dict_in_generate=True, top_k=0, 
max_length=10)\r\nx, logits_gen = result.sequences, result.scores\r\nlogits_gen = t.stack(logits_gen, 1)\r\n\r\nx_attention_mask = (x != tokenizer.eos_token_id).to(dtype=t.int64)\r\nposition_ids = x_attention_mask.cumsum(-1)-1\r\nposition_ids.masked_fill_(x_attention_mask == 0, 1)\r\nprint(\"Attention mask for prompt + generated text\")\r\nprint(x_attention_mask)\r\nprint(\"Position IDs\")\r\nprint(position_ids)\r\nlogits_for = lm(x, attention_mask=x_attention_mask, position_ids=position_ids).logits\r\n#we drop the last element, and the first prompt_length-1 elements to get\r\n#logits from forward to match those from generate\r\nlogits_for = logits_for[:, (prompt[\"input_ids\"].shape[-1]-1):-1]\r\n\r\nP_for = Categorical(logits = logits_for)\r\nP_gen = Categorical(logits = logits_gen)\r\n\r\n#Take only generated tokens\r\nx = x[..., prompt['input_ids'].shape[-1]:]\r\nlog_prob_for = P_for.log_prob(x)\r\nlog_prob_gen = P_gen.log_prob(x)\r\n\r\nprint(\"log-probs from forward\")\r\nprint(log_prob_for)\r\nprint(\"log-probs from generate\")\r\nprint(log_prob_gen)\r\n```",
"@LaurenceA if you run `generate` from the current main, you should see a warning if you don't use left-padding with decoder-only models like GPT2 :)\r\n\r\n(#19067)"
] | 1,659
| 1,664
| 1,664
|
NONE
| null |
### System Info
- `transformers` version: 4.21.0
- Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.0+cu113 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
"""
MWE showing that logits from generate match those from forward, except for the first token?
"""
from transformers import AutoTokenizer, AutoModelForCausalLM
from torch.distributions import Categorical
import torch as t
#Broken:
model_name = "distilgpt2"
#model_name = "gpt2"
#model_name = "EleutherAI/gpt-neo-125M"
#model_name = "EleutherAI/gpt-neo-1.3B"
#Working:
#model_name = "EleutherAI/gpt-j-6B"
lm = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, padding='right')
tokenizer.pad_token = tokenizer.eos_token
prompt = tokenizer(["big unpadded five token prompt ", "padded three token "], return_tensors='pt', padding=True, add_special_tokens=True)
#generate with plain sampling (https://huggingface.co/blog/how-to-generate)
result = lm.generate(prompt["input_ids"], attention_mask=prompt["attention_mask"], do_sample=True, output_scores=True, return_dict_in_generate=True, top_k=0, max_length=10)
x, logits_gen = result.sequences, result.scores
logits_gen = t.stack(logits_gen, 1)
x_attention_mask = (x != tokenizer.eos_token_id).to(dtype=t.int64)
position_ids = x_attention_mask.cumsum(-1)-1
print("Attention mask for prompt + generated text")
print(x_attention_mask)
print("Position IDs")
print(position_ids)
logits_for = lm(x, attention_mask=x_attention_mask, position_ids=position_ids).logits
#we drop the last element, and the first prompt_length-1 elements to get
#logits from forward to match those from generate
logits_for = logits_for[:, (prompt["input_ids"].shape[-1]-1):-1]
P_for = Categorical(logits = logits_for)
P_gen = Categorical(logits = logits_gen)
#Take only generated tokens
x = x[..., prompt['input_ids'].shape[-1]:]
log_prob_for = P_for.log_prob(x)
log_prob_gen = P_gen.log_prob(x)
print("log-probs from forward")
print(log_prob_for)
print("log-probs from generate")
print(log_prob_gen)
```
### Expected behavior
I'm trying to get logits or log-probabilities from `generate` to match those from `forward` in the presence of a padded prompt.
For GPT models, I managed to get almost everything working, by setting the `position_ids` for `forward` (see MWE script).
However, there still seems to be a slight mismatch with the first token, if the prompt has an attention mask. You can see this in the returned output, from this script, which is:
```
Attention mask for prompt + generated text
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0, 0, 1, 1, 1]])
Position IDs
tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[0, 1, 2, 3, 4, 4, 4, 5, 6, 7]])
log-probs from forward
tensor([[ -8.3152, -5.5587, -3.0973],
[ -2.6509, -10.6300, -7.5426]], grad_fn=<SqueezeBackward1>)
log-probs from generate
tensor([[ -8.3152, -5.5587, -3.0973],
[ -2.7818, -10.6300, -7.5426]])
```
Note the slight mismatch in the bottom-left log-prob, which doesn't happen for any other log-probability.
I've tried a few GPT-flavour models: the problem occurs for `distilgpt2`, `gpt2`, `EleutherAI/gpt-neo-125M` and `EleutherAI/gpt-neo-1.3B`, but the log-probs all match for `EleutherAI/gpt-j-6B`.
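The position-ID recipe used above (`attention_mask.cumsum(-1) - 1` with padded slots masked to a dummy value) can be checked without a model. This is a pure-Python sketch of the same rule; the real code operates on torch tensors:

```python
def position_ids_from_mask(attention_mask):
    """Mimic attention_mask.cumsum(-1) - 1, clamping padded positions to 1."""
    out = []
    for row in attention_mask:
        total, ids = 0, []
        for m in row:
            total += m
            # Real tokens get contiguous ids starting at 0; padded slots
            # get a dummy id of 1 (they are masked out of attention anyway).
            ids.append(total - 1 if m else 1)
        out.append(ids)
    return out

# With left padding, the real tokens of the shorter sequence still end up
# with contiguous position ids starting at 0.
mask = [[1, 1, 1, 1, 1],
        [0, 0, 1, 1, 1]]
print(position_ids_from_mask(mask))  # [[0, 1, 2, 3, 4], [1, 1, 0, 1, 2]]
```

With right padding (padding in the middle of the concatenated prompt + generation), the ids of the generated tokens no longer line up with what `generate` computed internally, which is one way to see why the first generated token's logits disagree.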
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18388/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18387
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18387/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18387/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18387/events
|
https://github.com/huggingface/transformers/pull/18387
| 1,323,980,497
|
PR_kwDOCUB6oc48aGrO
| 18,387
|
Fix from_pretrained kwargs forward
|
{
"login": "YouJiacheng",
"id": 83971976,
"node_id": "MDQ6VXNlcjgzOTcxOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/83971976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YouJiacheng",
"html_url": "https://github.com/YouJiacheng",
"followers_url": "https://api.github.com/users/YouJiacheng/followers",
"following_url": "https://api.github.com/users/YouJiacheng/following{/other_user}",
"gists_url": "https://api.github.com/users/YouJiacheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YouJiacheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YouJiacheng/subscriptions",
"organizations_url": "https://api.github.com/users/YouJiacheng/orgs",
"repos_url": "https://api.github.com/users/YouJiacheng/repos",
"events_url": "https://api.github.com/users/YouJiacheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/YouJiacheng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
I don't know whether `use_auth_token`, `cache_dir` and `local_files_only` should be passed to `(cls.slow_tokenizer_class)._from_pretrained`, but I guess they should.
Please correct me if anything is wrong.
# What does this PR do?
Fixes #18385
@sgugger @LysandreJik
BTW, I found #13523 and #14508 addressing similar problems, which shows the current implementation is vulnerable to this kind of problem; a refactor might be needed. A context manager might be suitable as a long-range dependency injector.
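The context-manager idea could look roughly like this. It is a sketch only, using stdlib `contextvars`; the function and variable names are illustrative, not a concrete transformers API proposal:

```python
import contextlib
import contextvars

# A context variable carries download options across deeply nested calls
# without threading kwargs through every intermediate function.
_download_opts = contextvars.ContextVar(
    "download_opts", default={"local_files_only": False}
)

@contextlib.contextmanager
def download_options(**opts):
    token = _download_opts.set({**_download_opts.get(), **opts})
    try:
        yield
    finally:
        _download_opts.reset(token)

def _inner_loader(name):
    # Deep inside the call chain, the options are still visible, so a
    # forgotten kwarg forward cannot silently drop local_files_only.
    return name, _download_opts.get()["local_files_only"]

def from_pretrained(name):
    return _inner_loader(name)

with download_options(local_files_only=True):
    print(from_pretrained("openai/clip-vit-base-patch32"))
print(from_pretrained("openai/clip-vit-base-patch32"))
```

`contextvars` (rather than a module-level global) keeps the override scoped and safe across threads and async tasks.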
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18387/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18387",
"html_url": "https://github.com/huggingface/transformers/pull/18387",
"diff_url": "https://github.com/huggingface/transformers/pull/18387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18387.patch",
"merged_at": 1659356185000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18386
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18386/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18386/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18386/events
|
https://github.com/huggingface/transformers/issues/18386
| 1,323,965,266
|
I_kwDOCUB6oc5O6htS
| 18,386
|
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
|
{
"login": "qiuxia-alone",
"id": 52020630,
"node_id": "MDQ6VXNlcjUyMDIwNjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/52020630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiuxia-alone",
"html_url": "https://github.com/qiuxia-alone",
"followers_url": "https://api.github.com/users/qiuxia-alone/followers",
"following_url": "https://api.github.com/users/qiuxia-alone/following{/other_user}",
"gists_url": "https://api.github.com/users/qiuxia-alone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiuxia-alone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiuxia-alone/subscriptions",
"organizations_url": "https://api.github.com/users/qiuxia-alone/orgs",
"repos_url": "https://api.github.com/users/qiuxia-alone/repos",
"events_url": "https://api.github.com/users/qiuxia-alone/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiuxia-alone/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"It's likely that you have a mismatch between your torch and torch-scatter installs. Can you import torch-scatter on its own without the SIGSEGV?\r\n",
"> It's likely that you have a mismatch between your torch and torch-scatter installs. Can you import torch-scatter on its own without the SIGSEGV?\r\n\r\nHi, I have installed the right version of torch-scatter (2.0.9) as shown at [https://pytorch-geometric.com/whl/](https://pytorch-geometric.com/whl/), but when I import torch_scatter as you said I still hit the SIGSEGV.",
"Sorry, I had the wrong version of torch; the problem is solved, thank you for your reply.",
"Great, happy you could solve the problem!",
"I faced the same issue and also fixed it by updating the version of PyTorch"
] | 1,659
| 1,669
| 1,659
|
NONE
| null |
### System Info
macos
pycharm
python 3.7
transformers 4.20.1
torch 1.12.0
torch-scatter. 2.0.9
tensorflow 2.3.0
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run `from transformers import TapasForQuestionAnswering`, I get the error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
### Expected behavior
I expect the import to succeed without errors.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18386/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18385
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18385/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18385/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18385/events
|
https://github.com/huggingface/transformers/issues/18385
| 1,323,931,390
|
I_kwDOCUB6oc5O6Zb-
| 18,385
|
`local_files_only` is not passed to `_from_pretrained` in `PreTrainedTokenizerBase.from_pretrained`
|
{
"login": "YouJiacheng",
"id": 83971976,
"node_id": "MDQ6VXNlcjgzOTcxOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/83971976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YouJiacheng",
"html_url": "https://github.com/YouJiacheng",
"followers_url": "https://api.github.com/users/YouJiacheng/followers",
"following_url": "https://api.github.com/users/YouJiacheng/following{/other_user}",
"gists_url": "https://api.github.com/users/YouJiacheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YouJiacheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YouJiacheng/subscriptions",
"organizations_url": "https://api.github.com/users/YouJiacheng/orgs",
"repos_url": "https://api.github.com/users/YouJiacheng/repos",
"events_url": "https://api.github.com/users/YouJiacheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/YouJiacheng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
I run
```python
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32", local_files_only=True)
```
But a request is still sent to fetch the ETag.
https://github.com/huggingface/transformers/blob/b2e4b091f08f1aaf21855d588c6c8d284baba9eb/src/transformers/tokenization_utils_base.py#L1653-L1813
`local_files_only` is dropped when calling `_from_pretrained`, whether it is explicitly passed or implicitly set by the `is_offline_mode()` check.
I think this behavior is buggy.
Fortunately, `transformers` checks `is_offline_mode()` in `utils/hub.py/cached_path`, so as a workaround I can globally and permanently force `local_files_only` in my use case.
Fixing only `PreTrainedTokenizerBase.from_pretrained` is not enough; `_from_pretrained` doesn't pass `local_files_only` to `AutoConfig.from_pretrained` either.
I'm working on a fix for it.
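For illustration only (the class and method names below are simplified stand-ins, not the actual `transformers` internals), the bug pattern described here is a classmethod that accepts a flag in its signature but forgets to forward it to the helper it delegates to, so the helper silently falls back to its default:

```python
class Loader:
    @classmethod
    def from_pretrained(cls, name, local_files_only=False, **kwargs):
        # BUG: local_files_only is consumed by this signature but never
        # forwarded, so _from_pretrained sees its default (False) and
        # would still hit the network.
        return cls._from_pretrained(name, **kwargs)

    @classmethod
    def _from_pretrained(cls, name, local_files_only=False, **kwargs):
        return {"name": name, "local_files_only": local_files_only}


# The flag is silently dropped on the way down:
print(Loader.from_pretrained("openai/clip-vit-base-patch32", local_files_only=True))
# {'name': 'openai/clip-vit-base-patch32', 'local_files_only': False}
```

The fix is simply to forward the flag explicitly: `cls._from_pretrained(name, local_files_only=local_files_only, **kwargs)`.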
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18385/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18384
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18384/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18384/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18384/events
|
https://github.com/huggingface/transformers/issues/18384
| 1,323,691,246
|
I_kwDOCUB6oc5O5ezu
| 18,384
|
HPO could be enabled by a HPO configuration file(yaml or json) instead of adding code explicitly in example.py
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @yao-matrix @kding1 any comment?",
"The examples are meant to stay simple and readable so users can change them to their need (cause as mentioned in several places, they are just examples, not production apps). That's why we set the bar at training + evaluation and they don't support hyperparameters search out of the box.",
"> \r\n\r\nAgree that examples should stay simple and clear for easily ramp up. @sgugger , our question is actually \"is it needed to unify HF's HPO configuration and run across different backends(Optuna, SigOpt etc.) ?\", by supplying an unified configure interface(one example is the yaml style @sywangyi proposed). For data scientist, they can decouple their applications code from specific HPO tool; for HPO tool developer, easier for them to integrate into HF ecosystem. ",
"@sgugger thanks for your comment, do you think it's necessary to apply the same yaml configuration in case the user is not quite familiar with so much different HPO backend?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### Feature request
Currently, HPO has to be enabled by adding code to the example script (e.g. run_glue.py) to specify the HPO backend, metric, and hyperparameter space, like:
```python
def glue_hp_space(trial):
    return [
        {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
        {
            "categorical_values": ["16", "32", "64", "128"],
            "name": "per_device_train_batch_size",
            "type": "categorical",
        },
    ]


def model_init(trial):
    return AutoModelForSequenceClassification.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )


# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    model_init=model_init,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

best_trial = trainer.hyperparameter_search(
    direction="maximize", backend="sigopt", hp_space=glue_hp_space, n_trials=2
)
```
Could we instead add a configuration file and pass it as an argument to the Trainer, for easier use of HPO?
The YAML could look like:
```yaml
# HPO.yaml
name: hg_glue_optimization
backend: sigopt
metrics:
  - name: accuracy
    strategy: optimize
    objective: maximize
parameters:
  - name: learning_rate
    type: double
    bounds:
      min: 1e-6
      max: 1e-4
  - name: per_device_train_batch_size
    type: categorical
    categorical_values:
      - 16
      - 32
      - 64
      - 128
trials: 10
```
training_args could be responsible for parsing the YAML and converting it to the search-space format expected by the chosen backend (Optuna, SigOpt, Wandb, ...).
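A rough sketch of that conversion step, using a plain dict in place of the parsed YAML (all names here are illustrative, not an existing `transformers` API):

```python
def to_sigopt_space(config):
    """Convert a parsed HPO config (dict) into a SigOpt-style parameter list."""
    space = []
    for p in config["parameters"]:
        entry = {"name": p["name"], "type": p["type"]}
        if "bounds" in p:
            entry["bounds"] = {"min": p["bounds"]["min"], "max": p["bounds"]["max"]}
        if "categorical_values" in p:
            # SigOpt expects categorical values as strings
            entry["categorical_values"] = [str(v) for v in p["categorical_values"]]
        space.append(entry)
    return space


# Stand-in for the result of parsing HPO.yaml
config = {
    "backend": "sigopt",
    "parameters": [
        {"name": "learning_rate", "type": "double",
         "bounds": {"min": 1e-6, "max": 1e-4}},
        {"name": "per_device_train_batch_size", "type": "categorical",
         "categorical_values": [16, 32, 64, 128]},
    ],
}
print(to_sigopt_space(config))
```

A similar adapter per backend (Optuna, Wandb, ...) would let the example scripts stay backend-agnostic.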
### Motivation
I am always frustrated when I need to modify the example code to enable HPO, and then change it again for each different HPO backend.
### Your contribution
I could help submit PR after all are aligned on this point
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18384/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18383
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18383/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18383/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18383/events
|
https://github.com/huggingface/transformers/issues/18383
| 1,323,651,975
|
I_kwDOCUB6oc5O5VOH
| 18,383
|
Encoder Decoder Model gives same generation results after finetuning
|
{
"login": "tqnwhz",
"id": 39870399,
"node_id": "MDQ6VXNlcjM5ODcwMzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/39870399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqnwhz",
"html_url": "https://github.com/tqnwhz",
"followers_url": "https://api.github.com/users/tqnwhz/followers",
"following_url": "https://api.github.com/users/tqnwhz/following{/other_user}",
"gists_url": "https://api.github.com/users/tqnwhz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqnwhz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqnwhz/subscriptions",
"organizations_url": "https://api.github.com/users/tqnwhz/orgs",
"repos_url": "https://api.github.com/users/tqnwhz/repos",
"events_url": "https://api.github.com/users/tqnwhz/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqnwhz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"> Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n> \r\n> Thanks!\r\n\r\nOf course. I'll migrate it and close this issue. Thanks for your reply!"
] | 1,659
| 1,659
| 1,659
|
NONE
| null |
β Questions & Help
Hi, everyone. I am using transformers (v4.20.1) to build a seq2seq model for multi-label classification. However, the model always produces the same generation results after finetuning. I found two related issues on GitHub but there seems to be no solution yet.
Here's the code; the main logic is in the `__init__` and `training_step` methods.
```python
import logging

import torch
import torch.nn as nn
import pytorch_lightning as pl
from transformers import EncoderDecoderConfig, EncoderDecoderModel, RobertaConfig

logger = logging.getLogger(__name__)


class Model(pl.LightningModule):
    def __init__(self,
                 decoder_tokenizer,
                 lr=1e-4,
                 beam_size=1,
                 num_decoder_layers=12):
        super().__init__()
        self.pad_id = decoder_tokenizer.pad_token_id
        self.bos_id = decoder_tokenizer.bos_token_id
        self.eos_id = decoder_tokenizer.eos_token_id
        self.lr = lr
        self.beam_size = beam_size
        self.decoder_tokenizer = decoder_tokenizer

        encoder_config = RobertaConfig.from_pretrained('roberta-base')
        decoder_config = RobertaConfig(bos_token_id=self.bos_id,
                                       eos_token_id=self.eos_id,
                                       pad_token_id=self.pad_id)
        decoder_config.num_hidden_layers = num_decoder_layers
        self.config = EncoderDecoderConfig.from_encoder_decoder_configs(
            encoder_config, decoder_config)
        self.model = EncoderDecoderModel(self.config)
        self.decoder = self.model.get_decoder()
        self.decoder.resize_token_embeddings(decoder_tokenizer.vocab_size)
        self.model.config.vocab_size = self.model.config.decoder.vocab_size
        nn.init.xavier_uniform_(self.decoder.resize_token_embeddings().weight)
        self.model.config.decoder_start_token_id = self.bos_id
        self.model.config.pad_token_id = self.pad_id

    def training_step(self, batch, batch_idx):
        '''batch is a dict containing
        input_ids: ids for the input sequence
        attention_mask: mask for the input sequence
        labels: ids for the output sequence
        '''
        self.model.train()
        loss = self.model(**batch).loss
        self.log("train_loss",
                 loss,
                 on_step=True,
                 on_epoch=True,
                 prog_bar=True,
                 logger=True)
        self.model.eval()
        with torch.no_grad():
            predictions = self.model.generate(
                input_ids=batch['input_ids'],
                attention_mask=batch['attention_mask'],
                num_beams=self.beam_size,
                min_length=7,
                max_length=7,
                no_repeat_ngram_size=1,
                do_sample=False).cpu().numpy().tolist()
            labels = batch['labels'].cpu().numpy().tolist()
            for pred, label in zip(predictions, labels):
                logger.info(f'pred: {pred}, label: {label}')
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=self.lr)
        return optimizer
```
The phenomenon is:
- At the beginning of training, a sanity check runs. I sampled some generation results (shown below); the freshly initialized model is able to generate different predictions.
```
INFO: idx: 0, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 122, 153, 174, 1161, 1618, 102, -100, -100, -100]
INFO: idx: 6, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 338, 498, 587, 2905, 102, -100, -100, -100, -100]
INFO: idx: 7, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 109, 112, 143, 164, 278, 973, 102, -100, -100]
INFO: idx: 9, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 109, 112, 116, 137, 174, 260, 102, -100, -100]
INFO: idx: 10, pred: [101, 1595, 1438, 1135, 3886, 3698, 1406], label: [101, 107, 115, 119, 123, 310, 431, 102, -100, -100]
```
- After finetuning for only 1 update, the model starts to generate identical results, for both trained samples and unseen validation samples, until the end of training (30 epochs).
```
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 107, 119, 123, 168, 243, 306, 102, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 109, 195, 230, 587, 1617, 2375, 102]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 107, 123, 559, 716, 1376, 102, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 111, 130, 168, 183, 256, 102, -100, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 122, 142, 222, 336, 2072, 2248, 102, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 147, 159, 355, 795, 102, -100, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 111, 113, 232, 261, 651, 849, 102, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 149, 150, 730, 1356, 2940, 102, -100, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 113, 179, 211, 523, 996, 1366, 102, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 154, 1002, 1040, 102, -100, -100, -100, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 254, 984, 1238, 102, -100, -100, -100, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 132, 289, 504, 730, 895, 2450, 102, -100]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 109, 137, 260, 303, 461, 888, 102]
INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 107, 131, 161, 205, 259, 763, 102, -100]
```
I've been stuck on this problem for almost a week and have found no solution yet. I checked the model architecture and did find cross-attention layers in the decoder. I've also checked the data format and related logic; everything works as expected, so I omitted that part for simplicity. Therefore I think the bug might be on the model side, but I haven't found useful information in the docs or via search.
Any kind of help is appreciated. Thanks very much!
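Not part of the original question, but a quick way to quantify the collapse described above is to count distinct predictions per batch; a minimal, self-contained sketch (the helper is hypothetical):

```python
def prediction_diversity(predictions):
    """Fraction of distinct sequences in a batch of token-id lists (1.0 = all different)."""
    unique = {tuple(p) for p in predictions}
    return len(unique) / len(predictions)


healthy = [[101, 5, 6], [101, 7, 8], [101, 9, 10]]
collapsed = [[101, 425, 1385]] * 3  # every sample decodes to the same sequence

print(prediction_diversity(healthy))    # 1.0
print(prediction_diversity(collapsed))  # ~0.33: the batch collapsed to one output
```

Logging this metric per step makes it easy to pinpoint exactly which update the collapse starts at.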
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18383/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18382
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18382/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18382/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18382/events
|
https://github.com/huggingface/transformers/pull/18382
| 1,323,592,010
|
PR_kwDOCUB6oc48YycV
| 18,382
|
Fix custom config loading for clip model
|
{
"login": "avinashsai",
"id": 22453634,
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avinashsai",
"html_url": "https://github.com/avinashsai",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18382). All of your documentation changes will be reflected on that endpoint.",
"Hey @avinashsai, I don't understand what you're trying to do: two lines above your changes is a `self.config = config`, so you're working with the same object.\r\n\r\nCould you shed some light on what doesn't work so I can help you out? Thanks :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
# What does this PR do?
Fixes # (issue)
In the CLIP model, `CLIPTextTransformer` and `CLIPVisionTransformer` load the original CLIP config file even though custom parameters are given in the new config file.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18382/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18382",
"html_url": "https://github.com/huggingface/transformers/pull/18382",
"diff_url": "https://github.com/huggingface/transformers/pull/18382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18382.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18381
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18381/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18381/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18381/events
|
https://github.com/huggingface/transformers/issues/18381
| 1,323,515,415
|
I_kwDOCUB6oc5O4z4X
| 18,381
|
Summarisation example fails to run on given example. Missing positional argument TypeError
|
{
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Aha, that's one for @sgugger, linked to https://github.com/huggingface/transformers/pull/18325",
"You need to use the main version of Transformers to use the main version of the example scripts. You can find the examples for v4.21.0 [here](https://github.com/huggingface/transformers/tree/v4.21.0/examples).",
"Thank you @sgugger @LysandreJik , it works perfectly now",
"hey, sorry to bother you again @sgugger , but, this is the output I'm getting when I'm running the script on my own dataset\r\n```\r\nAll the weights of BartForConditionalGeneration were initialized from the model checkpoint at ainize/bart-base-cnn.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use BartForConditionalGeneration for predictions without further training.\r\nRunning tokenizer on dataset: 100% 1/1 [00:00<00:00, 215.96ba/s]\r\nRunning tokenizer on dataset: 100% 1/1 [00:00<00:00, 342.76ba/s]\r\n08/01/2022 12:58:50 - INFO - __main__ - Sample 27 of the training set: {'input_ids': [0, 6323, 34638, 251, 2788, 2], 'attention_mask': [1, 1, 1, 1, 1, 1], 'labels': [0, 12465, 765, 2788, 2]}.\r\n08/01/2022 12:58:52 - INFO - __main__ - ***** Running training *****\r\n08/01/2022 12:58:52 - INFO - __main__ - Num examples = 32\r\n08/01/2022 12:58:52 - INFO - __main__ - Num Epochs = 3\r\n08/01/2022 12:58:52 - INFO - __main__ - Instantaneous batch size per device = 8\r\n08/01/2022 12:58:52 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 8\r\n08/01/2022 12:58:52 - INFO - __main__ - Gradient Accumulation steps = 1\r\n08/01/2022 12:58:52 - INFO - __main__ - Total optimization steps = 12\r\n 33% 4/12 [00:01<00:01, 4.60it/s]08/01/2022 12:58:54 - INFO - absl - Using default tokenizer.\r\nTraceback (most recent call last):\r\n File \"/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py\", line 764, in <module>\r\n main()\r\n File \"/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py\", line 711, in main\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n File \"/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py\", line 711, in <dictcomp>\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\nAttributeError: 'numpy.float64' object has no attribute 'mid'\r\n 33% 4/12 [00:01<00:03, 2.11it/s]\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py\", line 43, in main\r\n args.func(args)\r\n File \"/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py\", line 826, in launch_command\r\n simple_launcher(args)\r\n File \"/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py\", line 358, in simple_launcher\r\n raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'ainize/bart-base-cnn', '--train_file', '/content/test.csv', '--validation_file', '/content/test.csv', '--summary_column', 'Summary', '--text_column', 'Text', '--output_dir', '/content/model']' returned non-zero exit status 1.\r\n```\r\n\r\nThe code I'm using to launch the script is 
\r\n```\r\n!accelerate launch /content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py \\\r\n --model_name_or_path ainize/bart-base-cnn \\\r\n --train_file /content/test.csv \\\r\n --validation_file /content/test.csv \\\r\n --summary_column Summary \\\r\n --text_column Text \\\r\n --output_dir /content/model\r\n```\r\nthe test.csv file is below\r\n[test.csv](https://github.com/huggingface/transformers/files/9234094/test.csv)\r\n\r\n",
"Yes, it looks like `evaluate` decided to break the rouge metric. Sending a fix!"
] | 1,659
| 1,659
| 1,659
|
NONE
| null |
### System Info
```
- `transformers` version: 4.21.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger @pati
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune my own summarization model based on the example in `transformers/examples/pytorch/summarization/run_summarization_no_trainer.py`, but it fails even on the example command given in the repository. Link to [Google Colab to reproduce the error](https://colab.research.google.com/drive/1Jk7-1hC6wAac8Ejh57URcalcRzzrC2Nd?usp=sharing)
```
!accelerate launch /content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
```
I'm getting the following error
```
Traceback (most recent call last):
File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 763, in <module>
main()
File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 493, in main
desc="Running tokenizer on dataset",
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 790, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 790, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2405, in map
desc=desc,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2779, in _map_single
offset=offset,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 474, in preprocess_function
labels = tokenizer(text_target=targets, max_length=max_target_length, padding=padding, truncation=True)
TypeError: __call__() missing 1 required positional argument: 'text'
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 826, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 358, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 't5-small', '--dataset_name', 'cnn_dailymail', '--dataset_config', '3.0.0', '--source_prefix', 'summarize: ', '--output_dir', '/root/tmp/tst-summarization']' returned non-zero exit status 1.
```
### Expected behavior
The model should start training
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18381/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18380
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18380/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18380/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18380/events
|
https://github.com/huggingface/transformers/issues/18380
| 1,323,507,420
|
I_kwDOCUB6oc5O4x7c
| 18,380
|
LayoutLM-based visual question answering model, weights, and pipeline
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil as well as @NielsRogge ",
"Thank you for this proposal !\r\n\r\nIt is really well thought out and everything you mention is pertinent.\r\nAdding support would be really awesome !\r\n\r\n- We probably need to use VisualQuestionAnswering for this one. What defines a pipeline is the set of input/output so as far as I understand that would fit (image+question_text, output is a list of strings with scores attached, in decreasin order of `top_k`). Actually for this one, we might be able to return the bbox in addition so that we could visually show where the information is in the original document. (Optionally extra information is OK, but pipelines can't change the core input/output so that users can easily switch between models/architectures).\r\n- As far as I understand, the main reason we haven't already included the pipeline is because of the OCR. I think we actually can include it in the pipeline if it's easy to install (single dependency addition) and if we provide a clear error message when it's missing. We're already using `ffmpeg` for audio pipelines when it's missing, and `kenlm` when there's a n-gram layer with the model. Those are all pipeline specific so not necessary for `transformers` but they do make users' lives easier. \r\n- For differenciating between layout and other models, we tend not to focus on actual model names (like `layoutLM` but more on model `ForXX` name (`ForDocumentQuestionAnswering` maybe @NielsRogge ?), as they should have consistent API. So when a new model comes around and implements the same API, there's no additional work for the pipeline (99% of the time at least). 
\r\n\r\nFeel free to start the PRs and ping me as early on as you want (so I can help with the details).\r\n\r\nHere is doc on adding new pipelines, most of it is not necessary since `vqa` already exists but it should help with the overall design.\r\nhttps://huggingface.co/docs/transformers/v4.21.0/en/add_new_pipeline#adding-it-to-the-list-of-supported-tasks2\r\n\r\nCheers, and thanks for the proposal !",
"@Narsil that's great to hear! I will start sending pieces as PRs and tag you for feedback.",
"Re-opening this as we're still working on the pipeline."
] | 1,659
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### Feature request
Question answering is an important problem for both text and documents. The question-answering pipeline makes it very easy to work with plain text and includes helpful utilities (like [post-processing start/end candidates](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L510)). It'd be amazing for question answering on documents to be _that_ easy.
The primary goal of this feature request is to extend either the question answering or visual question answering pipeline to be as easy to use as, for example, the [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) model. LayoutLM is a great model architecture for solving this problem and @NielsRogge's [notebook example](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb) even shows you how to fine tune the model for this use case. I think it'd be very powerful for a number of use cases if it were _as easy_ to use LayoutLM for document question answering as it is to use BERT-like models for text question answering.
This will require a few additions, all of which I have working code for that I'd be happy to contribute:
1. Extend the `QuestionAnsweringPipeline` or `VisualQuestionAnsweringPipeline` pipeline to support document inputs. I _think_ the latter would be the right pipeline, since it already takes an image as input, but ideally could also take a list of words+bounding boxes as input (in case users want to run their own OCR).
2. Hook up `LayoutLMv2ForQuestionAnswering` and `LayoutLMv3ForQuestionAnswering` to the pipeline. Ideally, there would also be `LayoutLMForQuestionAnswering`, since v2 and v3 are not licensed for commercial use.
3. Publish pre-trained model weights with an easy-to-follow model card. I found a few examples of fine-tuned LayoutLM QA models (e.g. [this](https://huggingface.co/tiennvcs/layoutlmv2-base-uncased-finetuned-docvqa)), but could not get them to run easily. For example, the "hosted inference API" UI throws an error when you try to run it. I think the visual question answering UI (which lets you load an image) might be a better fit. But I am very open to discussion on what the best experience would be.
### Motivation
When we started using transformers, we saw the `question-answering` pipeline and were blown away by how easy it was to use for text-based extractive QA. We were hoping it'd be "that easy" for document QA, but couldn't find pre-trained weights or a pipeline implementation. Thanks to [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb), however, we were able to fine-tune our own model and get it running. That inspired us to wonder -- could we make it _that_ easy for document QA too?
### Your contribution
We have working code for all of the proposed feature requests that we'd be happy to contribute. We also have a pre-trained model that we're happy to upload along with an easy-to-follow model card. Since there are a few changes proposed here, it might be worthwhile to break this into multiple issues/PRs, or we can do it all at once (however works best within your processes).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18380/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18380/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18379
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18379/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18379/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18379/events
|
https://github.com/huggingface/transformers/issues/18379
| 1,323,450,730
|
I_kwDOCUB6oc5O4kFq
| 18,379
|
raise RuntimeError("Failed to load audio from {}".format(filepath))
|
{
"login": "mehrdad78",
"id": 46048846,
"node_id": "MDQ6VXNlcjQ2MDQ4ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46048846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mehrdad78",
"html_url": "https://github.com/mehrdad78",
"followers_url": "https://api.github.com/users/mehrdad78/followers",
"following_url": "https://api.github.com/users/mehrdad78/following{/other_user}",
"gists_url": "https://api.github.com/users/mehrdad78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mehrdad78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mehrdad78/subscriptions",
"organizations_url": "https://api.github.com/users/mehrdad78/orgs",
"repos_url": "https://api.github.com/users/mehrdad78/repos",
"events_url": "https://api.github.com/users/mehrdad78/events{/privacy}",
"received_events_url": "https://api.github.com/users/mehrdad78/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"> feedback response with error code\r\n\r\nI didn't get what do you mean? ",
"Hey @mehrdad78, could you share the full stack trace?",
"> Hey @mehrdad78, could you share the full stack trace?\r\n\r\nYes,sure.\r\nhere is my colab notebook:[https://colab.research.google.com/drive/1jNdztD-Kkk8MCkzPLlLXVr0Z2jSgpkM8?usp=sharing](url)\r\nand the stack trace:\r\n\r\n```\r\n`_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=True,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=100,\r\nevaluation_strategy=steps,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=[],\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=2,\r\ngradient_checkpointing=True,\r\ngreater_is_better=None,\r\ngroup_by_length=True,\r\nhalf_precision_backend=auto,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=0.0003,\r\nlength_column_name=input_length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=-1,\r\nlog_level_replica=-1,\r\nlog_on_each_node=True,\r\nlogging_dir=/content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/runs/Aug01_10-37-50_87323b63b7db,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=500,\r\nlogging_strategy=steps,\r\nlr_scheduler_type=linear,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=15.0,\r\noptim=adamw_hf,\r\noutput_dir=/content/t
ransformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo,\r\noverwrite_output_dir=True,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=16,\r\nprediction_loss_only=False,\r\npush_to_hub=True,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=/content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo,\r\nsave_on_each_node=False,\r\nsave_steps=400,\r\nsave_strategy=steps,\r\nsave_total_limit=3,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntf32=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=500,\r\nweight_decay=0.0,\r\nxpu_backend=None,\r\n)\r\nDownloading builder script: 26.4kB [00:00, 24.1MB/s] \r\nDownloading metadata: 174kB [00:00, 88.1MB/s] \r\nDownloading and preparing dataset common_voice/ru (download: 3.40 GiB, generated: 4.88 GiB, post-processed: Unknown size, total: 8.29 GiB) to /root/.cache/huggingface/datasets/common_voice/ru/6.1.0/a1dc74461f6c839bfe1e8cf1262fd4cf24297e3fbd4087a711bd090779023a5e...\r\nDownloading data: 100% 3.66G/3.66G [01:57<00:00, 31.0MB/s]\r\nDataset common_voice downloaded and prepared to /root/.cache/huggingface/datasets/common_voice/ru/6.1.0/a1dc74461f6c839bfe1e8cf1262fd4cf24297e3fbd4087a711bd090779023a5e. 
Subsequent calls will reuse this data.\r\n08/01/2022 10:43:49 - WARNING - datasets.builder - Reusing dataset common_voice (/root/.cache/huggingface/datasets/common_voice/ru/6.1.0/a1dc74461f6c839bfe1e8cf1262fd4cf24297e3fbd4087a711bd090779023a5e)\r\nremove special characters from datasets: 100% 23444/23444 [00:03<00:00, 7780.78ex/s]\r\nremove special characters from datasets: 100% 8007/8007 [00:01<00:00, 7715.10ex/s]\r\nhttps://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp2g_x442y\r\nDownloading config.json: 100% 1.73k/1.73k [00:00<00:00, 2.68MB/s]\r\nstoring https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/8508c73cd595eb416a1d517b90762416c0bc6cfbef529578079aeae4d8c14336.7581ed2ee0c677f1e933180df51bd1a668c4a2b6d5fd1297d32069373dac097c\r\ncreating metadata file for /root/.cache/huggingface/transformers/8508c73cd595eb416a1d517b90762416c0bc6cfbef529578079aeae4d8c14336.7581ed2ee0c677f1e933180df51bd1a668c4a2b6d5fd1297d32069373dac097c\r\nloading configuration file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/8508c73cd595eb416a1d517b90762416c0bc6cfbef529578079aeae4d8c14336.7581ed2ee0c677f1e933180df51bd1a668c4a2b6d5fd1297d32069373dac097c\r\nModel config Wav2Vec2Config {\r\n \"_name_or_path\": \"facebook/wav2vec2-large-xlsr-53\",\r\n \"activation_dropout\": 0.0,\r\n \"adapter_kernel_size\": 3,\r\n \"adapter_stride\": 2,\r\n \"add_adapter\": false,\r\n \"apply_spec_augment\": true,\r\n \"architectures\": [\r\n \"Wav2Vec2ForPreTraining\"\r\n ],\r\n \"attention_dropout\": 0.1,\r\n \"bos_token_id\": 1,\r\n \"classifier_proj_size\": 256,\r\n \"codevector_dim\": 768,\r\n \"contrastive_logits_temperature\": 0.1,\r\n \"conv_bias\": true,\r\n \"conv_dim\": [\r\n 512,\r\n 512,\r\n 512,\r\n 
512,\r\n 512,\r\n 512,\r\n 512\r\n ],\r\n \"conv_kernel\": [\r\n 10,\r\n 3,\r\n 3,\r\n 3,\r\n 3,\r\n 2,\r\n 2\r\n ],\r\n \"conv_stride\": [\r\n 5,\r\n 2,\r\n 2,\r\n 2,\r\n 2,\r\n 2,\r\n 2\r\n ],\r\n \"ctc_loss_reduction\": \"sum\",\r\n \"ctc_zero_infinity\": false,\r\n \"diversity_loss_weight\": 0.1,\r\n \"do_stable_layer_norm\": true,\r\n \"eos_token_id\": 2,\r\n \"feat_extract_activation\": \"gelu\",\r\n \"feat_extract_dropout\": 0.0,\r\n \"feat_extract_norm\": \"layer\",\r\n \"feat_proj_dropout\": 0.1,\r\n \"feat_quantizer_dropout\": 0.0,\r\n \"final_dropout\": 0.0,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout\": 0.1,\r\n \"hidden_size\": 1024,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 4096,\r\n \"layer_norm_eps\": 1e-05,\r\n \"layerdrop\": 0.1,\r\n \"mask_channel_length\": 10,\r\n \"mask_channel_min_space\": 1,\r\n \"mask_channel_other\": 0.0,\r\n \"mask_channel_prob\": 0.0,\r\n \"mask_channel_selection\": \"static\",\r\n \"mask_feature_length\": 10,\r\n \"mask_feature_min_masks\": 0,\r\n \"mask_feature_prob\": 0.0,\r\n \"mask_time_length\": 10,\r\n \"mask_time_min_masks\": 2,\r\n \"mask_time_min_space\": 1,\r\n \"mask_time_other\": 0.0,\r\n \"mask_time_prob\": 0.075,\r\n \"mask_time_selection\": \"static\",\r\n \"model_type\": \"wav2vec2\",\r\n \"num_adapter_layers\": 3,\r\n \"num_attention_heads\": 16,\r\n \"num_codevector_groups\": 2,\r\n \"num_codevectors_per_group\": 320,\r\n \"num_conv_pos_embedding_groups\": 16,\r\n \"num_conv_pos_embeddings\": 128,\r\n \"num_feat_extract_layers\": 7,\r\n \"num_hidden_layers\": 24,\r\n \"num_negatives\": 100,\r\n \"output_hidden_size\": 1024,\r\n \"pad_token_id\": 0,\r\n \"proj_codevector_dim\": 768,\r\n \"tdnn_dilation\": [\r\n 1,\r\n 2,\r\n 3,\r\n 1,\r\n 1\r\n ],\r\n \"tdnn_dim\": [\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 1500\r\n ],\r\n \"tdnn_kernel\": [\r\n 5,\r\n 3,\r\n 3,\r\n 1,\r\n 1\r\n ],\r\n \"transformers_version\": \"4.22.0.dev0\",\r\n 
\"use_weighted_layer_sum\": false,\r\n \"vocab_size\": 32,\r\n \"xvector_output_dim\": 512\r\n}\r\n\r\n100% 1/1 [00:00<00:00, 2.69ba/s]\r\n100% 1/1 [00:00<00:00, 8.21ba/s]\r\nDidn't find file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/tokenizer_config.json. We won't load it.\r\nDidn't find file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/added_tokens.json. We won't load it.\r\nDidn't find file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/special_tokens_map.json. We won't load it.\r\nloading file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/vocab.json\r\nloading file None\r\nloading file None\r\nloading file None\r\nAdding <s> to the vocabulary\r\nAdding </s> to the vocabulary\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nhttps://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/preprocessor_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpwqmsvu6p\r\nDownloading preprocessor_config.json: 100% 212/212 [00:00<00:00, 360kB/s]\r\nstoring https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/preprocessor_config.json in cache at /root/.cache/huggingface/transformers/281aea0033110ab616ee4c2840ee83ed30496bb549916b8aec6c5668109f9e79.d4484dc1c81456a2461485e7168b04347a7b9a4e3b1ef3aba723323b33e12326\r\ncreating metadata file for /root/.cache/huggingface/transformers/281aea0033110ab616ee4c2840ee83ed30496bb549916b8aec6c5668109f9e79.d4484dc1c81456a2461485e7168b04347a7b9a4e3b1ef3aba723323b33e12326\r\nloading feature extractor configuration file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/preprocessor_config.json from cache at 
/root/.cache/huggingface/transformers/281aea0033110ab616ee4c2840ee83ed30496bb549916b8aec6c5668109f9e79.d4484dc1c81456a2461485e7168b04347a7b9a4e3b1ef3aba723323b33e12326\r\nFeature extractor Wav2Vec2FeatureExtractor {\r\n \"do_normalize\": true,\r\n \"feature_extractor_type\": \"Wav2Vec2FeatureExtractor\",\r\n \"feature_size\": 1,\r\n \"padding_side\": \"right\",\r\n \"padding_value\": 0,\r\n \"return_attention_mask\": true,\r\n \"sampling_rate\": 16000\r\n}\r\n\r\nhttps://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpio5rku8q\r\nDownloading pytorch_model.bin: 100% 1.18G/1.18G [00:19<00:00, 65.5MB/s]\r\nstoring https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079\r\ncreating metadata file for /root/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079\r\nloading weights file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079\r\nSome weights of the model checkpoint at facebook/wav2vec2-large-xlsr-53 were not used when initializing Wav2Vec2ForCTC: ['project_hid.bias', 'project_hid.weight', 'quantizer.weight_proj.weight', 'quantizer.weight_proj.bias', 'project_q.weight', 'project_q.bias', 'quantizer.codevectors']\r\n- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-large-xlsr-53 and are newly initialized: ['lm_head.weight', 'lm_head.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\npreprocess datasets: 0% 0/23444 [00:00<?, ?ex/s]\r\nTraceback (most recent call last):\r\n File \"run_speech_recognition_ctc.py\", line 769, in <module>\r\n main()\r\n File \"run_speech_recognition_ctc.py\", line 628, in main\r\n desc=\"preprocess datasets\",\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py\", line 790, in map\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py\", line 790, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2405, in map\r\n desc=desc,\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 524, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py\", line 480, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2756, in _map_single\r\n example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", 
line 2655, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2347, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"run_speech_recognition_ctc.py\", line 609, in prepare_dataset\r\n sample = batch[audio_column_name]\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 123, in __getitem__\r\n value = decode_nested_example(self.features[key], value) if value is not None else None\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/features/features.py\", line 1260, in decode_nested_example\r\n return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/features/audio.py\", line 144, in decode_example\r\n array, sampling_rate = self._decode_mp3(file if file else path)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/features/audio.py\", line 293, in _decode_mp3\r\n array, sampling_rate = torchaudio.load(path_or_file, format=\"mp3\")\r\n File \"/usr/local/lib/python3.7/dist-packages/torchaudio/backend/sox_io_backend.py\", line 227, in load\r\n return _fallback_load(filepath, frame_offset, num_frames, normalize, channels_first, format)\r\n File \"/usr/local/lib/python3.7/dist-packages/torchaudio/backend/sox_io_backend.py\", line 29, in _fail_load\r\n raise RuntimeError(\"Failed to load audio from {}\".format(filepath))\r\nRuntimeError: Failed to load audio from /root/.cache/huggingface/datasets/downloads/extracted/707cd877a91cbe3455d83b9f62c3656e094f633f257743683372c05f4620af3b/cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3`\r\n```",
"Have you ever encountered this error @albertvillanova @mariosasko ?",
"Hi @mehrdad78, thanks for reporting (and thanks @LysandreJik for drawing my attention to this).\r\n\r\nI have manually checked the TAR file, its content and specifically the MP3 file raising the error: `cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3`\r\n\r\nI can load it without any problem (our Datasets library, under the hood uses `torchaudio` for mp3 files):\r\n```python\r\nIn [1]: import torchaudio\r\n\r\nIn [2]: path = \"./data/common_voice/ru/cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3\"\r\n\r\nIn [3]: data = torchaudio.load(path, format=\"mp3\")\r\n\r\nIn [4]: data\r\nOut[4]: \r\n(tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, ..., -2.6095e-04,\r\n 3.2425e-05, 8.8751e-05]]),\r\n 48000)\r\n``` \r\n\r\nThis makes me think that maybe the source of your issue is `sox`. This is a non-Python dependency that must be installed manually using your operating system package manager, e.g. \r\n```shell\r\nsudo apt-get install sox\r\n```\r\n\r\nYou have the installation instruction of Datasets with support for Audio in our docs: [Installation > Audio](https://huggingface.co/docs/datasets/installation#audio)",
"Issue opened in Datasets to raise a more actionable error message:\r\n- https://github.com/huggingface/datasets/issues/4776",
"> Hi @mehrdad78, thanks for reporting (and thanks @LysandreJik for drawing my attention to this).\r\n> \r\n> I have manually checked the TAR file, its content and specifically the MP3 file raising the error: `cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3`\r\n> \r\n> I can load it without any problem (our Datasets library, under the hood uses `torchaudio` for mp3 files):\r\n> \r\n> ```python\r\n> In [1]: import torchaudio\r\n> \r\n> In [2]: path = \"./data/common_voice/ru/cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3\"\r\n> \r\n> In [3]: data = torchaudio.load(path, format=\"mp3\")\r\n> \r\n> In [4]: data\r\n> Out[4]: \r\n> (tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, ..., -2.6095e-04,\r\n> 3.2425e-05, 8.8751e-05]]),\r\n> 48000)\r\n> ```\r\n> \r\n> This makes me think that maybe the source of your issue is `sox`. This is a non-Python dependency that must be installed manually using your operating system package manager, e.g.\r\n> \r\n> ```shell\r\n> sudo apt-get install sox\r\n> ```\r\n> \r\n> You have the installation instruction of Datasets with support for Audio in our docs: [Installation > Audio](https://huggingface.co/docs/datasets/installation#audio)\r\n\r\nThank you.\r\nI try it and report the result. ",
"I have just read that apparently there is a backend change in latest `torchaudio` release.\r\n\r\nTherefore, `torchaudio` version should be restricted so that it continues using `sox` backend, as expected by `datasets`.\r\n```\r\npip install \"torchaudio<0.12.0\"\r\n```\r\n\r\nWe should address this issue to support latest torchaudio.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> \r\n\r\n@albertvillanova Solves my issue, thank you."
] | 1,659
| 1,691
| 1,662
|
NONE
| null |
### System Info
I want to run
`run_speech_recognition_ctc.py`
but I get the following error when running the single-GPU CTC script.
```bash
python run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
	--dataset_config_name="tr" \
	--output_dir="./wav2vec2-common_voice-tr-demo" \
	--overwrite_output_dir \
	--num_train_epochs="15" \
	--per_device_train_batch_size="16" \
	--gradient_accumulation_steps="2" \
	--learning_rate="3e-4" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--length_column_name="input_length" \
	--save_steps="400" \
	--eval_steps="100" \
	--layerdrop="0.0" \
	--save_total_limit="3" \
	--freeze_feature_encoder \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" β % β β οΏ½ \
	--fp16 \
	--group_by_length \
	--push_to_hub \
	--do_train --do_eval
```
The error:
```
raise RuntimeError("Failed to load audio from {}".format(filepath))
RuntimeError: Failed to load audio from /root/.cache/huggingface/datasets/downloads/extracted/05be0c29807a73c9b099873d2f5975dae6d05e9f7d577458a2466ecb9a2b0c6b/cv-corpus-6.1-2020-12-11/tr/clips/common_voice_tr_17346025.mp3
```
### Who can help?
@patrickvonplaten @anton-l
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I just ran the steps described in the `examples` folder.
### Expected behavior
I just want the script to run and produce the expected results.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18379/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18378
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18378/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18378/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18378/events
|
https://github.com/huggingface/transformers/issues/18378
| 1,323,410,602
|
I_kwDOCUB6oc5O4aSq
| 18,378
|
NLLB-200 is too slow
|
{
"login": "Yakovtam",
"id": 23466613,
"node_id": "MDQ6VXNlcjIzNDY2NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23466613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yakovtam",
"html_url": "https://github.com/Yakovtam",
"followers_url": "https://api.github.com/users/Yakovtam/followers",
"following_url": "https://api.github.com/users/Yakovtam/following{/other_user}",
"gists_url": "https://api.github.com/users/Yakovtam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yakovtam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yakovtam/subscriptions",
"organizations_url": "https://api.github.com/users/Yakovtam/orgs",
"repos_url": "https://api.github.com/users/Yakovtam/repos",
"events_url": "https://api.github.com/users/Yakovtam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yakovtam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### Feature request
When I use the model 'facebook/nllb-200-distilled-600M' for translation, each request takes about 0.5 seconds, and it does not appear to be async.
I would like to get this down to about 0.1 seconds and make it async. Can anyone help? Thanks!
### Motivation
When I use the model 'facebook/nllb-200-distilled-600M' for translation, each request takes about 0.5 seconds, and it does not appear to be async.
I would like to get this down to about 0.1 seconds and make it async. Can anyone help? Thanks!
### Your contribution
When I use the model 'facebook/nllb-200-distilled-600M' for translation, each request takes about 0.5 seconds, and it does not appear to be async.
I would like to get this down to about 0.1 seconds and make it async. Can anyone help? Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18378/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18377
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18377/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18377/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18377/events
|
https://github.com/huggingface/transformers/issues/18377
| 1,323,325,942
|
I_kwDOCUB6oc5O4Fn2
| 18,377
|
Getting Torchvision Transforms of `feature_extractor`s
|
{
"login": "sachinruk",
"id": 1410927,
"node_id": "MDQ6VXNlcjE0MTA5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinruk",
"html_url": "https://github.com/sachinruk",
"followers_url": "https://api.github.com/users/sachinruk/followers",
"following_url": "https://api.github.com/users/sachinruk/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions",
"organizations_url": "https://api.github.com/users/sachinruk/orgs",
"repos_url": "https://api.github.com/users/sachinruk/repos",
"events_url": "https://api.github.com/users/sachinruk/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinruk/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4235521865,
"node_id": "LA_kwDOCUB6oc78dO9J",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20extractors",
"name": "Feature extractors",
"color": "c2e0c6",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"cc @amyeroberts ",
"Thanks for adding this request @sachinruk :) \r\n\r\nRegarding the the mean and standard deviation values, can you raise a separate issue? \r\n\r\nWe're currently going through an update of the feature extractor class for images. At the moment, it's not possible to compose the individual transformations we apply, like `create_transform` does. It's something we were thinking about doing down the road and it's great to hear there's support for it! \r\n\r\nTo control what is and isn't applied by a `FeatureExtractor` you can toggle flags like e.g. `do_normalize` on the call. Note: there are some known bugs with this logic we're looking to address soon (see: [#15055](https://github.com/huggingface/transformers/issues/15055))\r\n\r\nIf you want to add transformations that aren't already applied by the `FeatureExtractor`, or work completely within `torchvision.transforms` there's a great example of a custom pipeline in our [example notebooks here](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### Feature request
Currently if I was to add any transforms in my training pipeline, it's not quite obvious how to do so. My usual process is to read through the source code and hope to find what I'm after.
What I'm after is something like in `timm` where you can do
```
config = resolve_data_config({}, model=model)
transform = create_transform(**config)
```
In the above I can append any torchvision transforms at will by inspecting transforms.
While I'm here it seems to be that the mean and standard deviations are 0.5 each for `VitFeatureExtractor` and same with Beit. Was this intentional as it might be incorrect if trained using imagenet data.
### Motivation
See above.
### Your contribution
Happy to contribute, but not sure where and how to start on unifying `FeatureExtractor` classes to return `torchvision.transforms`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18377/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18376
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18376/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18376/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18376/events
|
https://github.com/huggingface/transformers/issues/18376
| 1,323,192,153
|
I_kwDOCUB6oc5O3k9Z
| 18,376
|
Potential memory leakage of TensorFlow Swin model on kaggle!
|
{
"login": "innat",
"id": 17668390,
"node_id": "MDQ6VXNlcjE3NjY4Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/innat",
"html_url": "https://github.com/innat",
"followers_url": "https://api.github.com/users/innat/followers",
"following_url": "https://api.github.com/users/innat/following{/other_user}",
"gists_url": "https://api.github.com/users/innat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/innat/subscriptions",
"organizations_url": "https://api.github.com/users/innat/orgs",
"repos_url": "https://api.github.com/users/innat/repos",
"events_url": "https://api.github.com/users/innat/events{/privacy}",
"received_events_url": "https://api.github.com/users/innat/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @innat, thanks for flagging this! \r\n\r\nIn order to help figure out what's causing the problem and possible solutions, could you please answer the following questions: \r\n\r\n* Could you give the version of transformers you're using and any other relevant packages? \r\n* Does the notebook run successfully before entering it as a submission? If not, what line of code causes the failure? \r\n* Could you give details on the checkpoint used for convnext? Can you confirm the convnext model works with the exact same pipeline? \r\n* When you said other frameworks work fine - can you confirm that you were able to use the equivalent Swin PyTorch model on the same swin checkpoint? \r\n\r\nWhat would help most and answer all of these would be a saved kaggle notebook that you could share.",
"Can you help me build an app\n\nOn Mon, Aug 1, 2022, 6:47 PM amyeroberts ***@***.***> wrote:\n\n> Hi @innat <https://github.com/innat>, thanks for flagging this!\n>\n> In order to help figure out what's causing the problem and possible\n> solutions, could you please answer the following questions:\n>\n> - Could you give the version of transformers you're using and any\n> other relevant packages?\n> - Does the notebook run successfully before entering it as a\n> submission? If not, what line of code causes the failure?\n> - Could you give details on the checkpoint used for convnext? Can you\n> confirm the convnext model works with the exact same pipeline?\n> - When you said other frameworks work fine - can you confirm that you\n> were able to use the equivalent Swin PyTorch model on the same swin\n> checkpoint?\n>\n> What would help most and answer all of these would be a saved kaggle\n> notebook that you could share.\n>\n> β\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/18376#issuecomment-1201519353>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AXV67Y5E7M3352TID62TFHLVXAETFANCNFSM55DNTU2A>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"Hello @amyeroberts; thanks for checking. To answer all of your query, \r\n\r\n> Could you give the version of transformers you're using and any other relevant packages?\r\n1. It can be done, we will share a notebook file. Shortly, \r\n\r\n```python\r\ntf.__version__, tfa.__version__, transformers.__version__\r\n('2.6.4', '0.14.0', '4.22.0.dev0')\r\n```\r\n\r\n> Does the notebook run successfully before entering it as a submission? If not, what line of code causes the failure?\r\n2. I hardly used hugging face vision model. It's kind of my first look of these vision models for current [on-going kaggle competition](https://www.kaggle.com/competitions/google-universal-image-embedding). \r\n\r\n\r\n> Could you give details on the checkpoint used for convnext? Can you confirm the convnext model works with the exact same pipeline?\r\n3. Regarding the convnext checkpoint, yes, I can give you the exact file and reproducible code. And I confirm that hugging face convnext (larger one) runs fine whereas tiny swin gives OOM.\r\n\r\n\r\n> When you said other frameworks work fine - can you confirm that you were able to use the equivalent Swin PyTorch model on the same swin checkpoint?\r\n4. I should have elaborate more. I'm not PyTorch 1st user. Swin PyTorch model works fine is reported by other practitioners. \r\n\r\n---\r\n\r\n> What would help most and answer all of these would be a saved kaggle notebook that you could share.\r\n\r\n[Notebook Files](https://gist.github.com/innat/edf5d2c64d55e341efaee2884a8536e8)\r\n\r\nIt contains TensorFlow ConvNeXt and Swin Model pipelines and relevant package's version. The modeling strategy, saving, and submission process is followed according to the rules. The [evaluation page](https://www.kaggle.com/competitions/google-universal-image-embedding/overview/evaluation) also describes how they evaluate both framework and expected modeling approach. Hope it helps.\r\n",
"Thank you for your response.\n\nOn Tue, Aug 2, 2022, 12:53 PM Mohammed Innat ***@***.***>\nwrote:\n\n> Hello @amyeroberts <https://github.com/amyeroberts>; thanks for checking.\n> To answer all of your query,\n>\n> Could you give the version of transformers you're using and any other\n> relevant packages?\n>\n>\n> 1. It can be done, we will share a notebook file. Shortly,\n>\n> tf.__version__, tfa.__version__, transformers.__version__\n> ('2.6.4', '0.14.0', '4.22.0.dev0')\n>\n> Does the notebook run successfully before entering it as a submission? If\n> not, what line of code causes the failure?\n>\n>\n> 1. I hardly used hugging face vision model. It's kind of my first look\n> of these vision models for current on-going kaggle competition\n> <https://www.kaggle.com/competitions/google-universal-image-embedding>.\n>\n> Could you give details on the checkpoint used for convnext? Can you\n> confirm the convnext model works with the exact same pipeline?\n>\n>\n> 1. Regarding the convnext checkpoint, yes, I can give you the exact\n> file and reproducible code. And I confirm that hugging face convnext\n> (larger one) runs fine whereas tiny swin gives OOM.\n>\n> When you said other frameworks work fine - can you confirm that you were\n> able to use the equivalent Swin PyTorch model on the same swin checkpoint?\n>\n>\n> 1. I should have elaborate more. I'm not PyTorch 1st user. Swin\n> PyTorch model works fine is reported by other practitioners.\n>\n> ------------------------------\n>\n> What would help most and answer all of these would be a saved kaggle\n> notebook that you could share.\n>\n> Notebook Files\n> <https://gist.github.com/innat/edf5d2c64d55e341efaee2884a8536e8>\n>\n> It contains TensorFlow ConvNeXt and Swin Model pipelines and relevant\n> package's version. The modeling strategy, saving, and submission process is\n> followed according to the rules. 
The evaluation page\n> <https://www.kaggle.com/competitions/google-universal-image-embedding/overview/evaluation>\n> also describes how they evaluate both framework and expected modeling\n> approach. Hope it helps.\n>\n> β\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/18376#issuecomment-1202385475>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AXV67Y2MCP2O5YEEYZPM7TDVXED2DANCNFSM55DNTU2A>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"Hi @innat, thank you for all of your detailed responses and for sharing the notebook. \r\n\r\nI ran the notebook in kaggle and was able to save out the model with the checkpoint you used in your first example: `\"microsoft/swin-base-patch4-window7-224-in22k\"`\r\n\r\nThe notebook is here: https://www.kaggle.com/code/aeroberts4444/test-swin-saving/notebook\r\n\r\nAre you able to run the notebook you shared on kaggle? Or do you still hit the OOM?",
"@amyeroberts Thanks for running the code. \r\n\r\nYes, if you run the code that I shared, you won't see any OOM effect instant. As I said, I tried to submit two model from hugging-face (`\"microsoft/swin-tiny-patch4-window7-224\"` and `\"facebook/convnext-large-224-22k-1k\"`) to [this](https://www.kaggle.com/competitions/google-universal-image-embedding/overview/evaluation) competition. \r\n\r\nThe convnext is comparatively much larger than tiny swin, but in the inference time, the submission status always exceed the allowed compute resource for tiny swin but works fine for large convnext model. That's why I kind of have **weak assumption** that, there may be some issue with swin implementation. Also, later I realized that pytorch practitioners use `timm` version of swin model, and not from `huggingface` and no issue found about OOM with that. \r\n\r\nThis competition is unique (no training or test data is provided), so it might be hard to debug the root cause. Please let me know if its out of scope to address such issue. \r\n",
"Hi @innat, thanks for clarifying. It's certainly a problem if there's a memory leak and one we'd want to address. I'm going to continue to look into this. As you said, because of the nature of kaggle and the competition it can be hard to debug. As such, it might take some time before I manage to figure out if there's a problem, what it is and how to solve. ",
"@amyeroberts Thanks for your cordial support. I also informed competition host (googler), [HERE](https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/336534#1883421), but no response yet.\r\n\r\ncc @kfrancischen ",
"Hi @innat. As mentioned above it's quite hard to debug without know what's happening during submission and logs from the kaggle notebook. My current best guess is it's due to the size of the saved Swin model. \r\n\r\nUsing your script to create and save out a model, I looked at the sizes across different checkpoints: \r\n```\r\n\"microsoft/resnet-50\" # 23,561,152 params\r\n\"google/vit-base-patch16-224-in21k\" # 86,389,248 params\r\n\"microsoft/swin-base-patch4-window7-224-in22k\" # 86,743,224 params\r\n\"microsoft/swin-tiny-patch4-window7-224\" # 27,519,354 params\r\n\"facebook/convnext-large-224-22k-1k\" # 196,230,336 params\r\n```\r\n\r\n```\r\ntf_hf_classifier_convnext_large_224_22k_1k:\r\ntotal 25712\r\ndrwxr-xr-x 6 amyroberts staff 192B 10 Aug 13:13 .\r\ndrwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 ..\r\ndrwxr-xr-x 2 amyroberts staff 64B 10 Aug 13:13 assets\r\n-rw-r--r-- 1 amyroberts staff 510K 10 Aug 13:13 keras_metadata.pb\r\n-rw-r--r-- 1 amyroberts staff 12M 10 Aug 13:13 saved_model.pb\r\ndrwxr-xr-x 4 amyroberts staff 128B 10 Aug 13:13 variables\r\n\r\ntf_hf_classifier_resnet_50:\r\ntotal 12048\r\ndrwxr-xr-x 6 amyroberts staff 192B 10 Aug 12:51 .\r\ndrwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 ..\r\ndrwxr-xr-x 2 amyroberts staff 64B 10 Aug 12:51 assets\r\n-rw-r--r-- 1 amyroberts staff 488K 10 Aug 12:51 keras_metadata.pb\r\n-rw-r--r-- 1 amyroberts staff 5.4M 10 Aug 12:51 saved_model.pb\r\ndrwxr-xr-x 4 amyroberts staff 128B 10 Aug 12:51 variables\r\n\r\ntf_hf_classifier_swin_base_patch4_window7_224_in22k:\r\ntotal 179216\r\ndrwxr-xr-x 6 amyroberts staff 192B 10 Aug 13:00 .\r\ndrwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 ..\r\ndrwxr-xr-x 2 amyroberts staff 64B 10 Aug 12:59 assets\r\n-rw-r--r-- 1 amyroberts staff 7.4M 10 Aug 13:00 keras_metadata.pb\r\n-rw-r--r-- 1 amyroberts staff 80M 10 Aug 13:00 saved_model.pb\r\ndrwxr-xr-x 4 amyroberts staff 128B 10 Aug 12:59 variables\r\n\r\ntf_hf_classifier_swin_tiny_patch4_window7_224:\r\ntotal 
83944\r\ndrwxr-xr-x 6 amyroberts staff 192B 10 Aug 13:09 .\r\ndrwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 ..\r\ndrwxr-xr-x 2 amyroberts staff 64B 10 Aug 13:09 assets\r\n-rw-r--r-- 1 amyroberts staff 474K 10 Aug 13:09 keras_metadata.pb\r\n-rw-r--r-- 1 amyroberts staff 41M 10 Aug 13:09 saved_model.pb\r\ndrwxr-xr-x 4 amyroberts staff 128B 10 Aug 13:09 variables\r\n\r\ntf_hf_classifier_vit_base_patch16_224_in21k:\r\ntotal 21328\r\ndrwxr-xr-x 6 amyroberts staff 192B 10 Aug 12:53 .\r\ndrwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 ..\r\ndrwxr-xr-x 2 amyroberts staff 64B 10 Aug 12:53 assets\r\n-rw-r--r-- 1 amyroberts staff 162K 10 Aug 12:53 keras_metadata.pb\r\n-rw-r--r-- 1 amyroberts staff 10M 10 Aug 12:53 saved_model.pb\r\ndrwxr-xr-x 4 amyroberts staff 128B 10 Aug 12:53 variables\r\n```\r\n\r\nI haven't dug much into why the model is so much larger. A cursory glance at the model graphs didn't reveal anything particularly surprising. ",
"Randomly jumping in this thread :-)\r\n\r\n- Are you able to reproduce this issue in a machine with similar spec as Kaggle machines?\r\n- One way to narrow down to the root cause is to gradually remove some parts of code \r\n- From the provided notebook, we can't have any conclusion on memory leak. Memory leak refers to the memory usage increase during a repetition of the same call to a particular code block.\r\n- Suggestion: try to see if this issue occurs during model saving, or the memory usage increases during inference time.",
"@amyeroberts \r\nThanks for checking. I'll quickly check the size of these models in torch version. \r\n\r\n@kfrancischen\r\nYour feedback is really much appreciate here. ([more info](https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/336534#1859160))",
"I would suggest debug this in a VM outside Kaggle though. I remembered there is limited GPU/TPU hours per week on Kaggle. Don't waste your quota :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,659
| 1,662
| 1,662
|
NONE
| null |
### System Info
Info:
```
Framework: TensorFlow 2 (Keras)
Version: 2.6
OS: Kaggle
```
### Who can help?
[Swin Model Card](https://huggingface.co/microsoft/swin-small-patch4-window7-224) @amyeroberts
TensorFlow: @Rocketknight1
Vision: @NielsRogge, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
A recent [kaggle competition](https://www.kaggle.com/competitions/google-universal-image-embedding) (hosted by Google), I tried to use pretrained `tf` swin transformer model from hugging face but even with the base model, I consistently received out of memory error. Below is the submission status with a `base_tf_swin` model.

Some note:
- Other framework like pytorch works fine here.
- Other than this model, much larger model like `tf_convnext_xlarge` is able to run without OOM.
So, I'm assuming there might be some potential memory leakage in `tf_swin` implementation. Below is the code I use to build the complete model.
```python
id = "microsoft/swin-base-patch4-window7-224-in22k"
from transformers import AutoFeatureExtractor, TFSwinModel
feature_extractor = AutoFeatureExtractor.from_pretrained(id)
```
```python
inputs = keras.Input(shape=(None, None, 3), dtype='uint8')
mode_inputs = tf.cast(inputs, tf.float32)
mode_inputs = keras.layers.Resizing(*INPUT_SHAPE)(mode_inputs)
mode_inputs = keras.layers.Rescaling(scale=1.0 / 255)(mode_inputs)
mode_inputs = keras.layers.Normalization(
mean=feature_extractor.image_mean,
variance=[x ** 2 for x in feature_extractor.image_std ],
axis=3
)(mode_inputs)
mode_inputs = keras.layers.Permute(dims=(3, 1, 2))(mode_inputs)
tf_huggingface_module = TFSwinModel.from_pretrained(id)
tf_huggingface_module.trainable = False
```
```python
logits = tf_huggingface_module(mode_inputs)
adv_logits = keras.Dense(64)(logits.pooler_output)
outputs = keras.layers.Lambda(
lambda x: tf.math.l2_normalize(x, axis=-1), name='embedding_norm'
)(adv_logits)
tf_huggingface_classifier = keras.Model(inputs, outputs)
```
### Expected behavior
It should work like other model. To reproduce the issue exactly, (in the worst case), you may need to run it on kaggle platform. Kaggle submission status (as shown in the above diagram) is not very descriptive other than just showing submission status :(. Mainly, I like to know what could be the cause of it and any possible solution.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18376/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18375
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18375/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18375/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18375/events
|
https://github.com/huggingface/transformers/pull/18375
| 1,323,134,256
|
PR_kwDOCUB6oc48XWRt
| 18,375
|
Correct the spelling of bleu metric
|
{
"login": "ToluClassics",
"id": 38908008,
"node_id": "MDQ6VXNlcjM4OTA4MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/38908008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToluClassics",
"html_url": "https://github.com/ToluClassics",
"followers_url": "https://api.github.com/users/ToluClassics/followers",
"following_url": "https://api.github.com/users/ToluClassics/following{/other_user}",
"gists_url": "https://api.github.com/users/ToluClassics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ToluClassics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ToluClassics/subscriptions",
"organizations_url": "https://api.github.com/users/ToluClassics/orgs",
"repos_url": "https://api.github.com/users/ToluClassics/repos",
"events_url": "https://api.github.com/users/ToluClassics/events{/privacy}",
"received_events_url": "https://api.github.com/users/ToluClassics/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR corrects a simple spelling error. From `blue` to `bleu`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18375/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18375",
"html_url": "https://github.com/huggingface/transformers/pull/18375",
"diff_url": "https://github.com/huggingface/transformers/pull/18375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18375.patch",
"merged_at": 1659354688000
}
|