Column schema:
- url: string (62-66 chars)
- repository_url: string (1 class)
- labels_url: string (76-80 chars)
- comments_url: string (71-75 chars)
- events_url: string (69-73 chars)
- html_url: string (50-56 chars)
- id: int64 (377M-2.15B)
- node_id: string (18-32 chars)
- number: int64 (1-29.2k)
- title: string (1-487 chars)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (2 classes)
- assignee: dict
- assignees: list
- comments: list
- created_at: int64 (1.54k-1.71k)
- updated_at: int64 (1.54k-1.71k)
- closed_at: int64 (1.54k-1.71k, nullable)
- author_association: string (4 classes)
- active_lock_reason: string (2 classes)
- body: string (0-234k chars, nullable)
- reactions: dict
- timeline_url: string (71-75 chars)
- state_reason: string (3 classes)
- draft: bool (2 classes)
- pull_request: dict

| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/16769
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16769/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16769/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16769/events
|
https://github.com/huggingface/transformers/issues/16769
| 1,203,793,444
|
I_kwDOCUB6oc5HwG4k
| 16,769
|
Invalid CLS masking in question answer pipelines top K calculation
|
{
"login": "antonyscerri",
"id": 881190,
"node_id": "MDQ6VXNlcjg4MTE5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/881190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonyscerri",
"html_url": "https://github.com/antonyscerri",
"followers_url": "https://api.github.com/users/antonyscerri/followers",
"following_url": "https://api.github.com/users/antonyscerri/following{/other_user}",
"gists_url": "https://api.github.com/users/antonyscerri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antonyscerri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antonyscerri/subscriptions",
"organizations_url": "https://api.github.com/users/antonyscerri/orgs",
"repos_url": "https://api.github.com/users/antonyscerri/repos",
"events_url": "https://api.github.com/users/antonyscerri/events{/privacy}",
"received_events_url": "https://api.github.com/users/antonyscerri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @antonyscerri ,\r\n\r\nThank you very much for the report ! \r\n\r\nThis definitely seems like a legitimate issue.\r\n\r\nDo you have an example where this triggers an error really in the pipeline ? (This is to craft a test to make sure this is tested against.)\r\nI will try to find one on a dummy example, but it's always better if we can have a real world example.\r\n",
"Unfortunately I cannot share the data I observed it with. However any text block with a question which produces an answer should show a change in its score between it being fixed or not, assuming an appropriate model that uses CLS token is used. If you have an example where the NIL answer (based on CLS token) is the \"best\" answer you may see the top answer change from some other span to the NIL answer. See below for a quick example i just put together, which i tested it using \"deepset/roberta-base-squad2\" model.\r\n\r\nAnd sorry but for completeness i also realised my quick fix involved changing the construction of p_mask to the following (moving the asarray inside the outer array:\r\n\r\n```\r\n p_mask = [\r\n np.asarray([tok != 1 if question_first else 0 for tok in encoded_inputs.sequence_ids(span_id)])\r\n for span_id in range(num_spans)\r\n ]\r\n```\r\n\r\nRunning the following example with the original code yields the top answer of:\r\n\r\n\tAnswer: orth individuals in >Europe< after Paris and the\r\n\tScore: 0.1588\r\n\r\nWith my quick fix i get:\r\n\r\n\tAnswer: ><London is the capita\r\n\tScore: 0.9948\r\n\r\nThe other answer is 2nd place with a score of 0.000.\r\n\r\nThe data used was a passage taken from a wikipedia page on London and was run with the handle_impossible_answer set to True and max_seq_length=512:\r\n\r\n```\r\n{\"context\": \"London is the capital and largest city of England and the United Kingdom. It stands on the River Thames in south-east England at the head of a 50-mile (80 km) estuary down to the North Sea, and has been a major settlement for two millennia. The City of London, its ancient core and financial centre, was founded by the Romans as Londinium and retains boundaries close to its medieval ones. Since the 19th century, \\\"London\\\" has also referred to the metropolis around this core, historically split between the counties of Middlesex, Essex, Surrey, Kent, and Hertfordshire, which largely comprises Greater London, governed by the Greater London Authority. The City of Westminster, to the west of the City of London, has for centuries held the national government and parliament. As one of the world's global cities, London exerts strong influence on its arts, commerce, education, entertainment, fashion, finance, health care, media, tourism, and communications, and has sometimes been called the capital of the world. Its GDP (€801.66 billion in 2017) makes it the biggest urban economy in Europe, and it is one of the major financial centres in the world. In 2019 it had the second-highest number of ultra high-net-worth individuals in Europe after Paris and the second-highest number of billionaires in Europe after Moscow. As of 2021, London has the most millionaires of any city. With Europe's largest concentration of higher education institutions, it includes Imperial College London in natural and applied sciences, the London School of Economics in social sciences, and the comprehensive University College London. The city is home to the most 5-star hotels of any city in the world. In 2012, London became the first city to host three Summer Olympic Games. London is the capital and largest city of England and the United Kingdom. It stands on the River Thames in south-east England at the head of a 50-mile (80 km) estuary down to the North Sea, and has been a major settlement for two millennia. The City of London, its ancient core and financial centre, was founded by the Romans as Londinium and retains boundaries close to its medieval ones. 
Since the 19th century, \\\"London\\\" has also referred to the metropolis around this core, historically split between the counties of Middlesex, Essex, Surrey, Kent, and Hertfordshire, which largely comprises Greater London, governed by the Greater London Authority. The City of Westminster, to the west of the City of London, has for centuries held the national government and parliament. As one of the world's global cities, London exerts strong influence on its arts, commerce, education, entertainment, fashion, finance, health care, media, tourism, and communications, and has sometimes been called the capital of the world. Its GDP (€801.66 billion in 2017) makes it the biggest urban economy in Europe, and it is one of the major financial centres in the world. In 2019 it had the second-highest number of ultra high-net-worth individuals in Europe after Paris and the second-highest number of billionaires in Europe after Moscow. As of 2021, London has the most millionaires of any city. With Europe's largest concentration of higher education institutions, it includes Imperial College London in natural and applied sciences, the London School of Economics in social sciences, and the comprehensive University College London. The city is home to the most 5-star hotels of any city in the world. In 2012, London became the first city to host three Summer Olympic Games.\", \"question\": \"What country is Paris the capital of?\"}\r\n```",
"Thank you so much for this test, definitely helps a lot ! \r\n\r\nCan't replicate with small random models (for obvious reasons) but at least we now have a slow tests covering this."
] | 1,649
| 1,650
| 1,650
|
NONE
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.10.102-99.473.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@Narsil
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use any question answering model that leverages the CLS token and run it with pipelines.
2. Supply an input record whose (con)text is longer than the configured max_seq_length, so that the input record gets chunked.
3. Inspect the results of p_mask after line https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/pipelines/question_answering.py#L307
4. You will see that the token position used by CLS in the model (typically the first) has not been set to False and remains True, meaning it will be masked out as an undesired token later on.
This works correctly if the input record is not chunked. The reason is that the code constructing p_mask beforehand doesn't work as expected when the chunks are different lengths (when calling np.asarray), so the lines pointed to above then silently fail to clear the mask at the CLS token position.
This then causes differences in the calculated answer spans and their associated probabilities.
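For illustration, here is a minimal standalone NumPy sketch of the ragged-array pitfall (not the pipeline code itself; recent NumPy requires an explicit `dtype=object` where versions current at the time only emitted a warning):
```
import numpy as np

cls_token_id = 0

# Equal-length spans: == compares per token and nonzero finds every CLS.
ids = np.asarray([[0, 5, 6], [0, 7, 8]])
print(np.nonzero(ids == cls_token_id))  # (array([0, 1]), array([0, 0]))

# Ragged spans (a chunked record): asarray yields a 1-D object array of
# Python lists, the comparison is no longer element-wise, and nonzero
# finds nothing, so the CLS positions are never unmasked.
ragged = np.asarray([[0, 5, 6], [0, 7, 8, 9]], dtype=object)
print(np.nonzero(ragged == cls_token_id))  # (array([], dtype=int64),)
```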
## Expected behavior
The CLS token position should be correctly set to False so that it is treated as a valid token in the answer calculations, regardless of whether the input record was chunked.
As a simple, though possibly inefficient, fix I replaced the two lines in the `if` block pointed to above with the following:
```
for span_id in range(num_spans):
    # Locate the CLS token in each span and clear its mask (0 = usable token)
    cls_index = np.nonzero(np.array(encoded_inputs["input_ids"][span_id]) == self.tokenizer.cls_token_id)
    p_mask[span_id][cls_index] = 0
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16769/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16768
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16768/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16768/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16768/events
|
https://github.com/huggingface/transformers/pull/16768
| 1,203,790,560
|
PR_kwDOCUB6oc42M7ey
| 16,768
|
Missing commas causing concatenation fix
|
{
"login": "code-review-doctor",
"id": 72647856,
"node_id": "MDQ6VXNlcjcyNjQ3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/72647856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/code-review-doctor",
"html_url": "https://github.com/code-review-doctor",
"followers_url": "https://api.github.com/users/code-review-doctor/followers",
"following_url": "https://api.github.com/users/code-review-doctor/following{/other_user}",
"gists_url": "https://api.github.com/users/code-review-doctor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/code-review-doctor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/code-review-doctor/subscriptions",
"organizations_url": "https://api.github.com/users/code-review-doctor/orgs",
"repos_url": "https://api.github.com/users/code-review-doctor/repos",
"events_url": "https://api.github.com/users/code-review-doctor/events{/privacy}",
"received_events_url": "https://api.github.com/users/code-review-doctor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
Fixes #16767
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16768/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16768",
"html_url": "https://github.com/huggingface/transformers/pull/16768",
"diff_url": "https://github.com/huggingface/transformers/pull/16768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16768.patch",
"merged_at": 1649968948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16767
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16767/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16767/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16767/events
|
https://github.com/huggingface/transformers/issues/16767
| 1,203,789,893
|
I_kwDOCUB6oc5HwGBF
| 16,767
|
Missing commas causing concatenation
|
{
"login": "code-review-doctor",
"id": 72647856,
"node_id": "MDQ6VXNlcjcyNjQ3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/72647856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/code-review-doctor",
"html_url": "https://github.com/code-review-doctor",
"followers_url": "https://api.github.com/users/code-review-doctor/followers",
"following_url": "https://api.github.com/users/code-review-doctor/following{/other_user}",
"gists_url": "https://api.github.com/users/code-review-doctor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/code-review-doctor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/code-review-doctor/subscriptions",
"organizations_url": "https://api.github.com/users/code-review-doctor/orgs",
"repos_url": "https://api.github.com/users/code-review-doctor/repos",
"events_url": "https://api.github.com/users/code-review-doctor/events{/privacy}",
"received_events_url": "https://api.github.com/users/code-review-doctor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
A missing comma results in strings being implicitly concatenated together. Probably not what was intended.
These are the affected lines:
https://github.com/huggingface/transformers/blob/main/tests/bert_japanese/test_tokenization_bert_japanese.py#L176
https://github.com/huggingface/transformers/blob/main/tests/bert_japanese/test_tokenization_bert_japanese.py#L249
I found this issue automatically, see other issues [here](https://codereview.doctor/huggingface/transformers)
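For illustration, a minimal sketch of the pitfall (hypothetical strings, not the actual test data):
```
# Python implicitly concatenates adjacent string literals.
tokens = [
    "hello"
    "world",  # missing comma above -> a single element "helloworld"
    "!",
]
assert tokens == ["helloworld", "!"]
```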
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16767/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16766
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16766/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16766/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16766/events
|
https://github.com/huggingface/transformers/pull/16766
| 1,203,683,762
|
PR_kwDOCUB6oc42Mkjg
| 16,766
|
Fix PT TF ViTMAE
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Let's merge since CI fails quite a lot on this one",
"> Let's merge since CI fails quite a lot on this one\r\n\r\nOk! "
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
Fix PT TF ViTMAE: use the same settings in both PT and TF (instead of in only one model). Otherwise, the PT/TF equivalence tests won't use something like `std = 0.02`, and the larger (init) weights lead to a larger diff in outputs.
Also, **the `eps` for `layer norm` layers should be the same in PT/TF**.
(Not a big deal in practice, since here it is `1e-5` vs. `1e-12`, but it does affect the tests.)
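A minimal PyTorch-only sketch (hypothetical values) showing that the eps mismatch alone produces a measurable output difference:
```
import torch

torch.manual_seed(0)
x = torch.randn(4, 8) * 1e-3  # small activations make eps matter more

ln_a = torch.nn.LayerNorm(8, eps=1e-5)
ln_b = torch.nn.LayerNorm(8, eps=1e-12)
ln_b.load_state_dict(ln_a.state_dict())  # identical weights, different eps

# The gap comes from eps alone; it can push equivalence tests past tolerance.
print((ln_a(x) - ln_b(x)).abs().max())
```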
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16766/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16766/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16766",
"html_url": "https://github.com/huggingface/transformers/pull/16766",
"diff_url": "https://github.com/huggingface/transformers/pull/16766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16766.patch",
"merged_at": 1649997430000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16765
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16765/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16765/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16765/events
|
https://github.com/huggingface/transformers/pull/16765
| 1,203,646,211
|
PR_kwDOCUB6oc42McjN
| 16,765
|
Fixup no_trainer examples scripts and add more tests
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# Fixup `no_trainer` Examples and Bolster their tests
## What does this add?
This changes the logging behavior inside the `no_trainer` scripts, slightly changes how the initial configuration is stored, and adds tests for the tracking API.
## Who is it for?
Users of `transformers` who want to try out `Accelerate` quickly
## Why is this needed?
I was made aware that logging was laggy when sending logs to Weights & Biases from the `no_trainer` scripts; this was due to the step being passed in as a parameter, causing a lag in when it gets uploaded.
To follow the original Accelerate scripts, the step is now passed as a `"step"` key in the overall dictionary logged via `accelerator.log()`.
`TensorBoard` also does not like it when `Enum`s are logged, so there is a manual adjustment right before saving the hyperparameters to get the enum value from the LR scheduler type.
Finally, as `TensorBoard` is a test requirement, I added tests for tracking inside the `no_trainer` tests; `TensorBoard` is also how we test that behavior in the CI in Accelerate proper.
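A minimal sketch of the logging pattern (tracker and project names are placeholders, and exact `Accelerator` kwargs vary across Accelerate versions):
```
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="logs")
accelerator.init_trackers("no_trainer_example")

for step in range(3):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    # Pass the step inside the logged dict (a "step" key) rather than as a
    # separate parameter, which avoids the upload lag described above.
    accelerator.log({"train_loss": loss, "step": step})

accelerator.end_training()
```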
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16765/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16765",
"html_url": "https://github.com/huggingface/transformers/pull/16765",
"diff_url": "https://github.com/huggingface/transformers/pull/16765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16765.patch",
"merged_at": 1649875248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16764
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16764/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16764/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16764/events
|
https://github.com/huggingface/transformers/issues/16764
| 1,203,619,437
|
I_kwDOCUB6oc5HvcZt
| 16,764
|
NER training crash
|
{
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Was that issue resolved? I am facing a similar problem with the HF implementation of LUKE: https://github.com/huggingface/transformers/tree/main/examples/research_projects/luke"
] | 1,649
| 1,658
| 1,653
|
NONE
| null |
transformers version: 4.17.0
Hi, I run the script for the NER task on the [few-nerd](https://huggingface.co/datasets/dfki-nlp/few-nerd) dataset:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py
```
CUDA_VISIBLE_DEVICES=3 python -u run_ner_no_trainer.py \
--model_name_or_path roberta-large \
--dataset_name dfki-nlp/few-nerd \
--dataset_config_name "supervised" \
--output_dir /scratch/w/wluyliu/yananc/finetunes/roberta_nerd_fine \
--text_column_name "tokens" \
--label_column_name "fine_ner_tags" \
--num_train_epochs 7 --local_files_only --debug --debug_cnt 50000
```
When I use a small number of samples, for example below 30000, things go smoothly and the precision, recall and F1 are in good alignment with the original paper.
However, when I increase the samples used for training, for example to 50000 or the full set, the metrics become zero and the predictions from the model are all "O". It is quite weird.
I also tried the conll2003 dataset; it is the same.
Am I missing something?
Thanks.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16764/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16763
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16763/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16763/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16763/events
|
https://github.com/huggingface/transformers/pull/16763
| 1,203,576,111
|
PR_kwDOCUB6oc42MNwY
| 16,763
|
Fix batch size in evaluation loop
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
The batch size used in the evaluation loop is wrong: it uses the per-device batch size, which differs from the actual batch size when using DataParallel with more than one GPU. As a result, the `test_evaluate` test is failing for 2 GPUs (see #16716).
This PR fixes that.
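A minimal sketch of the mismatch (variable names are hypothetical, not the Trainer's internals):
```
# Under DataParallel the loop consumes per_device * n_gpu samples per step,
# so sizes derived from the per-device value are wrong on multi-GPU runs.
per_device_eval_batch_size = 8
n_gpu = 2
actual_batch_size = per_device_eval_batch_size * max(1, n_gpu)
assert actual_batch_size == 16  # not 8
```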
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16763/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16763",
"html_url": "https://github.com/huggingface/transformers/pull/16763",
"diff_url": "https://github.com/huggingface/transformers/pull/16763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16763.patch",
"merged_at": 1649942574000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16762
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16762/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16762/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16762/events
|
https://github.com/huggingface/transformers/pull/16762
| 1,203,564,326
|
PR_kwDOCUB6oc42MLRp
| 16,762
|
[Flax `.from_pretrained`] Raise a warning if model weights are not in float32
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As an example, loading a set of PyTorch float16 Bart model weights into a FlaxBartForCausalLM model produces the following warning:\r\n```python\r\nfrom transformers import FlaxBartForCausalLM\r\nmodel = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart-fp16', from_pt=True)\r\n```\r\n```\r\nSome of the weights of FlaxBartForCausalLM were initialized in float16 precision from the model checkpoint at sanchit-gandhi/tiny-random-bart-fp16:\r\n[('model', 'decoder', 'embed_positions', 'embedding'), ('model', 'decoder', 'embed_tokens', 'embedding'), ('model', 'decoder', 'layernorm_embedding', 'bias'), ('model', 'decoder', 'layernorm_embedding', 'scale'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'encoder_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '0', 'encoder_attn_layer_norm', 'scale'), ('model', 'decoder', 'layers', '0', 'fc1', 'bias'), ('model', 'decoder', 'layers', '0', 'fc1', 'kernel'), ('model', 'decoder', 'layers', '0', 'fc2', 'bias'), ('model', 'decoder', 'layers', '0', 'fc2', 'kernel'), ('model', 'decoder', 'layers', '0', 'final_layer_norm', 'bias'), ('model', 'decoder', 'layers', '0', 'final_layer_norm', 'scale'), ('model', 'decoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '0', 'self_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '0', 'self_attn_layer_norm', 'scale'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'encoder_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '1', 'encoder_attn_layer_norm', 'scale'), ('model', 'decoder', 'layers', '1', 'fc1', 'bias'), ('model', 'decoder', 'layers', '1', 'fc1', 'kernel'), ('model', 'decoder', 'layers', '1', 'fc2', 'bias'), ('model', 'decoder', 'layers', '1', 'fc2', 'kernel'), ('model', 'decoder', 'layers', '1', 'final_layer_norm', 'bias'), ('model', 'decoder', 'layers', '1', 'final_layer_norm', 'scale'), ('model', 'decoder', 'layers', '1', 
'self_attn', 'k_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', '1', 'self_attn_layer_norm', 'bias'), ('model', 'decoder', 'layers', '1', 'self_attn_layer_norm', 'scale')]\r\nYou should probably UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`] for further information on how to do this.\r\n```",
"Sorry this a super nitty question, but I just wanted to ask to make sure we're all on the same page for best practice! Should one not ideally merge their own PR's rather than the reviewer?",
"Aah, yes! One should merge their own PRs, I rushed a bit this one."
] | 1,649
| 1,687
| 1,649
|
CONTRIBUTOR
| null |
The Flax `.from_pretrained` method respects the dtype of the model weights from which it is loaded. For model weights stored in bfloat16/float16, Flax models are instantiated with parameter weights in bfloat16/float16 respectively (see #16736). The general assumption is that all Flax model weights are in float32; loading and storing model weights in a lower precision (bfloat16/float16) is likely to lead to undesirable behaviour and model instabilities. This PR adds a warning to the `.from_pretrained` method should any of the model weights not be in float32, and advises the user to upcast the weights to float32 prior to use.
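A minimal sketch of the recommended upcast (checkpoint name taken from the example in the comments; assumes Flax is installed):
```
from transformers import FlaxBartForCausalLM

# Loading keeps the checkpoint's float16 dtype and triggers the new warning.
model = FlaxBartForCausalLM.from_pretrained(
    "sanchit-gandhi/tiny-random-bart-fp16", from_pt=True
)

# Upcast the parameters to float32 before use, as the warning advises.
model.params = model.to_fp32(model.params)
```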
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16762/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16762",
"html_url": "https://github.com/huggingface/transformers/pull/16762",
"diff_url": "https://github.com/huggingface/transformers/pull/16762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16762.patch",
"merged_at": 1649929935000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16761
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16761/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16761/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16761/events
|
https://github.com/huggingface/transformers/pull/16761
| 1,203,553,466
|
PR_kwDOCUB6oc42MI_z
| 16,761
|
CI: pip install now updates
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This PR actually needs further changes, the adding `-U` doesn't solve it (possibly because of other version requirements in the packages installed in the image). \r\n\r\nShould I update the dependencies of click (to `click>=8.0`, required by black) and protobuf (to `protobuf>=3.8.0`, required by tensorflow)? cc @LysandreJik @sgugger ",
"_The documentation is not available anymore as the PR was closed or merged._",
"It would be nice to find a general solution that won't have us needing to add a new dependency update in three weeks.",
"(closing the PR after some offline discussion -- going to attempt to change to cache fresh venvs instead)"
] | 1,649
| 1,650
| 1,649
|
MEMBER
| null |
# What does this PR do?
Follow-up from https://github.com/huggingface/transformers/pull/16751: `pip install` in non-remote GHA workflows now updates packages (`-U`), allowing us to override pre-installed versions.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16761/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16761",
"html_url": "https://github.com/huggingface/transformers/pull/16761",
"diff_url": "https://github.com/huggingface/transformers/pull/16761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16761.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16760
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16760/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16760/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16760/events
|
https://github.com/huggingface/transformers/pull/16760
| 1,203,549,280
|
PR_kwDOCUB6oc42MIIb
| 16,760
|
[Data2Vec] Add data2vec vision
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Uploaded checkpoints are here: https://huggingface.co/models?other=data2vec-vision . Will add a README to those ones and all other data2vec ones after this PR is merged"
] | 1,649
| 1,650
| 1,650
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Finishes the Data2Vec integration by adding https://huggingface.co/models?other=data2vec-vision from https://github.com/facebookresearch/data2vec_vision/tree/main/beit
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16760/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16760",
"html_url": "https://github.com/huggingface/transformers/pull/16760",
"diff_url": "https://github.com/huggingface/transformers/pull/16760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16760.patch",
"merged_at": 1650297133000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16759
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16759/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16759/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16759/events
|
https://github.com/huggingface/transformers/issues/16759
| 1,203,545,122
|
I_kwDOCUB6oc5HvKQi
| 16,759
|
How to use Wav2Vec2ProcessorWithLM in pipeline?
|
{
"login": "gxbag",
"id": 10001642,
"node_id": "MDQ6VXNlcjEwMDAxNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10001642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gxbag",
"html_url": "https://github.com/gxbag",
"followers_url": "https://api.github.com/users/gxbag/followers",
"following_url": "https://api.github.com/users/gxbag/following{/other_user}",
"gists_url": "https://api.github.com/users/gxbag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gxbag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gxbag/subscriptions",
"organizations_url": "https://api.github.com/users/gxbag/orgs",
"repos_url": "https://api.github.com/users/gxbag/repos",
"events_url": "https://api.github.com/users/gxbag/events{/privacy}",
"received_events_url": "https://api.github.com/users/gxbag/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patrickvonplaten ",
"Hey @gxbag,\r\n\r\nPlease make sure to provide a reproducible code snippet. I cannot run the above snippet because I don't have access to `\"language_model/vocabulary.txt\"`.\r\n\r\nRegarding the issue, you should not pass a processor object as the model object. The model object should only be used for models of type `PreTrainedModel`. To pass the model with the processor you could do the following:\r\n\r\n```py\r\nfrom transformers import AutoProcessor\r\nprocessor = AutoProcessor.from_pretrained(\"facebook/wav2vec2-large-960h-lv60-self\")\r\nvocab_dict = processor.tokenizer.get_vocab()\r\n\r\nfrom pyctcdecode import build_ctcdecoder\r\nunigrams_file = open(\"language_model/vocabulary.txt\", \"r\")\r\nunigrams_list = unigrams_file.readlines()\r\ndecoder = build_ctcdecoder(\r\n labels=list(vocab_dict.keys()),\r\n kenlm_model_path=\"language_model/5gram.bin\",\r\n unigrams=unigrams_list\r\n)\r\n\r\nfrom transformers import Wav2Vec2ProcessorWithLM\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM(\r\n feature_extractor=processor.feature_extractor,\r\n tokenizer=processor.tokenizer,\r\n decoder=decoder\r\n)\r\n\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"facebook/wav2vec2-large-960h-lv60-self\", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0)\r\n```\r\n\r\nThis should correctly initialize the pipeline.",
"Hi @patrickvonplaten, thank you very much for answering. Your above provided code does not seem to use the language model. Here is the minimal working example to reproduce the error:\r\n```python\r\nfrom transformers import Wav2Vec2ProcessorWithLM\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained(\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\n\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder)\r\n```\r\n\r\nI believe I should be able to just\r\n```python\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\n```\r\nif I had set it up correctly, which I unfortunately have not.\r\n\r\nI manually copied config.json from https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/blob/main/config.json and added it to my repository as I believed this missing file could be the cause of the problem but I think there are possibly more problems. Could you help me out?",
"Hey @gxbag,\r\n\r\nYeah, your model here: https://huggingface.co/gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm/tree/main should indeed work out of the box.\r\n\r\nWhy doesn't it work? Can you show a codesnippet that shows how it doesn't work?",
"Hey @patrickvonplaten,\r\n\r\nWhen I run this snippet:\r\n```python\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\n```\r\n\r\nThe error output is exactly the following:\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n/media/home/run.ipynb Cell [1](vscode-notebook-cell://ssh-remote%2Bstudent2/media/home/run.ipynb#ch0000000vscode-remote?line=0)' in <cell line: 2>()\r\n 1[ from transformers import pipeline\r\n----> ]()[2](vscode-notebook-cell://ssh-remote%2Bstudent2/media/home/run.ipynb#ch0000000vscode-remote?line=1)[ pipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\n\r\nFile ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py:549, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, pipeline_class, **kwargs)\r\n ]()[545](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=544)[ # Infer the framework from the model\r\n ]()[546](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=545)[ # Forced if framework already defined, inferred if it's None\r\n ]()[547](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=546)[ # Will load the correct model if possible\r\n ]()[548](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=547)[ model_classes = {\"tf\": targeted_task[\"tf\"], \"pt\": targeted_task[\"pt\"]}\r\n--> ]()[549](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=548)[ framework, model = infer_framework_load_model(\r\n ]()[550](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=549)[ model,\r\n ]()[551](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=550)[ model_classes=model_classes,\r\n ]()[552](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=551)[ config=config,\r\n ]()[553](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=552)[ framework=framework,\r\n ]()[554](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=553)[ revision=revision,\r\n ]()[555](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=554)[ task=task,\r\n ]()[556](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=555)[ **model_kwargs,\r\n ]()[557](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=556)[ )\r\n ]()[559](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=558)[ model_config = model.config\r\n ]()[561](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py?line=560)[ load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None\r\n\r\nFile ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py:255, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)\r\n 
]()[252](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py?line=251)[ continue\r\n ]()[254](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py?line=253)[ if isinstance(model, str):\r\n--> ]()[255](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py?line=254)[ raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\n ]()[257](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py?line=256)[ framework = \"tf\" if model.__class__.__name__.startswith(\"TF\") else \"pt\"\r\n ]()[258](file:///home/ubuntu/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py?line=257)[ return framework, model\r\n\r\nValueError: Could not load model gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCTC'>, <class 'transformers.models.auto.modeling_auto.AutoModelForSpeechSeq2Seq'>, <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'>).]()\r\n```\r\n\r\nWhen I run the longer snippet:\r\n```python\r\nfrom transformers import Wav2Vec2ProcessorWithLM\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained(\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\n\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder)\r\n```\r\nThe output is exactly the same:\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n/media/home/run.ipynb Cell 2' in <cell line: 5>()\r\n 2 processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained(\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\n 4 from transformers import pipeline\r\n----> 5 pipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder)\r\n\r\nFile ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/__init__.py:549, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, pipeline_class, **kwargs)\r\n 545 # Infer the framework from the model\r\n 546 # Forced if framework already defined, inferred if it's None\r\n 547 # Will load the correct model if possible\r\n 548 model_classes = {\"tf\": targeted_task[\"tf\"], \"pt\": targeted_task[\"pt\"]}\r\n--> 549 framework, model = infer_framework_load_model(\r\n 550 model,\r\n 551 model_classes=model_classes,\r\n 552 config=config,\r\n 553 framework=framework,\r\n 554 revision=revision,\r\n 555 task=task,\r\n 556 **model_kwargs,\r\n 557 )\r\n 559 model_config = model.config\r\n 561 load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None\r\n\r\nFile ~/mambaforge/lib/python3.9/site-packages/transformers/pipelines/base.py:255, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)\r\n 252 continue\r\n 254 if isinstance(model, str):\r\n--> 255 raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\n 257 framework = 
\"tf\" if model.__class__.__name__.startswith(\"TF\") else \"pt\"\r\n 258 return framework, model\r\n\r\nValueError: Could not load model gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCTC'>, <class 'transformers.models.auto.modeling_auto.AutoModelForSpeechSeq2Seq'>, <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'>).\r\n```",
"Hey @gxbag,\r\n\r\nYour repo does not have a model PyTorch file. Could you add the correct `pytorch_model.bin` to your folder here: https://huggingface.co/gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm/tree/main ? ",
"Hey @patrickvonplaten,\r\n\r\nThank you so much for the hint! With this almost everything is solved as the model with the above snippet can now produce a result and it correctly uses the language model.\r\n\r\nSomething still seems off: When I use a longer audio file and use the striding method (as per this blog post: https://huggingface.co/blog/asr-chunking) of the pipeline to process longer audio, the last bit of text output is cut off.\r\n\r\nTo reproduce:\r\n```python\r\nfrom transformers import pipeline\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm\")\r\noutput = pipe(\"/any/long/audio/file.wav\", chunk_length_s=30, stride_length_s=(6, 3))\r\noutput\r\n```\r\n\r\nThis will cut off the last 3 seconds (as specified in stride_length_s) of the audio file from generating output. I didn't see this behavior when I used regular models without an added language model.\r\n\r\nWhat could be the cause here?",
"Ah yeah this was a bug in Transformes that we recently fixed I think :-) See https://github.com/huggingface/transformers/pull/16730 . Could you check whether everything works correctly on master? :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@patrickvonplaten Is it possible to use Wav2Vec2ProcessorWithLM during _training_ to use a LM at train time? Or, is there another way to do this with some other HF tool?",
"Sure this is possibleb @kaleko - you just need to adapt the training script a bit, but it should be pretty trivial :-) ",
"> Hey @gxbag,\r\n> \r\n> Please make sure to provide a reproducible code snippet. I cannot run the above snippet because I don't have access to `\"language_model/vocabulary.txt\"`.\r\n> \r\n> Regarding the issue, you should not pass a processor object as the model object. The model object should only be used for models of type `PreTrainedModel`. To pass the model with the processor you could do the following:\r\n> \r\n> ```python\r\n> from transformers import AutoProcessor\r\n> processor = AutoProcessor.from_pretrained(\"facebook/wav2vec2-large-960h-lv60-self\")\r\n> vocab_dict = processor.tokenizer.get_vocab()\r\n> \r\n> from pyctcdecode import build_ctcdecoder\r\n> unigrams_file = open(\"language_model/vocabulary.txt\", \"r\")\r\n> unigrams_list = unigrams_file.readlines()\r\n> decoder = build_ctcdecoder(\r\n> labels=list(vocab_dict.keys()),\r\n> kenlm_model_path=\"language_model/5gram.bin\",\r\n> unigrams=unigrams_list\r\n> )\r\n> \r\n> from transformers import Wav2Vec2ProcessorWithLM\r\n> processor_with_lm = Wav2Vec2ProcessorWithLM(\r\n> feature_extractor=processor.feature_extractor,\r\n> tokenizer=processor.tokenizer,\r\n> decoder=decoder\r\n> )\r\n> \r\n> from transformers import pipeline\r\n> pipe = pipeline(\"automatic-speech-recognition\", model=\"facebook/wav2vec2-large-960h-lv60-self\", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0)\r\n> ```\r\n> \r\n> This should correctly initialize the pipeline.\r\n\r\nHi @patrickvonplaten ,\r\n\r\nI've just tried your solution. However, it does not use the LM for decoding. `self.type` is always `\"ctc\"` as `feature_extractor._processor_class` is alway `None`. See here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/b487096b02307cd6e0f132b676cdcc7255fe8e74/src/transformers/pipelines/automatic_speech_recognition.py#L127\r\n\r\nAnd this is my code:\r\n\r\n``` python\r\n\r\nmodel = Wav2Vec2ForCTC.from_pretrained(\"./results/checkpoint-11600\").to(\"cuda\")\r\ntokenizer = Wav2Vec2CTCTokenizer.from_pretrained(\"./\", unk_token=\"[UNK]\", pad_token=\"[PAD]\", word_delimiter_token=\"|\")\r\nfeature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)\r\nprocessor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)\r\n\r\nvocab_dict = processor.tokenizer.get_vocab()\r\nsorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}\r\n\r\nfrom pyctcdecode import build_ctcdecoder\r\ndecoder = build_ctcdecoder(\r\n\tlabels=list(sorted_vocab_dict.keys()),\r\n\tkenlm_model_path=\"lm.small_3gram_correct.arpa\",\r\n)\r\n\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM(\r\n\tfeature_extractor=processor.feature_extractor,\r\n\ttokenizer=processor.tokenizer,\r\n\tdecoder=decoder\r\n)\r\n\r\npipe = AutomaticSpeechRecognitionPipeline(\r\n\tmodel=model,\r\n\ttokenizer=processor_with_lm.tokenizer,\r\n\tfeature_extractor=processor_with_lm.feature_extractor,\r\n\tdecoder=processor_with_lm.decoder,\r\n\tdevice=0)\r\n```\r\n\r\nAny clues?",
"Hey @anderleich, sorry could you add a new issue for the problem? It's always a bit hard to keep track of already answered issues :sweat_smile: ",
"Done! ;)"
] | 1,649
| 1,661
| 1,654
|
NONE
| null |
I created a `Wav2Vec2ProcessorWithLM` as described in the blog post (https://huggingface.co/blog/wav2vec2-with-ngram).
How can I use it in the `pipeline`?
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
vocab_dict = processor.tokenizer.get_vocab()
from pyctcdecode import build_ctcdecoder
unigrams_file = open("language_model/vocabulary.txt", "r")
unigrams_list = unigrams_file.readlines()
decoder = build_ctcdecoder(
labels=list(vocab_dict.keys()),
kenlm_model_path="language_model/5gram.bin",
unigrams=unigrams_list
)
from transformers import Wav2Vec2ProcessorWithLM
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model=processor_with_lm, device=0)
```
outputs
```
AttributeError: 'Wav2Vec2ProcessorWithLM' object has no attribute 'config'
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16759/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16758
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16758/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16758/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16758/events
|
https://github.com/huggingface/transformers/pull/16758
| 1,203,536,355
|
PR_kwDOCUB6oc42MFZt
| 16,758
|
Add onnx export of models with a multiple choice classification head
|
{
"login": "echarlaix",
"id": 80481427,
"node_id": "MDQ6VXNlcjgwNDgxNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/80481427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echarlaix",
"html_url": "https://github.com/echarlaix",
"followers_url": "https://api.github.com/users/echarlaix/followers",
"following_url": "https://api.github.com/users/echarlaix/following{/other_user}",
"gists_url": "https://api.github.com/users/echarlaix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echarlaix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echarlaix/subscriptions",
"organizations_url": "https://api.github.com/users/echarlaix/orgs",
"repos_url": "https://api.github.com/users/echarlaix/repos",
"events_url": "https://api.github.com/users/echarlaix/events{/privacy}",
"received_events_url": "https://api.github.com/users/echarlaix/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review as well as the reminder @sgugger !",
"Thanks for the reviews @lewtun and @michaelbenayoun. \r\nI have added some comments to make things clearer as well as the BigBird, Data2VecText, Electra and FlauBERT models support. Also when running the command line `RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py`, all the tests are passing.",
"Thanks for iterating on this @echarlaix - it looks great! "
] | 1,649
| 1,650
| 1,650
|
COLLABORATOR
| null |
This PR adds the export support of models with a multiple-choice classification head, resolving [#16695](https://github.com/huggingface/transformers/issues/16695)
This includes the following additions:
* The `"multiple-choice"` feature was added to the corresponding model topologies
* The dummy inputs are generated to match the expected input shape, which includes an extra dimension corresponding to the number of candidate answers
* The `inputs` method of each model's corresponding `OnnxConfig` was modified to support the additional dynamic axis corresponding to the number of candidates (see the sketch below)
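
For illustration, here is a minimal sketch of what the extra dynamic axis looks like in an `OnnxConfig`. The class name is hypothetical and this is not the exact code merged in this PR:

```python
# Hypothetical sketch of a multiple-choice OnnxConfig: only the handling of
# the extra "choice" axis mirrors what this PR describes.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class SketchMultipleChoiceOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        if self.task == "multiple-choice":
            # Inputs are (batch, num_choices, sequence) instead of (batch, sequence).
            dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
        else:
            dynamic_axis = {0: "batch", 1: "sequence"}
        return OrderedDict([("input_ids", dynamic_axis), ("attention_mask", dynamic_axis)])
```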
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16758/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16758",
"html_url": "https://github.com/huggingface/transformers/pull/16758",
"diff_url": "https://github.com/huggingface/transformers/pull/16758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16758.patch",
"merged_at": 1650376311000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16757
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16757/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16757/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16757/events
|
https://github.com/huggingface/transformers/pull/16757
| 1,203,517,591
|
PR_kwDOCUB6oc42MBYy
| 16,757
|
[self-scheduled ci] explain where dependencies are
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
As discussed on Slack, this explains where the dependencies are located when the Docker images are used.
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16757/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16757",
"html_url": "https://github.com/huggingface/transformers/pull/16757",
"diff_url": "https://github.com/huggingface/transformers/pull/16757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16757.patch",
"merged_at": 1649867282000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16756
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16756/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16756/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16756/events
|
https://github.com/huggingface/transformers/pull/16756
| 1,203,496,746
|
PR_kwDOCUB6oc42L8-j
| 16,756
|
Add warning when using older version of torch for ViltFeatureExtractor
|
{
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'd update to use `logging.warning` instead of `warnings.warn`, but other than that it looks like a sound approach.",
"Ok I made the change",
"Sorry should have been clearer, in this instance you should move the `logger` instantiation above, and use `logger.warning`. See an example here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/pipelines/base.py#L234-L237",
"@LysandreJik Ok done",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Ah, before merging we'll need to update the code quality.\r\n\r\nCould you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:\r\n```\r\npip install -e \".[quality]\"\r\n```\r\nAnd then run them with:\r\n```\r\nmake fixup\r\n```",
"@LysandreJik just updated it",
"@xhlulu it seems that the CI still isn't green, you can click on the check \"check_code_quality\" above to see why it's failing.",
"```\r\nTraceback (most recent call last):\r\n File \"/home/circleci/.local/bin/doc-builder\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/circleci/.local/lib/python3.6/site-packages/doc_builder/commands/doc_builder_cli.py\", line 43, in main\r\n args.func(args)\r\n File \"/home/circleci/.local/lib/python3.6/site-packages/doc_builder/commands/style.py\", line 28, in style_command\r\n raise ValueError(f\"{len(changed)} files should be restyled!\")\r\nValueError: 30 files should be restyled!\r\n\r\nExited with code exit status 1\r\n\r\nCircleCI received exit code 1\r\n```",
"@NielsRogge i just updated to match upstream, should work now",
"Hmm @xhlulu I checked and there's no `meshgrid` being used in `ViltFeatureExtractor`.\r\n\r\nIt's only the model that requires torch 1.10 or higher, right? Not the feature extractor?",
"It's in `ViltEmbeddings`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/39f8eafc1b6f00769240f714e2df5b2c5f111c32/src/transformers/models/vilt/modeling_vilt.py#L115-L151",
"I moved the error message, but when trying to run the make fixup, it gives this:\r\n```\r\n(venv) xhlu@XHL-Desktop:~/dev/transformers$ make fixup\r\nmake: execvp: /bin/sh: Argument list too long\r\nmake: *** [Makefile:10: modified_only_fixup] Error 127\r\n```",
"Unrelated error, merging!"
] | 1,649
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
Closes https://github.com/huggingface/transformers/issues/16637
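
As an illustration of the kind of guard discussed in the review, here is a hedged sketch assuming the torch >= 1.10 cutoff for `torch.meshgrid`'s `indexing` argument mentioned above; the exact placement and message in the merged PR may differ:

```python
# A hedged sketch of the version warning (hypothetical placement and wording):
# warn on torch < 1.10, where torch.meshgrid lacks the `indexing` argument
# that ViLT's embeddings rely on.
import torch
from packaging import version

from transformers.utils import logging

logger = logging.get_logger(__name__)

if version.parse(torch.__version__) < version.parse("1.10.0"):
    logger.warning(
        "ViLT relies on torch.meshgrid(..., indexing=...) introduced in torch 1.10; "
        f"you are running torch {torch.__version__}, so results may differ."
    )
```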
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16756/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/16756/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16756",
"html_url": "https://github.com/huggingface/transformers/pull/16756",
"diff_url": "https://github.com/huggingface/transformers/pull/16756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16756.patch",
"merged_at": 1654082139000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16755
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16755/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16755/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16755/events
|
https://github.com/huggingface/transformers/pull/16755
| 1,203,491,402
|
PR_kwDOCUB6oc42L72N
| 16,755
|
Kill async pushes when calling push_to_hub with blocking=True
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I think this is the right solution, but I wonder if we can't test it in a reproducible manner. Maybe we could do something like the following:\r\n\r\n- Commit a largish file, and push that\r\n- Reset the head without keeping the changes so that we're back on the previous commit\r\n- Commit a small file, and push that.\r\n\r\nNow the first commit will likely fail with the error above. Unfortunately, this is likely a very heavy test, so I'm not sure it should be part of the CI. It can stil be used to test the validity of the solution above, though!",
"Actually @philschmid but let's say we are the same person ;-) ",
"For more context tested with `roberta-large`"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes a bug that sometimes appears in the `Trainer` when `push_to_hub=True`: if one of the async pushes finishes after a regular non-async push, the history gets messed up and we end up with an error like this:
```
The push command with PID 1468 failed.
remote: error: cannot lock ref 'refs/heads/main': is at 07c85fd69cd46a7daee6323c5a5eefc3e6a886da but expected 1fd7f122ef725f8e340a14bc97d537812de44076
```
To fix this, when the `Trainer` (or the user) calls `push_to_hub` with `blocking=True`, we interrupt any push in progress. The commit history will still be good since the commits don't take time.
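For context, a minimal sketch of the pattern (hypothetical helper names, not the `Trainer`'s actual internals):

```python
# Hypothetical sketch: before a blocking push, kill any async push that is
# still running so the two pushes cannot race on the remote ref.
import subprocess
from typing import Optional


def push_blocking(repo_dir: str, async_push: Optional[subprocess.Popen] = None) -> None:
    if async_push is not None and async_push.poll() is None:
        async_push.kill()  # drop the racing push; its commit is already recorded locally
        async_push.wait()
    # The blocking push then sends the full local history in one go.
    subprocess.run(["git", "-C", repo_dir, "push"], check=True)
```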
cc @philschmid who had the error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16755/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16755/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16755",
"html_url": "https://github.com/huggingface/transformers/pull/16755",
"diff_url": "https://github.com/huggingface/transformers/pull/16755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16755.patch",
"merged_at": 1649944950000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16754
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16754/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16754/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16754/events
|
https://github.com/huggingface/transformers/pull/16754
| 1,203,476,381
|
PR_kwDOCUB6oc42L4qo
| 16,754
|
Update GPT2 I/O definition to be recognized by ORT optimizer.
|
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16754). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,655
| 1,655
|
MEMBER
| null |
This PR aims to make GPT2 + past more compatible with the ONNX Runtime optimizer (especially attention fusion).
Past key values should be presented as a single tensor with both key and value stacked on the leading axis.
It also provides a concat mechanism to merge the resulting past key values so they form a single tensor too.
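As a rough illustration (hypothetical helper names, not this PR's actual code), the layout change amounts to:

```python
# Hypothetical helpers illustrating the single-tensor past layout:
# key and value stacked on a new leading axis.
from typing import Tuple

import torch


def stack_past(key: torch.Tensor, value: torch.Tensor) -> torch.Tensor:
    # key/value: (batch, num_heads, seq_len, head_dim)
    # returns:   (2, batch, num_heads, seq_len, head_dim)
    return torch.stack([key, value], dim=0)


def unstack_past(past: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    # Inverse of stack_past: recover the (key, value) pair.
    return past[0], past[1]
```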
- [x] I/O shapes changes
- [x] I/O dtype changes
- [ ] Past keys wrapper
- [ ] Monkey Patching
- [ ] Unittests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16754/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16754",
"html_url": "https://github.com/huggingface/transformers/pull/16754",
"diff_url": "https://github.com/huggingface/transformers/pull/16754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16754.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16753
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16753/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16753/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16753/events
|
https://github.com/huggingface/transformers/issues/16753
| 1,203,434,877
|
I_kwDOCUB6oc5HuvV9
| 16,753
|
ValueError: Reference at 'refs/heads/master' does not exist
|
{
"login": "deema-A",
"id": 60605574,
"node_id": "MDQ6VXNlcjYwNjA1NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/60605574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deema-A",
"html_url": "https://github.com/deema-A",
"followers_url": "https://api.github.com/users/deema-A/followers",
"following_url": "https://api.github.com/users/deema-A/following{/other_user}",
"gists_url": "https://api.github.com/users/deema-A/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deema-A/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deema-A/subscriptions",
"organizations_url": "https://api.github.com/users/deema-A/orgs",
"repos_url": "https://api.github.com/users/deema-A/repos",
"events_url": "https://api.github.com/users/deema-A/events{/privacy}",
"received_events_url": "https://api.github.com/users/deema-A/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Same question from training yolov5, it seems that there are no effective solutions..."
] | 1,649
| 1,680
| 1,653
|
NONE
| null |
Hi,
in the RAG example.
I got the error (ValueError: Reference at 'refs/heads/master' does not exist)
in
/opt/anaconda3/envs/chatbotper/lib/python3.7/site-packages/git/refs/symbolic.py", line 184, in _get_ref_info_helper
raise ValueError("Reference at %r does not exist" % ref_path)
after running:
python examples/research_projects/rag/finetune_rag.py \
--data_dir data_dir \
--output_dir output_dir \
--model_name_or_path facebook/rag-sequence-nq \
--model_type rag_sequence \
--fp16 \
--gpus 8 \
--index_name custom \
--passages_path path/to/data/my_knowledge_dataset \
--index_path path/to/my_knowledge_dataset_hnsw_index.faiss
any advice? thanx
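
For anyone debugging this, a hedged diagnostic sketch, assuming the error is raised by GitPython, which produces this message when a branch ref such as 'master' is missing, e.g. because the repo's default branch is 'main':

```python
# Diagnostic sketch using GitPython (the library raising the error above):
# list which branch refs actually exist in the repo being inspected.
import git

repo = git.Repo(".", search_parent_directories=True)
print(repo.head.ref)                 # e.g. 'main' on newer clones
print([str(r) for r in repo.refs])   # 'master' may simply be absent
```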
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16753/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16752
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16752/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16752/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16752/events
|
https://github.com/huggingface/transformers/issues/16752
| 1,203,387,760
|
I_kwDOCUB6oc5Huj1w
| 16,752
|
`translation_XX_to_YY` pipeline warnings about no max_length when both max_length and truncation are provided
|
{
"login": "erip",
"id": 2348806,
"node_id": "MDQ6VXNlcjIzNDg4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2348806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erip",
"html_url": "https://github.com/erip",
"followers_url": "https://api.github.com/users/erip/followers",
"following_url": "https://api.github.com/users/erip/following{/other_user}",
"gists_url": "https://api.github.com/users/erip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erip/subscriptions",
"organizations_url": "https://api.github.com/users/erip/orgs",
"repos_url": "https://api.github.com/users/erip/repos",
"events_url": "https://api.github.com/users/erip/events{/privacy}",
"received_events_url": "https://api.github.com/users/erip/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @erip ,\r\n\r\nIt seems the tokenizer does not define itself `tokenizer.model_max_length` which is usually used to set the max length (so `truncation=True` can have a meaning).\r\n\r\nThe problem with passing `max_length` as you do, is that this is actually passed to the `generate(..)` function, which **also** has a `max_length` (it means the maximum length of the generated content).\r\n\r\nYou can tentatively fix by doing this:\r\n\r\n```python\r\n\r\nmodel = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-50-many-to-one-mmt\")\r\ntokenizer = MBart50TokenizerFast.from_pretrained(\"facebook/mbart-large-50-many-to-one-mmt\")\r\ntokenizer.src_lang = \"es_XX\"\r\ntokenizer.tgt_lang = \"en_XX\"\r\ntokenizer.model_max_length = 1024 # <--------------------------------------\r\n\r\npipe = pipeline(\"translation_es_to_en\", model=model, tokenizer=tokenizer, src_lang=\"es_XX\", tgt_lang=\"en_XX\", device=0, batch_size=16)\r\n\r\ntranslations = pipe(\"X\"*1000, num_beams=5, max_length=512, truncation=True)\r\n# Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.\r\nprint(translations)\r\n````\r\n\r\nHowever I feel like this field should be inferred automatically, can you confirm/infirm @patil-suraj ?",
"Hmm, I thought I had also tried populating `max_model_length` when using `...Tokenizer.from_pretrained`, but I will need to double-check.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: None
### Who can help
@patil-suraj @Narsil
## Information
Model I am using (Bert, XLNet ...): mBART
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create `translation_XX_to_YY` pipeline with `max_length` populated and `truncation=True`
2. Run an example through the pipeline
3. Observe warning
```python
from transformers import pipeline, MBart50TokenizerFast, MBartForConditionalGeneration
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
tokenizer.src_lang = "es_XX"
tokenizer.tgt_lang = "en_XX"
pipe = pipeline("translation_es_to_en", model=model, tokenizer=tokenizer, src_lang="es_XX", tgt_lang="en_XX", device=0, batch_size=16)
translations = pipe("X"*1000, num_beams=5, max_length=512, truncation=True)
# Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
print(translations)
# [{'translation_text': 'enXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}]
```
## Expected behavior
This warning should not appear. It's unclear whether the max_length is actually respected here, but since the model doesn't die, it seems it might be.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16752/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16751
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16751/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16751/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16751/events
|
https://github.com/huggingface/transformers/pull/16751
| 1,203,374,732
|
PR_kwDOCUB6oc42LjFA
| 16,751
|
CI: setup-dependent pip cache
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool, porting the changes to the other two files as well then 👍 ",
"@gante: to learn more from you: how you figured out the cause for the error you mentioned:\r\n\r\nhttps://github.com/huggingface/transformers/runs/6007067240?check_suite_focus=true\r\n\r\nIf it was me, I don't even know if I could figure this out!",
"@ydshieh It definitely helps that I had this exact issue (stale CI caches) in my previous role :) \r\n\r\nTo pin the error to this issue, I reran the failing CI workflow locally, from a fresh virtual env. Since it ran without issues, I had a look at the `.yml` file, and saw that it had a cache for `pip`. Then I went on to see that `pip install -e .[dev]` was doing in the failing CI file, and I noticed that it had error messages due to incompatible package versions, which I did not have locally -- because an old version was cached."
] | 1,649
| 1,649
| 1,649
|
MEMBER
| null |
# What does this PR do?
This PR makes two changes to the way we cache our pip dependencies in the `add-model-like.yml` GH actions workflow:
1. The name of the cache depends on the hash of `setup.py`;
2. We do not restore the cache from partial name matches.
(this pattern exists in one of our CI files, `github-torch-hub.yml` , [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/github-torch-hub.yml#L30))
Together, these changes will make us start from a fresh environment whenever we change `setup.py`. Having a stale cache was causing us dependency problems (e.g. an [old, incompatible protobuf version](https://github.com/huggingface/transformers/runs/6007067240?check_suite_focus=true)), and potentially made us miss dependency issues from fresh installs.
If you agree, I will also port these changes to `model-templates.yml` and `update_metadata.yml`, which have the same pattern/issue. EDIT: ported.
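
As a rough illustration of the keying idea (the workflow itself uses GitHub Actions' `hashFiles` expression; this is the same idea in plain Python):

```python
# Sketch of the cache-key idea: derive the key from setup.py's contents so
# any dependency change produces a fresh cache instead of a stale restore.
import hashlib
from pathlib import Path

cache_key = "v1-pip-" + hashlib.sha256(Path("setup.py").read_bytes()).hexdigest()[:16]
print(cache_key)
```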
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16751/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16751",
"html_url": "https://github.com/huggingface/transformers/pull/16751",
"diff_url": "https://github.com/huggingface/transformers/pull/16751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16751.patch",
"merged_at": 1649863154000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16750
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16750/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16750/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16750/events
|
https://github.com/huggingface/transformers/issues/16750
| 1,203,360,682
|
I_kwDOCUB6oc5HudOq
| 16,750
|
Batch size < GPU number when training with Trainer and deepspeed.
|
{
"login": "zhaowei-wang-nlp",
"id": 22047467,
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaowei-wang-nlp",
"html_url": "https://github.com/zhaowei-wang-nlp",
"followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers",
"following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs",
"repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos",
"events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"cc @stas00 for DeepSpeed ;-)",
"@zhaowei-wang98, could you please try again explaining what is the problem that you're running into? Please show the actual command line / config you're running as I have a hard time understanding your Issue.\r\n\r\n> So when I use more GPUs, the batch size must increase at the same time, which will cost must more GPU memory\r\n\r\nThere is no such requirement. \r\n\r\nIn general there is no problem running T5-11B on A100 (40GB) w/ Deepspeed ZeRO-3 - or at least it worked last time I run it - It was done already more than a year ago, perhaps have a look at this old thread https://github.com/huggingface/transformers/issues/9996 and then if you're still stuck tell us more details about your particular setup?",
"> \r\n\r\nHi @stas00,\r\nI am trying to fine-tune t11-3b without CPU offload. So, all the parameters in the model and momentum in the optimizer are loaded on the GPUs. I do this because I found it is very slow to use CPU offload (I have 500k data with an average length of 32 for both input and output). \r\nIn other words, I deleted:\r\n\"offload_param\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n },\r\nin the deepspeed configuration file: https://github.com/huggingface/transformers/blob/main/tests/deepspeed/ds_config_zero3.json\r\n\r\nIn contrast, your old thread #9996 used CPU RAM to store the model.",
"OK, but I still can't help you since I don't know how to reproduce your issue as you gave no specific instructions to do so, nor you shared anything about your setup other than the type of GPUs.\r\n\r\nBut I can probably do some guessing:\r\n\r\n### understanding the memory requirements\r\n\r\nTo train t5-11b you need at least `11*18=200`GB of memory just for the optim/grads/weights (I assume mixed precision) plus memory for activations and temps, so let's say roughly 240GB. \r\n\r\nWith 40GB GPUs, that means at least 6 gpus. So it should be possible to load it on 8x 40GB gpus using deepspeed w/o any offload.\r\n\r\n### use the sharded checkpoint\r\n\r\nAlso I recommend for you to switch to the sharded version of t5-11b which I have just [made](https://github.com/huggingface/transformers/issues/16884), by passing to the trainer: `--model_name_or_path t5-11b --model_revision sharded` and use `huggingface@main` as this feature hasn't yet been released.\r\n\r\nBecause if you don't shard you would need 44GB of CPU memory per process, just to load the checkpoint (deepspeed shards it directly to gpus). And with 8 gpus you'd need 352GB of CPU memory just to load 8 checkpoints concurrently.\r\n\r\nI think with 10GB shards some 100GB of CPU memory should be enough to load the checkpoints concurrently in 8 processes, but then there are extras to copy things and temps.\r\n\r\n----------------\r\n\r\nI'd be very happy to help you sort it out, but you need to help me first. To continue please be very specific:\r\n\r\n1. here is my hardware setup\r\n2. here is my software setup\r\n3. here is my command line (using public data) and ds config file to reproduce the problem with \r\n4. here is the traceback\r\n\r\nThank you!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,653
| 1,653
|
NONE
| null |
# 🚀 Feature request
Hi, I am fine-tuning T5-11b using the Trainer with the deepspeed feature. I use the deepspeed zero3 stage to split T5-11b and its gradients across different GPUs. However, when I try to use multiple GPUs, I found that the argument `per_device_train_batch_size` must be an integer, which means it is at least 1. So when I use more GPUs, the batch size must increase at the same time, which costs much more GPU memory. Thus, it turns out that I can't fine-tune T5-11b with 2, 4 or 8 A100 (40G) GPUs. So, in general, the deepspeed feature doesn't solve the memory issue if the model's size is similar to or larger than the memory of one of the GPUs.
So, I am requesting support for a total batch size smaller than the number of GPUs, e.g. a train batch size of 2 on 4 GPUs.
Here is the link to the argument `per_device_train_batch_size`:
https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L122
## Motivation
Fine-tune an extremely large model with a few small-memory GPUs.
## Your contribution
I think the deepspeed package supports this feature already. So, adding this feature to Trainer is not hard.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16750/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16749
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16749/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16749/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16749/events
|
https://github.com/huggingface/transformers/issues/16749
| 1,203,359,334
|
I_kwDOCUB6oc5Huc5m
| 16,749
|
Large differences between T5 weight initialization in TF and torch
|
{
"login": "jorgemcgomes",
"id": 3987574,
"node_id": "MDQ6VXNlcjM5ODc1NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3987574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorgemcgomes",
"html_url": "https://github.com/jorgemcgomes",
"followers_url": "https://api.github.com/users/jorgemcgomes/followers",
"following_url": "https://api.github.com/users/jorgemcgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/jorgemcgomes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorgemcgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorgemcgomes/subscriptions",
"organizations_url": "https://api.github.com/users/jorgemcgomes/orgs",
"repos_url": "https://api.github.com/users/jorgemcgomes/repos",
"events_url": "https://api.github.com/users/jorgemcgomes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorgemcgomes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante @Rocketknight1 ",
"Thanks a lot for the issue @jorgemcgomes! I think for PyTorch and Tensorflow we actually never really made sure that the init is correct because we mostly focused on fine-tuning. But we should correct this! \r\n\r\nI think I made sure that the init is correct in Flax's T5 implementation so we could/should use this as a gold-standard. So let's look there at the Embedding and lm_head:\r\n- https://github.com/huggingface/transformers/blob/048443db863214aef9c8341517b427edced63c81/src/transformers/models/t5/modeling_flax_t5.py#L1366\r\n- https://github.com/huggingface/transformers/blob/048443db863214aef9c8341517b427edced63c81/src/transformers/models/t5/modeling_flax_t5.py#L1384\r\n\r\nSo it looks like for both the word embeddings and the lm_head the init should be:\r\n\r\n`random_normal(mean=0, stddev=config.initializer_factor)`\r\n\r\nGuess PyTorch got one right and TF the other one. \r\n\r\n@craffel can you confirm this, that a gaussian normal distribution is used as an init for T5's word embeddings and language model head (in case it's not tied to the word embeddings)",
"Yep, MTF initializes embeddings as a standard Gaussian. https://github.com/tensorflow/mesh/blob/a32810e32709e0eaad3b241475d3be0957409adc/mesh_tensorflow/layers.py#L2096",
"@jorgemcgomes Thanks for spotting this! Would you be willing to make a PR to bring the TF/PT implementations in line with the JAX one?",
"Sure. I can do that in the coming days. But there might be more to this.\r\n\r\nBased on the experiences I was doing (my problem/data is very specific, and I'm running some modifications in T5, so take this with a grain of salt), \"fixing\" the lm_head init (from σ=0.03 to σ=1.0) caused huge initial train/valid losses, even causing instability with the same LR.\r\n\r\nThere's this interesting bit:\r\n\r\nhttps://github.com/huggingface/transformers/blob/de8b06f9bf908ef1e6317ecb1f74a02313eee72e/src/transformers/models/t5/modeling_t5.py#L1662-L1667\r\n\r\n* with tie_word_embeddings=True, the input to the final layer is scaled down by d^-0.5 and multiplied with standard gaussian weights (the embeddings weights).\r\n* with tie_word_embeddings=False, the input to the final layer is **not** scaled down, and **if the proposed fix is introduced** it is also multiplied with standard gaussian weights (the lm_head weights). This doesn't sound right, and can explain the large loss values and instability I mentioned.\r\n\r\nAnd it might also explain why the current PT implementation of T5v1.1 appears to be working fine: the sequence input is not scaled down, but it is being multiplied with small weights instead (initialised with σ=d^-0.5). Two wrongs that cancel each other?\r\nThis would mean that the current PT implementation is \"fine\", but TF and Flax are broken.",
"That explanation makes sense to me. Just to confirm, is training stable in the current version with the small TF/Flax init?",
"To summarise, based on my experiments with a non-tied LM head (T5v1.1):\r\n\r\n- small embeddings init, small lm_head init --> stable\r\n- small embeddings init, large lm_head init [as in TF] --> unstable\r\n- large embeddings init, small lm_head init [as in PT] --> stable\r\n- large embeddings init, large lm_head init [as in Flax] --> unstable\r\n\r\nThe init of the embeddings doesn't seem to matter that much at all. Maybe layer norm takes care of that?\r\n\r\nAnd large lm_head inits (as found in the current TF and Flax implementations) are always unstable.",
"Training\r\n\r\n> Yep, MTF initializes embeddings as a standard Gaussian. https://github.com/tensorflow/mesh/blob/a32810e32709e0eaad3b241475d3be0957409adc/mesh_tensorflow/layers.py#L2096\r\n\r\nThanks for looking this up. So I think both embeddings should then be initialized as: tf.random_normal_initializer(\r\n mean=0.0, stddev=0.05, seed=None\r\n)\r\n\r\nmeaning `self.config.initializer_factor` should be set to 0.05. \r\n\r\nThe most important thing is to match the original code-base here. Don't think we need to run different pretrainings to find the best init scheme since stability is always data-dependent. \r\n\r\n=> Seems like the Flax init methods were good to me so I'd suggest to just apply this to PT and TF as well ",
"@jorgemcgomes, \r\n\r\nWould you like to open a PR to fix the initialization for T5 here as described in the comment above? Otherwise happy to take over the issue!",
"Please take over the issue @patrickvonplaten . This got pretty muddy and I'm not sure what is the right approach here."
] | 1,649
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
- `transformers` version: 4.18.0, master branch
### Who can help
@patrickvonplaten
I found some significant differences in weight init between the PT and TF implementations of T5.
The **embeddings** (model.shared):
- In PT, according to `T5PreTrainedModel._init_weights`, they are initialized with random normal with std=1.0:
`module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0)`
- In TF (TFT5Model), the embeddings are initialized as such:
`self.shared = TFSharedEmbeddings(config.vocab_size, config.d_model, name="shared")`
Since initializer_range is not being provided, it is using the default, which is `hidden_size**-0.5` (see TFSharedEmbeddings).
This means that in the base model (d=768), the weights in PT are being initialized with **stdev=1.0**, and in TF they are being initialized with **stdev=0.036**.
The **LM head** (model.lm_head):
- In PT, the initializer is not specified, meaning it is being initialized with a uniform distribution in [-sqrt(1/d_model), sqrt(1/d_model)] (https://pytorch.org/docs/stable/generated/torch.nn.Linear.html). The weights don't seem to be initialized in _init_weights either.
`lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)`
- In TF, the initializer is explicitly provided (TFT5ForConditionalGeneration):
`lm_head_initializer = tf.keras.initializers.RandomNormal(mean=0, stddev=config.initializer_factor)`
So, in the base model, the weights in PT are initialized with a uniform distribution of **[-0.036, 0.036]**, and in TF they are initialized with a random normal with **stdev=1.0**.
I'm not entirely sure about the actual implications of this for model training. But at the very least, the lm_head weights will have a huge impact on the initial loss values.
Based on other transformer models I've seen, the "correct" answer seems to be that both weights should be initialised with stdev=1.0. But none of the implementations actually does this.
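A quick standalone check of the magnitudes involved (a sketch of the arithmetic above, not library code):
```python
# Standalone sketch: the effective init stddevs described above for the
# base model (d_model=768).
d_model = 768
pt_embedding_std = 1.0              # PT: factor * 1.0 in _init_weights
tf_embedding_std = d_model ** -0.5  # TF: TFSharedEmbeddings default
print(f"PT embeddings std: {pt_embedding_std}")
print(f"TF embeddings std: {tf_embedding_std:.3f}")  # ~0.036
```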
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16749/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16748
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16748/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16748/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16748/events
|
https://github.com/huggingface/transformers/pull/16748
| 1,203,306,291
|
PR_kwDOCUB6oc42LUj_
| 16,748
|
[SpeechEncoderDecoderModel] Fix bug in reshaping labels
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sounds good! ",
"If I remember correctly `reshape()` == `view()` if the tensor does not need to call `contiguous()`, so good for me!",
"> If I remember correctly `reshape()` == `view()` if the tensor does not need to call `contiguous()`, so good for me!\r\n\r\nYes, exactly that! Calling `reshape()` returns `view()` if the shapes are compatible, and copies (equivalent to calling [`contiguous()`](https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html#torch.Tensor.contiguous)) otherwise."
] | 1,649
| 1,687
| 1,649
|
CONTRIBUTOR
| null |
Currently, the target `labels` are reshaped using the `view` method before being passed into the loss function:
https://github.com/huggingface/transformers/blob/06b4aac9ebab77a0065ec2cab40a8085ad71946f/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L560
The `view` method requires the Torch Tensor to be _contiguous_ (_cf_ https://pytorch.org/docs/stable/generated/torch.Tensor.view.html).
There are certain operations that are commonly performed on the `labels` that might cause them to not be contiguous, for example _slicing_. For speech seq2seq models, if the bos token is appended in the tokenisation step, we cut the bos token by slicing the `labels` as follows:
https://github.com/huggingface/transformers/blob/06b4aac9ebab77a0065ec2cab40a8085ad71946f/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L207-L210
This slicing operation causes the `labels` to be not contiguous. If `labels` are not contiguous, calling `labels.view(-1)` will throw a RuntimeError. This is demonstrated by the following code snippet:
```python
import torch
labels = torch.ones((2, 10), dtype=torch.int64)
print(f"Contiguous without slicing: {labels.is_contiguous()}")
labels.view(-1)
labels = torch.ones((2, 10), dtype=torch.int64)
labels = labels[:, 1:]
print(f"Contiguous with slicing: {labels.is_contiguous()}")
labels.view(-1)
```
Output:
```
Contiguous without slicing: True
Contiguous with slicing: False
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [137], in <cell line: 10>()
8 labels = labels[:, 1:]
9 print(f"Contiguous with slicing: {labels.is_contiguous()}")
---> 10 labels.view(-1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
And similarly for the speech encoder-decoder model:
```python
import torch
from transformers import SpeechEncoderDecoderModel
model = SpeechEncoderDecoderModel.from_pretrained('hf-internal-testing/tiny-random-speech-encoder-decoder')
input_values = torch.ones((2, 1000), dtype=torch.float32)
labels = torch.ones((2, 10), dtype=torch.int64)
labels = labels[:, 1:]
outputs = model(input_values, labels=labels)
```
Output:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [138], in <cell line: 11>()
8 labels = torch.ones((2, 10), dtype=torch.int64)
9 labels = labels[:, 1:]
---> 11 outputs = model(input_values, labels=labels)
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py:560, in SpeechEncoderDecoderModel.forward(self, inputs, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, input_values, input_features, return_dict, **kwargs)
558 logits = decoder_outputs.logits if return_dict else decoder_outputs[0]
559 loss_fct = CrossEntropyLoss()
--> 560 loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1))
562 if not return_dict:
563 if loss is not None:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
This PR follows the advice provided in the PyTorch docs by calling the `.reshape(...)` method instead of `.view(...)`. Calling `reshape` returns a view if the shapes are compatible, and copies (equivalent to calling [`contiguous()`](https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html#torch.Tensor.contiguous)) otherwise.
```python
import torch
labels = torch.ones((2, 10), dtype=torch.int64)
labels = labels[:, 1:]
print(f"Contiguous with slicing: {labels.is_contiguous()}")
labels.reshape(-1) # no error despite labels being non-contiguous
```
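For completeness, an equivalent workaround (a sketch, not part of this PR) is to make the tensor contiguous before calling `view`:
```python
import torch

labels = torch.ones((2, 10), dtype=torch.int64)[:, 1:]  # non-contiguous after slicing
flat = labels.contiguous().view(-1)  # contiguous() copies only when needed
print(flat.shape)  # torch.Size([18])
```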
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16748/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16748",
"html_url": "https://github.com/huggingface/transformers/pull/16748",
"diff_url": "https://github.com/huggingface/transformers/pull/16748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16748.patch",
"merged_at": 1649959360000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16747
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16747/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16747/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16747/events
|
https://github.com/huggingface/transformers/issues/16747
| 1,203,290,627
|
I_kwDOCUB6oc5HuMID
| 16,747
|
pointer to transformer (big) model
|
{
"login": "anirudt",
"id": 5916149,
"node_id": "MDQ6VXNlcjU5MTYxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5916149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anirudt",
"html_url": "https://github.com/anirudt",
"followers_url": "https://api.github.com/users/anirudt/followers",
"following_url": "https://api.github.com/users/anirudt/following{/other_user}",
"gists_url": "https://api.github.com/users/anirudt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anirudt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anirudt/subscriptions",
"organizations_url": "https://api.github.com/users/anirudt/orgs",
"repos_url": "https://api.github.com/users/anirudt/repos",
"events_url": "https://api.github.com/users/anirudt/events{/privacy}",
"received_events_url": "https://api.github.com/users/anirudt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"@anirudt Can I work on this issue?",
"Any leads on this?\r\n"
] | 1,649
| 1,681
| null |
NONE
| null |
# 🌟 New model addition
## Model description
Hi, I need a pointer on how to instantiate a Transformer-big model from the original Vaswani et al. paper (Attention Is All You Need). I could only find versions of Transformer-like architectures, so it would be useful if this could also be added.
## Open source status
* [x] the model implementation is available: (give details): https://research.google/pubs/pub46201/
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16747/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/16746
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16746/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16746/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16746/events
|
https://github.com/huggingface/transformers/issues/16746
| 1,203,228,057
|
I_kwDOCUB6oc5Ht82Z
| 16,746
|
Tensor size mismatch in RoBERTa
|
{
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I found out what it was:\r\n```\r\nToken indices sequence length is longer than the specified maximum sequence length for this model (746 > 512). Running this sequence through the model will result in indexing errors\r\n```\r\nIt would've been easier to diagnose if whatever triggered this message had also emitted an explicit error, I think.\r\n\r\nWrapping my loop in `try:` and `except RuntimeError:` allowed me to skip this problematic datapoint even without filtering the dataset based on input sequence length.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Getting the same error here. I think that the tokenizer is not truncating correctly, that would be a bug, no?",
"Getting the same error here.Have you solved this problem?"
] | 1,649
| 1,693
| 1,653
|
CONTRIBUTOR
| null |
The following error pops up while running a `TranslationPipeline` using a PyTorch `EncoderDecoderModel` consisting of two `RoBERTas` (@patrickvonplaten, @LysandreJik). Curiously, it happens on a very specific datapoint in a large-ish dataset, but I'm having trouble digging it out (it does well on tens of thousands of examples prior to that, though). I think it's the same issue as https://github.com/microsoft/CodeBERT/issues/73, but I don't know how to go about fixing it. Many thanks for any pointers! I'm using `transformers==4.18.0` and the same issue was present on `4.17`, too.
```python
File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py:159, in Text2TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs)
157 generate_kwargs["max_length"] = generate_kwargs.get("max_length", self.model.config.max_length)
158 self.check_inputs(input_length, generate_kwargs["min_length"], generate_kwargs["max_length"])
--> 159 output_ids = self.model.generate(**model_inputs, **generate_kwargs)
160 out_b = output_ids.shape[0]
161 if self.framework == "pt":
File ~/my-repo/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/generation_utils.py:1156, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs)
1149 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
1150 inputs_tensor, pad_token_id, eos_token_id
1151 )
1153 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
1154 # if model is encoder decoder encoder_outputs are created
1155 # and added to `model_kwargs`
-> 1156 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
1157 inputs_tensor, model_kwargs, model_input_name
1158 )
1160 # 4. Prepare `input_ids` which will be used for auto-regressive generation
1161 if self.config.is_encoder_decoder:
File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/generation_utils.py:524, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
522 encoder_kwargs["return_dict"] = True
523 encoder_kwargs[model_input_name] = inputs_tensor
--> 524 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
526 return model_kwargs
File ~/my-repo/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~/my-repo/.venv/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:817, in RobertaModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
815 if hasattr(self.embeddings, "token_type_ids"):
816 buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
--> 817 buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
818 token_type_ids = buffered_token_type_ids_expanded
819 else:
RuntimeError: The expanded size of the tensor (746) must match the existing size (514) at non-singleton dimension 1. Target sizes: [8, 746]. Tensor sizes: [1, 514]
```
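As diagnosed in the issue comments, the underlying cause is an input longer than the model's 512-token limit. A minimal sketch of the usual fix (assuming a `roberta-base`-style tokenizer) is to truncate at tokenisation time:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed checkpoint
enc = tokenizer("some very long text " * 500, truncation=True, max_length=512)
print(len(enc["input_ids"]))  # <= 512, so no position/token_type overflow
```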
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16746/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16745
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16745/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16745/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16745/events
|
https://github.com/huggingface/transformers/issues/16745
| 1,203,184,118
|
I_kwDOCUB6oc5HtyH2
| 16,745
|
KeyError when using AutoTokenizer for facebook/detr-resnet-*
|
{
"login": "grassjelly",
"id": 5070395,
"node_id": "MDQ6VXNlcjUwNzAzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5070395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grassjelly",
"html_url": "https://github.com/grassjelly",
"followers_url": "https://api.github.com/users/grassjelly/followers",
"following_url": "https://api.github.com/users/grassjelly/following{/other_user}",
"gists_url": "https://api.github.com/users/grassjelly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grassjelly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grassjelly/subscriptions",
"organizations_url": "https://api.github.com/users/grassjelly/orgs",
"repos_url": "https://api.github.com/users/grassjelly/repos",
"events_url": "https://api.github.com/users/grassjelly/events{/privacy}",
"received_events_url": "https://api.github.com/users/grassjelly/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nDETR is a vision model, not a text model, hence it doesn't have a tokenizer, but a so-called feature extractor (useful for preparing images for the model). You can load it using the AutoFeatureExtractor API:\r\n\r\n```\r\nfrom transformers import AutoFeatureExtractor\r\n\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(\"facebook/detr-resnet-101\")\r\n```",
"thanks @NielsRogge . That worked for me."
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
## Environment info
- `transformers` version: 4.16.0
- Platform: Ubuntu 20.04
- Python version: 3.8.12
- PyTorch version (GPU?): 1.11.0
- Tensorflow version (GPU?): NA
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: No
### Who can help
- DETR: @NielsRogge
- Tokenizers: @SaulLu
## Information
I'm following this tutorial https://huggingface.co/docs/transformers/serialization on how to export models to ONNX. Trying to export one for DETR but I can't proceed as I'm stuck with this error on AutoTokenizer:
```
Traceback (most recent call last):
File "detr_config.py", line 2, in <module>
tokenizer = AutoTokenizer.from_pretrained("facebook/detr-resnet-101")
File "/home/juan/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 530, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/home/juan/miniconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 565, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.detr.configuration_detr.DetrConfig'>
```
Here's the snippet of code to reproduce the error:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/detr-resnet-101")
```
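For reference, the resolution given in the issue comments is to load DETR's feature extractor instead, since DETR is a vision model with no tokenizer:
```python
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/detr-resnet-101")
```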
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16745/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16744
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16744/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16744/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16744/events
|
https://github.com/huggingface/transformers/pull/16744
| 1,203,037,714
|
PR_kwDOCUB6oc42KbQQ
| 16,744
|
Reduce Funnel PT/TF diff
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
Same as #15684, but on the PT test side.
As mentioned in #15684, this is not a real bug in the model, just a setting in the test configuration.
## Comment
The change in `modeling_tf_funnel.py` is to address **a real issue** regarding **weight initialization**, see
https://github.com/huggingface/transformers/blob/15de7a010ddcdec0532b30d1eb6c28e7b314a6a9/src/transformers/models/funnel/configuration_funnel.py#L79-L84
and
https://github.com/huggingface/transformers/blob/15de7a010ddcdec0532b30d1eb6c28e7b314a6a9/src/transformers/models/funnel/modeling_funnel.py#L812-L814
**(But, this issue is not the cause of the large diff between PT/TF)**
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16744/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16744",
"html_url": "https://github.com/huggingface/transformers/pull/16744",
"diff_url": "https://github.com/huggingface/transformers/pull/16744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16744.patch",
"merged_at": 1649863192000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16743
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16743/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16743/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16743/events
|
https://github.com/huggingface/transformers/pull/16743
| 1,203,010,170
|
PR_kwDOCUB6oc42KVvo
| 16,743
|
[TAPEX] Update drop_rows_to_fit
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this as \"drop_rows_to_fit\" probably deserves to be a truncation strategy on its own."
] | 1,649
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR makes `drop_rows_to_fit` an attribute of `TapexTokenizer`, rather than a standalone `TruncationStrategy`.
The truncation strategies that can be used are the same as those of BART (as TAPEX is a BART model), meaning `truncation=True` will truncate to the maximum length. However, one can still randomly drop rows based on answers using this attribute.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16743/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16743",
"html_url": "https://github.com/huggingface/transformers/pull/16743",
"diff_url": "https://github.com/huggingface/transformers/pull/16743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16743.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16742
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16742/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16742/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16742/events
|
https://github.com/huggingface/transformers/issues/16742
| 1,202,804,225
|
I_kwDOCUB6oc5HsVYB
| 16,742
|
Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2Model
|
{
"login": "baoyuxu",
"id": 18047650,
"node_id": "MDQ6VXNlcjE4MDQ3NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/18047650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baoyuxu",
"html_url": "https://github.com/baoyuxu",
"followers_url": "https://api.github.com/users/baoyuxu/followers",
"following_url": "https://api.github.com/users/baoyuxu/following{/other_user}",
"gists_url": "https://api.github.com/users/baoyuxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baoyuxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baoyuxu/subscriptions",
"organizations_url": "https://api.github.com/users/baoyuxu/orgs",
"repos_url": "https://api.github.com/users/baoyuxu/repos",
"events_url": "https://api.github.com/users/baoyuxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/baoyuxu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nYes this is expected, as you can see the warning only prints \"num_batches_tracked\", these are statistics for batch norm layers, these aren't trainable parameters.",
"@NielsRogge I understand now, thank you for your reply🤗 "
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.8.2+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
@NielsRogge
## Information
Model I am using (Bert, XLNet ...): LayoutLMv2 and LayoutXLM
The problem arises when using:
* [√] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
I just tried to load the pretrained LayoutLMv2Model, and it seems like there is a weights mismatch in the visual backbone. The same happens when I try to load the LayoutXLM model.
It said: `This IS NOT expected if you are initializing LayoutLMv2Model from the checkpoint of a model that you expect to be exactly identical.`
Detectron2 installed with:
```
python -m pip install detectron2 -f \
https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
```
CODE:
```
from transformers import LayoutLMv2Model
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")
```
and
```
from transformers import LayoutLMv2Model
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")
```
OUTPUT:
```
Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2Model: ['layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.num_batches_tracked']
- This IS expected if you are initializing LayoutLMv2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LayoutLMv2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
Is this mismatch normal, or have I done something wrong?
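For context, the `num_batches_tracked` entries are batch-norm bookkeeping buffers rather than trainable parameters (as the issue comments note); a minimal sketch:
```python
import torch.nn as nn

bn = nn.BatchNorm2d(64)
print([name for name, _ in bn.named_buffers()])     # includes 'num_batches_tracked'
print([name for name, _ in bn.named_parameters()])  # only 'weight' and 'bias'
```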
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16742/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16741
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16741/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16741/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16741/events
|
https://github.com/huggingface/transformers/pull/16741
| 1,202,683,580
|
PR_kwDOCUB6oc42JWvx
| 16,741
|
[modeling_utils] better explanation of ignore keys
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
Integrating the improved explanation of ignore keys by @sgugger at https://github.com/huggingface/transformers/issues/16719#issuecomment-1096878395 with some tweaks from myself.
It's still unclear whether they should include the base model prefix or not, but we can sort it out when https://github.com/huggingface/transformers/issues/16719 gets more clarity
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16741/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16741",
"html_url": "https://github.com/huggingface/transformers/pull/16741",
"diff_url": "https://github.com/huggingface/transformers/pull/16741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16741.patch",
"merged_at": 1649862200000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16740
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16740/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16740/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16740/events
|
https://github.com/huggingface/transformers/pull/16740
| 1,202,668,867
|
PR_kwDOCUB6oc42JTvB
| 16,740
|
[trainer / deepspeed] fix hyperparameter_search
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
This PR:
- fixes the `hyperparameter_search` DeepSpeed config reset, which had gotten out of sync with the normal code path
- adds a test so that this will not happen again.
- adds a new group of pip deps: `deepspeed-testing`
@sgugger
Fixes: https://github.com/huggingface/transformers/pull/11966#issuecomment-1058493821
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16740/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16740",
"html_url": "https://github.com/huggingface/transformers/pull/16740",
"diff_url": "https://github.com/huggingface/transformers/pull/16740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16740.patch",
"merged_at": 1649982278000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16739
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16739/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16739/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16739/events
|
https://github.com/huggingface/transformers/pull/16739
| 1,202,444,414
|
PR_kwDOCUB6oc42IiHF
| 16,739
|
Replace assertion with exception
|
{
"login": "anmolsjoshi",
"id": 17307490,
"node_id": "MDQ6VXNlcjE3MzA3NDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17307490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anmolsjoshi",
"html_url": "https://github.com/anmolsjoshi",
"followers_url": "https://api.github.com/users/anmolsjoshi/followers",
"following_url": "https://api.github.com/users/anmolsjoshi/following{/other_user}",
"gists_url": "https://api.github.com/users/anmolsjoshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anmolsjoshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anmolsjoshi/subscriptions",
"organizations_url": "https://api.github.com/users/anmolsjoshi/orgs",
"repos_url": "https://api.github.com/users/anmolsjoshi/repos",
"events_url": "https://api.github.com/users/anmolsjoshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/anmolsjoshi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16739). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Replaces assert with Exceptions as per https://github.com/huggingface/transformers/issues/12789.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16739/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16739",
"html_url": "https://github.com/huggingface/transformers/pull/16739",
"diff_url": "https://github.com/huggingface/transformers/pull/16739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16739.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16738
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16738/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16738/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16738/events
|
https://github.com/huggingface/transformers/pull/16738
| 1,202,416,116
|
PR_kwDOCUB6oc42Ib2A
| 16,738
|
Add self training code for text classification
|
{
"login": "tuvuumass",
"id": 23730882,
"node_id": "MDQ6VXNlcjIzNzMwODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/23730882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuvuumass",
"html_url": "https://github.com/tuvuumass",
"followers_url": "https://api.github.com/users/tuvuumass/followers",
"following_url": "https://api.github.com/users/tuvuumass/following{/other_user}",
"gists_url": "https://api.github.com/users/tuvuumass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuvuumass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuvuumass/subscriptions",
"organizations_url": "https://api.github.com/users/tuvuumass/orgs",
"repos_url": "https://api.github.com/users/tuvuumass/repos",
"events_url": "https://api.github.com/users/tuvuumass/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuvuumass/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Very nice, thanks a lot for adding this new example! Just to be sure, the empty strata file is intended? I didn't get why it's there.\r\n\r\nGood catch, @sgugger. Just removed the empty strata file. Thanks!"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
This is an implementation of the self-training algorithm (without task augmentation) for classification tasks proposed in the [EMNLP 2021](https://2021.emnlp.org/) paper: [STraTA: Self-Training with Task Augmentation for Better Few-shot Learning](https://arxiv.org/abs/2109.06270). For the original codebase, please check out https://github.com/google-research/google-research/tree/master/STraTA. Note that this code can be used as a tool for automatic data labeling.
The pull request includes a README.md file with detailed instructions on how to set up a virtual environment and install necessary packages. It also includes a demo `run.sh` on how to perform self-training with a BERT Base model on the SciTail science entailment dataset using 8 labeled examples per class.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16738/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16738",
"html_url": "https://github.com/huggingface/transformers/pull/16738",
"diff_url": "https://github.com/huggingface/transformers/pull/16738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16738.patch",
"merged_at": 1649865804000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16737
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16737/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16737/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16737/events
|
https://github.com/huggingface/transformers/pull/16737
| 1,202,279,551
|
PR_kwDOCUB6oc42H-74
| 16,737
|
Fix the Conda package build
|
{
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I saw you have worked on the Conda packaging of this repo before. Can you look into it? IMHO this PR doesn't take a lot of time to review.",
"@LysandreJik friendly ping!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik friendly ping :)",
"Sorry, just got to it! I managed to get the build to run correctly by just adding the `tokenizers` line :tada: Let me know if you'd like for me to send logs.\r\n\r\nAre you sure we need the rest? I'd be happy to merge your PR with just the tokenizers changes :)",
"Yeah. I explained why in the PR description",
"Would you like me to share the logs I got locally showing only the `tokenizers` change was necessary?",
"> Would you like me to share the logs I got locally showing only the `tokenizers` change was necessary?\r\n\r\nI left only the `tokenizers` change now.",
"The failing tests are flaky, right?"
] | 1,649
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
I saw that [the Conda builds are failing since v4.12](https://github.com/huggingface/transformers/actions/workflows/release-conda.yml). The main problem is that, for some reason, [the build tries to install `setuptools` but Conda build forbids it](https://github.com/huggingface/transformers/runs/5853294684?check_suite_focus=true#step:6:2172). I found [an answer in StackOverflow](https://stackoverflow.com/a/64825075/1165181) that shows it can be fixed by adding the flags ` --single-version-externally-managed --record=record.txt` to the `python setup.py install` command in the `build.sh` file (note the `--record` flag is also needed, otherwise the command fails, stating so).
I also updated the tokenizers version specification, which had apparently not been updated in this file.
I added `conda-verify`, which `conda build` uses for some sanity checks.
Finally, I changed `conda-build` to `conda build`, which seems to be the way to use this command.
It'd be good if somebody can check this on their end, to double-check it's working fine:
```bash
conda create -n build-transformers -c huggingface python=3.8 anaconda-client conda-build conda-verify
conda activate build-transformers
TRANSFORMERS_VERSION=$(python setup.py --version) conda build .github/conda
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16737/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16737",
"html_url": "https://github.com/huggingface/transformers/pull/16737",
"diff_url": "https://github.com/huggingface/transformers/pull/16737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16737.patch",
"merged_at": 1656496997000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16736
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16736/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16736/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16736/events
|
https://github.com/huggingface/transformers/issues/16736
| 1,202,222,688
|
I_kwDOCUB6oc5HqHZg
| 16,736
|
[Flax] Torch fp16 model weights not upcast when loaded in Flax
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patrickvonplaten @patil-suraj ",
"Great catch! Gosh, we should have never uploaded `bart-large` with fp16 weights - I think this happened accidentally and a long time ago :-/ Usually we want all weights to be stored as full fp32 weights. \r\n\r\nTo be honest, for now I think this is really an edge-case - I don't know any model besides bart that has its weights uploaded in fp16, so I think we could do three things here:\r\n\r\n- 1. Don't do anything - it's an edge case\r\n- 2. Make `from_pretrained(...)` error out\r\n- 3. Automatically convert to fp32\r\n\r\nI strongly advocate fro 1. or 2. here. \r\n\r\nI'll upload the original weights of `bart-large` in full fp32 probably in a separate repo now.\r\n\r\nWhat do you think ? @patil-suraj @sanchit-gandhi \r\n\r\n",
"Might be related to https://github.com/huggingface/transformers/issues/15559",
"If this is solely an issue concerning `bart-large` and this truly is an edge-case, then 1 or 2 seem reasonable. 3 could cause some serious ramifications for instances where the fp16 model is currently used (e.g. new OOM's with training). In 2, would the error out completely prohibit the user from loading weights in fp16, or just provide them with a warning and the advice to upcast the weights/load from fp32?",
"In 2. I'd completely error out and state that the two checkpoints have different precision and can't be combined",
"My worry with a complete error out would be that it prevents the user from ever being able to load the model, even if they have the intent of upcasting/correcting for the dtype. My suggestion would be to add a warning to the [`from_pretrained`](https://github.com/huggingface/transformers/blob/b24201fa44e1a14e83be890dcbc231e926c37ec1/src/transformers/modeling_flax_utils.py#L298) method in [`modeling_flax_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py) if the Flax model weights are loaded in a dtype other than fp32. This could then be proceeded by the advice that the user should upcast to fp32 using the provided method `to_fp32`. By displaying a warning instead of an error out, the user is still able to load the model and then subsequently rectify any dtype mismatch.",
"Guess adding a warning and no automatic upcasting is fine as well! Just not in favor of automatic upcasting :-)",
"Agree with @sanchit-gandhi here. I'm in favour of adding a warning and letting the user know that weights are not `fp32`.",
"The user warning for the Flax `.from_pretrained` method was implemented in #16762. As an extreme edge case and following an extensive offline discussion, it was decided that the fp16 PyTorch weights for [bart-large](https://huggingface.co/facebook/bart-large) will remain as is. The original checkpoint has been reconverted and uploaded it in fp32 to another repo for those wishing to explicitly use full-precision weights: https://huggingface.co/patrickvonplaten/bart-large-fp32 Note that the fp16 weights should not be an issue for any PyTorch models: the PyTorch `.from_pretrained` method automatically upcasts model weights to fp32.\r\n"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
In some scenarios, one may want to load a Flax model directly from pre-trained PyTorch model weights. In this process, the original dtype of the PyTorch model weights is maintained when loaded into Flax. For models such as [bart-large](https://huggingface.co/facebook/bart-large), which has its PyTorch weights stored in fp16 on the Hub, this can result in a Flax model with weights in an undesirable dtype. This is highlighted by the following code snippet, which first loads a FlaxSpeechEncoderDecoderModel from entirely fp32 PyTorch weights, and then again from fp32 encoder weights and fp16 decoder weights:
```python
from transformers import FlaxSpeechEncoderDecoderModel
# fp32 PyTorch weights
encoder_id = 'hf-internal-testing/tiny-random-wav2vec2'
decoder_id = 'hf-internal-testing/tiny-random-bart'
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True)
print("-----------From fp32 PyTorch weights-----------")
print(f"Encoder dtype: {model.params['encoder']['masked_spec_embed'].dtype}")
print(f"Decoder dtype: {model.params['decoder']['model']['decoder']['embed_tokens']['embedding'].dtype}")
# same decoder as previously, but with weights downcasted to fp16
decoder_id = 'sanchit-gandhi/tiny-random-bart-fp16'
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True)
print("---------From fp32/fp16 PyTorch weights---------")
print(f"Encoder dtype: {model.params['encoder']['masked_spec_embed'].dtype}")
print(f"Decoder dtype: {model.params['decoder']['model']['decoder']['embed_tokens']['embedding'].dtype}")
```
Output:
```
-----------From fp32 PyTorch weights-----------
Encoder dtype: float32
Decoder dtype: float32
---------From fp32/fp16 PyTorch weights---------
Encoder dtype: float32
Decoder dtype: float16
```
Having a model stored in two different dtypes raises issues with training - Optax optimisers expect the model to maintain one uniform dtype. Furthermore, the default assumption is that all Flax model weights are in fp32.
This weight conversion is handled by the general conversion script: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py. Would it be wise to inform the user of the potentially erroneous model dtype in this scenario? If informed, they could then call the `to_fp32` method from `modeling_flax_utils` to upcast the weights to fp32:
https://github.com/huggingface/transformers/blob/a9604067225219e132abdff2793f78ead798453b/src/transformers/modeling_flax_utils.py#L231
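For reference, a minimal sketch of that upcasting step, reusing the checkpoints and the param path from the snippet above:
```python
from transformers import FlaxSpeechEncoderDecoderModel

encoder_id = 'hf-internal-testing/tiny-random-wav2vec2'
decoder_id = 'sanchit-gandhi/tiny-random-bart-fp16'  # fp16 PyTorch decoder weights

model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True
)
# upcast every leaf of the param tree to fp32 with the method linked above
model.params = model.to_fp32(model.params)
print(model.params['decoder']['model']['decoder']['embed_tokens']['embedding'].dtype)  # float32
```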
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16736/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16735
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16735/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16735/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16735/events
|
https://github.com/huggingface/transformers/issues/16735
| 1,202,211,812
|
I_kwDOCUB6oc5HqEvk
| 16,735
|
[PegasusConfig] wrong default vocab_size
|
{
"login": "yaozhaogoogle",
"id": 60202961,
"node_id": "MDQ6VXNlcjYwMjAyOTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/60202961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaozhaogoogle",
"html_url": "https://github.com/yaozhaogoogle",
"followers_url": "https://api.github.com/users/yaozhaogoogle/followers",
"following_url": "https://api.github.com/users/yaozhaogoogle/following{/other_user}",
"gists_url": "https://api.github.com/users/yaozhaogoogle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaozhaogoogle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaozhaogoogle/subscriptions",
"organizations_url": "https://api.github.com/users/yaozhaogoogle/orgs",
"repos_url": "https://api.github.com/users/yaozhaogoogle/repos",
"events_url": "https://api.github.com/users/yaozhaogoogle/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaozhaogoogle/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @yaozhaogoogle,\r\n\r\nThanks for the issue, could you maybe link to the original configuration that shows a default vocab size of 96000?",
"from the github https://github.com/google-research/pegasus , there is a link to the checkpoints and vocabs, https://pantheon.corp.google.com/storage/browser/pegasus_ckpt . They are all using a single vocab size of 96k",
"Thanks for the link @yaozhaogoogle, \r\n\r\nNote that in the configuration we just provide a default value that could be used when initializing Pegagus from scratch. If one loads a pretrained checkpoint the vocab size is overwritten by the value defined in the config on the HF Hub. \r\n\r\nE.g. this Pegagus checkpoint: https://huggingface.co/google/pegasus-large/blob/main/config.json#L122 has a vocab size of 96000 which would be used when doing:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/pegasus-large\")\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/pegasus-large\")\r\n```",
"Thanks for the explanation!"
] | 1,649
| 1,650
| 1,650
|
NONE
| null |
In PegasusConfig (https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/pegasus/configuration_pegasus.py#L108), the default vocab size should be 96000 instead of 50265.
@patrickvonplaten
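Following the maintainer's explanation in the comments above, a minimal sketch showing that the config default only matters when initialising a model from scratch (loading a pretrained checkpoint overrides it with the value from the Hub config):
```python
from transformers import PegasusConfig, PegasusForConditionalGeneration

# the library default is vocab_size=50265; override it explicitly when
# initialising from scratch with the original Pegasus vocab size
config = PegasusConfig(vocab_size=96000)
model = PegasusForConditionalGeneration(config)
print(model.config.vocab_size)  # 96000
```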
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16735/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16734
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16734/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16734/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16734/events
|
https://github.com/huggingface/transformers/pull/16734
| 1,202,153,166
|
PR_kwDOCUB6oc42HjiL
| 16,734
|
Partial checkpoint support for SMP
|
{
"login": "cavdard",
"id": 44590949,
"node_id": "MDQ6VXNlcjQ0NTkwOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/44590949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cavdard",
"html_url": "https://github.com/cavdard",
"followers_url": "https://api.github.com/users/cavdard/followers",
"following_url": "https://api.github.com/users/cavdard/following{/other_user}",
"gists_url": "https://api.github.com/users/cavdard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cavdard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cavdard/subscriptions",
"organizations_url": "https://api.github.com/users/cavdard/orgs",
"repos_url": "https://api.github.com/users/cavdard/repos",
"events_url": "https://api.github.com/users/cavdard/events{/privacy}",
"received_events_url": "https://api.github.com/users/cavdard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16734). All of your documentation changes will be reflected on that endpoint.",
"@cavdard could you please run `make style` to apply the correct coding formatting? ",
"> @cavdard could you please run `make style` to apply the correct coding formatting?\r\n\r\nUpdate: Resolved by running `pip install -e .[quality]`\r\n\r\n@philschmid Having this error. Am I missing a step?\r\n```\r\nmake style\r\nblack examples tests src utils\r\nmake: black: No such file or directory\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
- Adds 3 new training args: `smp_save_partial` and `smp_load_partial` to support partial checkpointing with SMP, and `smp_tensor_parallel_full_model` to apply tensor parallelism to the whole model.
- Uses the right ranks for partial checkpoint saving in `should_save`.
- Uses `local_state_dict()` with partial checkpoint saving.
- Uses `smp.save` instead of `torch.save` when partial checkpoint saving is enabled (a sketch of this flow follows the list).
- Uses `smp.load` instead of `torch.load` when partial checkpoint loading is enabled. Reorders partial checkpoint loading to happen after the model is wrapped, since `smp.load` can only load into an SMP model.
- Updates checks for the existence of checkpoint files, since SMP partial checkpoints append suffixes to the filename (example: `filename_0_0` or `filename_0_0_0`).
- Skips checkpoint sharding when SMP is enabled.
- `smp_gather` was causing increased memory usage on GPU0 when tensor parallelism is enabled; switches to `distributed_concat` for DDP.
- Adds `load_best_model_at_end` support for SMP.
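A hedged sketch of the save/load flow described above, assuming the SageMaker model parallelism library (`smdistributed.modelparallel.torch`); the helper names and gating are illustrative, not the Trainer's actual code:
```python
import smdistributed.modelparallel.torch as smp

def save_checkpoint(model, path, save_partial):
    if save_partial:
        # each rank saves only its own partition; smp appends rank suffixes
        # such as `_0_0`, so existence checks must match by filename prefix
        smp.save(model.local_state_dict(), path, partial=True)
    else:
        smp.save(model.state_dict(), path, partial=False)

def load_checkpoint(model, path, load_partial):
    # smp.load can only load into an smp-wrapped model, hence partial
    # loading must happen after the model has been wrapped
    state_dict = smp.load(path, partial=load_partial)
    model.load_state_dict(state_dict)
```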
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16734/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16734",
"html_url": "https://github.com/huggingface/transformers/pull/16734",
"diff_url": "https://github.com/huggingface/transformers/pull/16734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16734.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16733
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16733/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16733/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16733/events
|
https://github.com/huggingface/transformers/issues/16733
| 1,202,083,988
|
I_kwDOCUB6oc5HpliU
| 16,733
|
[FlaxBartForCausalLM] Embed tokens not loaded in Flax decoder model from encoder-decoder weights
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"In the PyTorch Bart modelling script, we first define a 'shared' nn.Embedding module, which we then directly pass to the encoder and decoder modules to explicitly tie their word embeddings:\r\nhttps://github.com/huggingface/transformers/blob/14daa6102a0e8a35ef734dd21bfcf31d9b0207d1/src/transformers/models/bart/modeling_bart.py#L1146-L1149\r\nDue to the stateful nature of PyTorch modules, we can then overwrite this embedding in the `init` method of the encoder or decoder, depending on whether or not the optional keyword argument `embed_tokens` is specified:\r\nhttps://github.com/huggingface/transformers/blob/cc034f72eb6137f4c550e911fba67f8a0e1e98fa/src/transformers/models/bart/modeling_bart.py#L710-L713\r\n(Note that for decoder-only models, we do not specify the argument `embed_tokens` for the decoder module. Thus, it defaults to being initialised in the decoder module's `init`). For the encoder-decoder model, there are three instances in which the embeddings are defined: as `shared` under the BartModel, and again as `embed_tokens` in the encoder and decoder models. This yields the following parameter tree:\r\n```\r\nPT enc-dec model\r\n shared\r\n encoder\r\n embed_tokens\r\n ...\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\nLikewise, in the Flax Bart modelling script, we first define a 'shared' nn.Embed module, which we then directly pass to the encoder and decoder modules to explicitly tie their word embeddings:\r\nhttps://github.com/huggingface/transformers/blob/cc034f72eb6137f4c550e911fba67f8a0e1e98fa/src/transformers/models/bart/modeling_flax_bart.py#L839-L847\r\nHowever, due to the stateless nature of JAX/Flax models, we cannot then overwrite this embedding in the `setup` method of the encoder or decoder. To address this, it was decided in #15920 that the keyword argument `embed_tokens` must always be specified to the encoder/decoder modules. Thus, there is only one instance in which the embeddings are defined: as `shared` under the FlaxBartModel. This results in different parameter tree to that in PyTorch:\r\n```\r\nFX enc-dec model\r\n shared\r\n encoder\r\n ...\r\n decoder\r\n ...\r\n```\r\nFor encoder-decoder models, PyTorch to Flax conversion is possible: the Flax encoder-decoder model is able to leverage the PyTorch `shared` embedding weights, and then pass these into the encoder and decoder separately (effectively tying the weights, but only having one variable)
.\r\n\r\nHowever, an issue arises for decoder only models. Here, the Flax decoder cannot leverage all of the Flax encoder-decoder model weights. This is due to the format of its parameter tree, which is constructed jointly through the [FlaxBartDecoderWrapper](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_flax_bart.py#L1863) and [FlaxBartForCausalLM](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_flax_bart.py#L1885) module:\r\n```\r\nFX dec-only model\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\n Since there is no module `shared` in Flax decoder only models, the system is not able to leverage the embedding weights registered under `shared` from the Flax encoder-decoder model weights. However, `embed_tokens` is now defined under the decoder module, meaning that we are able to leverage PyTorch encoder-decoder or decoder-only model weights and load them into Flax:\r\n```\r\nPT dec-only model\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\n\r\nPotential solutions:\r\n- In the [FlaxBartDecoderWrapper](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_flax_bart.py#L1863), we can rename `embed_tokens` to `self.shared`, thus bringing the param trees of the Flax encoder-decoder and decoder-only models into alignment. Doing so enables the decoder only embeddings to be loaded from Flax encoder-decoder model weights. However, this is not an ideal solution: by renaming the module, we will no longer be able to load Flax decoder only model weights from PyTorch (encoder-)decoder weights, as these parameter trees will not match.\r\n- We could define a `self.shared` nn.Embedding module in the PyTorch [DecoderWrapper](https://github.com/huggingface/transformers/blob/a192f61e0825150e54e15fdc451cf37e23532b3f/src/transformers/models/bart/modeling_bart.py#L1680) and then pass this into the decoder model. This maintains consistency between the encoder-decoder style models and the decoder-only ones. (Define `shared` outside the modules, then pass it in, thus registering a state-dict of `(shared), (decoder, embed_tokens)` instead of just `(decoder, embed_tokens)`). However, this is a breaking change for PyTorch Bart models, and should be avoided.\r\n- What is probably easier and more effective than both of the above is explicit naming of the `embed_tokens` module in the Flax Bart encoder and decoder modules, giving a parameter tree that exactly matches the PyTorch one, both for encoder-decoder and decoder only models.\r\n```\r\nEnc-dec model\r\n shared\r\n encoder\r\n embed_tokens\r\n ...\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\n```\r\nDec-only model\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\nIt is not apparent from the Flax docs how this explicitly naming can be achieved, but I have asked the Flax community on the Flax discussions page how to go about doing this: https://github.com/google/flax/discussions/2046#discussion-4004536",
"cc @patrickvonplaten @patil-suraj ",
"As a temporary fix, a standalone Flax decoder model can be loaded entirely from it's equivalent PyTorch weights and the `embed_tokens` made to match:\r\n\r\n```python\r\nfrom transformers import BartForCausalLM, FlaxBartForCausalLM\r\nimport tempfile\r\nfrom flax.traverse_util import flatten_dict\r\n\r\npt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\n# force Flax weights to be loaded from PyTorch - enables `embed_tokens` to be loaded correctly\r\nfx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart', from_pt=True)\r\n\r\n# Convert the PT model to FX \r\nwith tempfile.TemporaryDirectory() as tmpdirname:\r\n pt_dec_model.save_pretrained(tmpdirname)\r\n pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True)\r\n \r\n# easier to work in terms of flattened dicts\r\npt_dec_params_to_fx = flatten_dict(pt_dec_model_to_fx.params)\r\nfx_dec_params = flatten_dict(fx_dec_model.params)\r\n\r\n# Check that all keys match\r\nassert fx_dec_params.keys() == pt_dec_params_to_fx.keys()\r\n\r\n# Check that all the weights are **precisely** equal\r\nfor param in pt_dec_params_to_fx:\r\n assert (fx_dec_params[param] == pt_dec_params_to_fx[param]).all(), param\r\n```",
"Hmm isn't the problem here that the weights are not correctly mapped? E.g. when I run the first part of your codesnippet:\r\n\r\n```py\r\nfrom transformers import BartForCausalLM, FlaxBartForCausalLM\r\nimport tempfile\r\nfrom flax.traverse_util import flatten_dict\r\n\r\npt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\nfx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\n\r\n# Convert the PT model to FX \r\nwith tempfile.TemporaryDirectory() as tmpdirname:\r\n pt_dec_model.save_pretrained(tmpdirname)\r\n pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True)\r\n```\r\n\r\nIt says that:\r\n\r\n```\r\n...\r\nSome weights of FlaxBartForCausalLM were not initialized from the model checkpoint at sanchit-gandhi/tiny-random-bart and are newly initialized: {('model', 'decoder', 'embed_tokens', 'embedding')}\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nSome weights of the model checkpoint at /tmp/tmpqaqqrtft were not used when initializing FlaxBartForCausalLM: {('lm_head', 'kernel')}\r\n- This IS expected if you are initializing FlaxBartForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing FlaxBartForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nwhich shows that those weights are not correctly loaded into the Flax model, so IMO the bug is that the Flax model doesn't correctly convert the naming here. \r\n\r\nAlso note that 99% of all Bart models have their output embeddings tied to their input embeddings, see: https://github.com/huggingface/transformers/blob/eb5bdcdfa51f743887ee1d9c7f230444d7a8b23c/src/transformers/models/bart/modeling_flax_bart.py#L1929 in which case the `lm_head` weights are irrelevant. But agree that there is an error nevertheless.\r\n\r\nThe solution should be to correct the naming / weight conversion here though IMO",
"The initialisation of parameters works slightly differently between PyTorch and Flax. In PyTorch, any module defined under a model's `init` will be added to the model's state-dict. In Flax, modules are first defined in the `setup` method, but are only added to the param dict if traced in the `call` method when the dummy forward pass is performed during model initialisation. \r\n\r\nWith that being said, the `lm_head` is always added to the state-dict in PyTorch, whether or not the word embeddings are tied. However, this is not the case in Flax - the `lm_head` is only added to the param dict _if_ used in the `call` method. Inspecting the model code, we see this is only the case if the word embeddings are not tied:\r\nhttps://github.com/huggingface/transformers/blob/eb5bdcdfa51f743887ee1d9c7f230444d7a8b23c/src/transformers/models/bart/modeling_flax_bart.py#L1927-L1931\r\n\r\nIf we look back to the code snippet for the loading of the decoder-only PyTorch and Flax models, we can confirm that the word embeddings are tied, and that the `lm_head` is instantiated for the PyTorch model and not the Flax one (as expected):\r\n```python\r\nfrom transformers import BartForCausalLM, FlaxBartForCausalLM\r\n\r\npt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\nfx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\n\r\nprint(f\"Tie word embeddings? PyTorch: {pt_dec_model.config.tie_word_embeddings}, Flax: {fx_dec_model.config.tie_word_embeddings}\")\r\nprint(f\"PyTorch Decoder modules: {[n for n, _ in pt_dec_model.named_children()]}\")\r\nprint(f\"Flax Decoder modules: {fx_dec_model.params.keys()}\")\r\n```\r\nOutput:\r\n```\r\n...\r\nSome weights of FlaxBartForCausalLM were not initialized from the model checkpoint at sanchit-gandhi/tiny-random-bart \r\nand are newly initialized: {('model', 'decoder', 'embed_tokens', 'embedding')}\r\n...\r\nTie word embeddings? PyTorch: True, Flax: True\r\nPyTorch Decoder modules: ['model', 'lm_head']\r\nFlax Decoder modules: dict_keys(['model'])\r\n```\r\nWhen loading from pre-trained Flax weights, we see that the only parameters randomly initialised are the `embed_tokens`.\r\n\r\nWe perform a slightly different operation when comparing the PyTorch weights to those in Flax - we first save the PyTorch model to a temporary directory (`.save_pretrained(tmpdirname)`) and then load this model from it's PyTorch weights into Flax: \r\n```python\r\n# Convert the PT model to FX \r\nwith tempfile.TemporaryDirectory() as tmpdirname:\r\n pt_dec_model.save_pretrained(tmpdirname)\r\n pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True)\r\n```\r\nHere, the PyTorch weights for the `lm_head` are saved. However, since the `lm_head` is not used in the `call` method of the Flax model, they are subsequently not used when loading the PyTorch model into Flax. Thus, we expect to see the aforementioned message:\r\n```\r\nSome weights of the model checkpoint at /tmp/tmpqaqqrtft were not used when initializing FlaxBartForCausalLM: {('lm_head', 'kernel')}\r\n```\r\n\r\nWhen we run the full code snippet, we see that the only weights that do not match between PyTorch and Flax decoder-only models are the `embed_tokens` - the `lm_head` is not used in Flax and so is ignored from this comparison. 
This is the core issue, which arises due to a different parameter structure between the Flax encoder-decoder models and the Flax decoder-only models.\r\n\r\nFor Flax encoder-decoder, the tied word embeddings are held under the module `shared`, which explicitly ties the word embedding tokens for the encoder and decoder:\r\n```\r\nFX enc-dec model\r\n shared\r\n encoder\r\n ...\r\n decoder\r\n ...\r\n```\r\nFor Flax decoder-only models, we do not have the module `shared`, giving the modified parameter tree:\r\n```\r\nFX dec-only model\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\nThe reason we omit the `shared` module is to give one-to-one equivalence to the corresponding PyTorch decoder-only state-dict:\r\n```\r\nPT dec-only model\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\nTo remedy this issue, we have three choices:\r\n1. We can either insert a module named `shared` for the Flax decoder-only, and enable it to be compatible with Flax encoder-decoder models:\r\n```\r\nFX dec-only model\r\n shared\r\n decoder\r\n ...\r\n```\r\nHowever, this would then break equivalence between PT and FX decoder-only models, the parameter trees now differing.\r\n2. We keep the current structure and allow for PT and FX decoder-only model equivalence.\r\n3. We explicitly add `embed_tokens` as a named module under the FX encoder-decoder model:\r\n```\r\nFX enc-dec model\r\n shared\r\n encoder\r\n embed_tokens\r\n ...\r\n decoder\r\n embed_tokens\r\n ...\r\n```\r\nWhich enables PT - FX equivalence for both encoder-decoder and decoder-only models.\r\n\r\nOf the three, the latter is my preference, as it allows for full compatibility between the different frameworks.",
"The full code snippet that examines the `tie_word_embeddings` variable as well as the parameter weights:\r\n```python\r\nfrom transformers import BartForCausalLM, FlaxBartForCausalLM\r\n\r\npt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\nfx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')\r\n\r\nprint(f\"Tie word embeddings? PyTorch: {pt_dec_model.config.tie_word_embeddings}, Flax: {fx_dec_model.config.tie_word_embeddings}\")\r\nprint(f\"PyTorch Decoder modules: {[n for n, _ in pt_dec_model.named_children()]}\")\r\nprint(f\"Flax Decoder modules: {fx_dec_model.params.keys()}\")\r\n\r\n# Convert the PT model to FX \r\nwith tempfile.TemporaryDirectory() as tmpdirname:\r\n pt_dec_model.save_pretrained(tmpdirname)\r\n pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True)\r\n \r\n# easier to work in terms of flattened dicts\r\npt_dec_params_to_fx = flatten_dict(pt_dec_model_to_fx.params)\r\nfx_dec_params = flatten_dict(fx_dec_model.params)\r\n\r\n# Check that all keys match\r\nassert fx_dec_params.keys() == pt_dec_params_to_fx.keys()\r\n\r\n# Check that all the weights are **precisely** equal\r\nmismatch_params = []\r\nprint(\"Checking weights match...\")\r\nfor param in pt_dec_params_to_fx:\r\n if (fx_dec_params[param] != pt_dec_params_to_fx[param]).all():\r\n mismatch_params.append(param)\r\nif len(mismatch_params) == 0:\r\n print(\"✅ All PyTorch and Flax parameters match\")\r\nelse:\r\n print(\"❌ The following weights do not match:\")\r\n for param in mismatch_params:\r\n print(param)\r\n```\r\nOutput:\r\n```\r\nTie word embeddings? PyTorch: True, Flax: True\r\nPyTorch Decoder modules: ['model', 'lm_head']\r\nFlax Decoder modules: dict_keys(['model'])\r\nChecking weights match...\r\n❌ The following weights do not match:\r\n('model', 'decoder', 'embed_tokens', 'embedding')\r\n```",
"cc @patil-suraj here since he added `bart-large`",
"I'm fine with whatever solution as this is really an edge case - however we should not break backward compatibility here especially with respect to the weights structure. Also we should **not** touch the PyTorch Bart code",
"Agree that we should not change the PyTorch modelling code! My preference is modifying the Flax encoder-decoder param dict to explicitly include `embed_tokens` under the encoder and decoder modules (as with the PyTorch models and the Flax decoder-only models) which will bring compatibility between all four models (PyTorch encoder-decoder, Flax encoder-decoder, PyTorch decoder-only and Flax decoder-only)",
"Ok for me! @patil-suraj what do you think?",
"I'm fine with modifying the param dict of `bart` here, since flax doesn't add those `embed_tokens` weights under `encoder` and `decoder` if they are initialised outside and shared. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"From the discussion with the Flax authors at https://github.com/google/flax/discussions/2046#discussion-4004536, the best option appears to be handling this in Flax weights loading script.",
"Generally, not very keen on changing the general Flax weight conversion script because of only a single model. But happy to iterate over the design in a PR. @sanchit-gandhi, could you maybe open a PR to show how you would like to solve the problem and then we take it from there? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
The embeddings module `embed_tokens` is not loaded from pre-trained Flax model weights when a FlaxBartForCausalLM model is instantiated in isolation. As a consequence, these embedding weights are randomly initialised. The following code snippet demonstrates this fact by comparing the FlaxBartForCausalLM model to its PyTorch equivalent, BartForCausalLM. For the PyTorch (resp. Flax) model, the weights are loaded from pre-trained PyTorch (resp. Flax) weights at https://huggingface.co/sanchit-gandhi/tiny-random-bart. These model weights are identical to those at https://huggingface.co/hf-internal-testing/tiny-random-bart, with the exception that the sanchit-gandhi repository contains both Flax and PyTorch weights, whereas the hf-internal-testing repository contains only PyTorch weights.
```python
from transformers import BartForCausalLM, FlaxBartForCausalLM
import tempfile
from flax.traverse_util import flatten_dict
pt_dec_model = BartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')
fx_dec_model = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')
# Convert the PT model to FX
with tempfile.TemporaryDirectory() as tmpdirname:
pt_dec_model.save_pretrained(tmpdirname)
pt_dec_model_to_fx = FlaxBartForCausalLM.from_pretrained(tmpdirname, from_pt=True)
# easier to work in terms of flattened dicts
pt_dec_params_to_fx = flatten_dict(pt_dec_model_to_fx.params)
fx_dec_params = flatten_dict(fx_dec_model.params)
# Check that all keys match
assert fx_dec_params.keys() == pt_dec_params_to_fx.keys()
# Check that all the weights are **precisely** equal
for param in pt_dec_params_to_fx:
assert (fx_dec_params[param] == pt_dec_params_to_fx[param]).all(), param
```
Output:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Input In [211], in <cell line: 21>()
20 # Check that all the weights are **precisely** equal
21 for param in pt_dec_params_to_fx:
---> 22 assert (fx_dec_params[param] == pt_dec_params_to_fx[param]).all(), param
AssertionError: ('model', 'decoder', 'embed_tokens', 'embedding')
```
We see here that the embedding weights do not match for the standalone decoder models: the `embed_tokens` are not loaded from pre-trained in Flax, and are instead randomly initialised.
Loading full encoder-decoder models, we see that the weights match for the embeddings:
```python
from transformers import BartModel, FlaxBartModel
import tempfile
from flax.traverse_util import flatten_dict
pt_model = BartModel.from_pretrained('sanchit-gandhi/tiny-random-bart')
fx_model = FlaxBartModel.from_pretrained('sanchit-gandhi/tiny-random-bart')
# Convert the PT model to FX
with tempfile.TemporaryDirectory() as tmpdirname:
pt_model.save_pretrained(tmpdirname)
pt_model_to_fx = FlaxBartModel.from_pretrained(tmpdirname, from_pt=True)
# easier to work in terms of flattened dicts
pt_params_to_fx = flatten_dict(pt_model_to_fx.params)
fx_params = flatten_dict(fx_model.params)
# Check that all keys match
assert fx_params.keys() == pt_params_to_fx.keys()
# Check that all the weights are **precisely** equal
for param in pt_params_to_fx:
assert (fx_params[param] == pt_params_to_fx[param]).all(), param
```
A fix is needed to be able to load the Flax encoder-decoder embedding weights into a standalone decoder module.
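Until such a fix lands, a hedged workaround sketch (not the library's conversion logic): copy the encoder-decoder model's `shared` embedding into the standalone decoder's randomly initialised `embed_tokens`, using the param paths reported in the traceback above:
```python
from transformers import FlaxBartModel, FlaxBartForCausalLM

enc_dec = FlaxBartModel.from_pretrained('sanchit-gandhi/tiny-random-bart')
dec_only = FlaxBartForCausalLM.from_pretrained('sanchit-gandhi/tiny-random-bart')

# graft the shared embedding over the randomly initialised decoder embedding
params = dec_only.params
params['model']['decoder']['embed_tokens']['embedding'] = enc_dec.params['shared']['embedding']
dec_only.params = params
```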
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16733/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16732
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16732/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16732/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16732/events
|
https://github.com/huggingface/transformers/pull/16732
| 1,202,030,086
|
PR_kwDOCUB6oc42HIx8
| 16,732
|
Remove duplicate header in doc
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
<s>`doc-builder` doesn't accept duplicated headers anymore, so this PR should fix the doc build. Will merge as soon as the Build Doc PR job is green to fix the main branch :-)</s>
The change has been reverted on the `doc-builder` side, but this was a mistake worth fixing anyway.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16732/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16732",
"html_url": "https://github.com/huggingface/transformers/pull/16732",
"diff_url": "https://github.com/huggingface/transformers/pull/16732.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16732.patch",
"merged_at": 1649781433000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16731
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16731/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16731/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16731/events
|
https://github.com/huggingface/transformers/pull/16731
| 1,202,027,732
|
PR_kwDOCUB6oc42HIRp
| 16,731
|
Improve test_pt_tf_model_equivalence on PT side
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merged (after a rebase)"
] | 1,649
| 1,650
| 1,650
|
COLLABORATOR
| null |
# What does this PR do?
Same as in #16557, but on PT test side.
Now we only have 2 `def test_pt_tf_model_equivalence` in the common test files 💯
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16731/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16731",
"html_url": "https://github.com/huggingface/transformers/pull/16731",
"diff_url": "https://github.com/huggingface/transformers/pull/16731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16731.patch",
"merged_at": 1650395608000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16730
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16730/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16730/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16730/events
|
https://github.com/huggingface/transformers/pull/16730
| 1,201,950,098
|
PR_kwDOCUB6oc42G36I
| 16,730
|
Change the chunk_iter function to handle
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Very nice - thanks for fixing it!",
"@sgugger - I think the build doc failing test is unrelated here no?",
"Yes, will look into that."
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR changes the `chunk_iter` function to handle the subtle case where the last chunk gets ignored because all of its data is in the `left_strided` data. We need to remove the right striding on the previous item.
Fixes https://github.com/huggingface/transformers/issues/16671
@LysandreJik @patrickvonplaten
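A hedged sketch of the bug class being fixed; the names and logic below are illustrative, not the pipeline's actual `chunk_iter`. When striding, the final chunk can consist entirely of overlap data, so the right stride must be dropped on the final item to keep the tail:
```python
# illustrative chunking with overlap; names are not the pipeline's API
def chunk_iter(seq, chunk_len, stride_left, stride_right):
    step = chunk_len - stride_left - stride_right
    for start in range(0, len(seq), step):
        chunk = seq[start : start + chunk_len]
        is_last = start + chunk_len >= len(seq)
        # drop the right stride on the final item so the tail is not ignored
        right = 0 if is_last else stride_right
        yield chunk, (stride_left if start > 0 else 0, right)
        if is_last:
            break

for chunk, strides in chunk_iter(list(range(10)), chunk_len=6, stride_left=2, stride_right=2):
    print(chunk, strides)
```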
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16730/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16730",
"html_url": "https://github.com/huggingface/transformers/pull/16730",
"diff_url": "https://github.com/huggingface/transformers/pull/16730.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16730.patch",
"merged_at": 1649780702000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16729
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16729/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16729/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16729/events
|
https://github.com/huggingface/transformers/pull/16729
| 1,201,934,829
|
PR_kwDOCUB6oc42G0ok
| 16,729
|
TF: remove set_tensor_by_indices_to_value
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger The last diff belongs to @patrickvonplaten, but because the function was moved (from [here](https://github.com/huggingface/transformers/blame/a3dbbc346763c8eaa49577a448e5b5a2da1428ed/src/transformers/generation_tf_utils.py#L1631) (link from the commit immediatly before it was moved)). Last time it was touched was 2 years ago 😅 "
] | 1,649
| 1,649
| 1,649
|
MEMBER
| null |
# What does this PR do?
Removes our TF `set_tensor_by_indices_to_value` function and replaces all its uses with `tf.where`. They are the same, but with a different input order -- removing it means one fewer function to test while making the code easier to follow for TF users.
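A minimal before/after sketch of the swap, assuming the removed helper's `(tensor, boolean_mask, value)` ordering as reconstructed from the description above:
```python
import tensorflow as tf

scores = tf.constant([1.0, 2.0, 3.0])
banned = tf.constant([True, False, True])

# old: set_tensor_by_indices_to_value(scores, banned, -float("inf"))
# new: tf.where takes the boolean mask first
masked = tf.where(banned, -float("inf"), scores)
print(masked)  # [-inf, 2.0, -inf]
```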
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16729/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16729",
"html_url": "https://github.com/huggingface/transformers/pull/16729",
"diff_url": "https://github.com/huggingface/transformers/pull/16729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16729.patch",
"merged_at": 1649782308000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16728
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16728/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16728/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16728/events
|
https://github.com/huggingface/transformers/pull/16728
| 1,201,818,328
|
PR_kwDOCUB6oc42Gb9E
| 16,728
|
[FlaxSpeechEncoderDecoder] Fix input shape bug in weights init
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
The tuple `input_shape` is required in the `init` method of the FlaxSpeechEncoderDecoderModel in order to initialise the model weights - one must specify these input shapes to enable JAX to trace through the model dimensions.
This tuple consists of two entries: the encoder and decoder input lengths. Speech encoders almost always downsample the sequence length dimension. Given an encoder input length, the decoder input length is computed through a convolutional formula. This convolutional formula should take into consideration two convolution-based modules:
1. Feature extractor
2. Adapter module (optional)
Currently, only the first of these two convolution-based modules is accounted for. This PR amends the model script to account for the second of the two, i.e. the adapter module.
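An illustrative sketch of such a convolutional formula (the parameter names below are assumptions, not the exact config fields): chain the 1D conv output-length formula over the feature extractor's layers, then over the optional adapter's layers:
```python
def conv_out_length(length, kernel, stride):
    # standard output-length formula for a 1D convolution without padding
    return (length - kernel) // stride + 1

def encoder_output_length(length, conv_kernel, conv_stride,
                          num_adapter_layers=0, adapter_kernel=3, adapter_stride=2):
    for kernel, stride in zip(conv_kernel, conv_stride):
        length = conv_out_length(length, kernel, stride)  # feature extractor
    for _ in range(num_adapter_layers):
        length = conv_out_length(length, adapter_kernel, adapter_stride)  # adapter
    return length

# e.g. Wav2Vec2-style feature extractor settings
print(encoder_output_length(16000, conv_kernel=(10, 3, 3, 3, 3, 2, 2),
                            conv_stride=(5, 2, 2, 2, 2, 2, 2)))  # 49
```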
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16728/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16728",
"html_url": "https://github.com/huggingface/transformers/pull/16728",
"diff_url": "https://github.com/huggingface/transformers/pull/16728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16728.patch",
"merged_at": 1649784837000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16727
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16727/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16727/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16727/events
|
https://github.com/huggingface/transformers/pull/16727
| 1,201,757,881
|
PR_kwDOCUB6oc42GO18
| 16,727
|
Add image classification script, no trainer
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger not sure why, but the test for the script fails:\r\n```\r\nWARNING datasets.builder:builder.py:388 Using custom data configuration huggingface--image-classification-test-sample-b7448dc7ae37f2cf\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:388 ***** Running training *****\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:389 Num examples = 8\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:390 Num Epochs = 3\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:391 Instantaneous batch size per device = 2\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:392 Total train batch size (w. parallel, distributed & accumulation) = 2\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:393 Gradient Accumulation steps = 1\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:394 Total optimization steps = 12\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:471 epoch 0: {'accuracy': 0.0}\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:471 epoch 1: {'accuracy': 0.0}\r\nINFO run_image_classification_no_trainer:run_image_classification_no_trainer.py:471 epoch 2: {'accuracy': 0.0}\r\n```\r\nWeirdly, it passes locally for me.",
"I'm getting issues when only passing `id2label` and `label2id` to the config, but not the `num_labels`:\r\n\r\n```\r\nif size_average is not None or reduce is not None:\r\n reduction = _Reduction.legacy_get_string(size_average, reduce)\r\n> return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nE IndexError: Target 6 is out of bounds.\r\n```",
"Oh ok, shouldn't be the case. Let's put back the `num_labels` for now and I'll have a look later at why it failed to update properly."
] | 1,649
| 1,650
| 1,650
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds an example script for image classification that leverages Accelerate instead of the HuggingFace Trainer.
To do:
- [x] verify local `train_dir` and `validation_dir`
- [x] update README
- [x] add log fixes (Tensorboard)
These can be updated after #16585 is merged.
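For context, such a no-trainer script is built around the standard Accelerate loop; a minimal sketch (illustrative only, not the PR's actual code; `model`, `optimizer`, `train_dataloader`, and `num_epochs` are assumed to be defined elsewhere):
```python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)  # replaces loss.backward(), handles device placement and distribution
        optimizer.step()
        optimizer.zero_grad()
```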
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16727/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16727",
"html_url": "https://github.com/huggingface/transformers/pull/16727",
"diff_url": "https://github.com/huggingface/transformers/pull/16727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16727.patch",
"merged_at": 1650378728000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16726
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16726/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16726/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16726/events
|
https://github.com/huggingface/transformers/pull/16726
| 1,201,730,972
|
PR_kwDOCUB6oc42GI_w
| 16,726
|
[ASR pipeline] fix chunking
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Superseded by https://github.com/huggingface/transformers/pull/16730"
] | 1,649
| 1,649
| 1,649
|
MEMBER
| null |
# What does this PR do?
ASR chunking currently cuts final pieces of the transcription. The error lies in the postprocessing of the ASR pipeline.
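For reference, a usage sketch of the chunking entry point this touches (the model name and stride values are illustrative, not taken from this PR):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# Long audio is split into overlapping chunks; the bug dropped the final pieces
# when the chunk outputs were stitched back together in postprocessing.
out = asr("long_audio.wav", chunk_length_s=10, stride_length_s=(4, 2))
print(out["text"])
```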
Fixes https://github.com/huggingface/transformers/issues/16671
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16726/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16726",
"html_url": "https://github.com/huggingface/transformers/pull/16726",
"diff_url": "https://github.com/huggingface/transformers/pull/16726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16726.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16725
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16725/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16725/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16725/events
|
https://github.com/huggingface/transformers/pull/16725
| 1,201,710,030
|
PR_kwDOCUB6oc42GEcw
| 16,725
|
[FlaxWav2Vec2Model] Fix bug in attention mask
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,687
| 1,649
|
CONTRIBUTOR
| null |
Currently, the FlaxWav2Vec2 reduced attention mask is computed by calling the function `_get_feat_extract_output_lengths`, without explicit specification of whether an (optional) adapter module is used:
https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L959-L960
By default, if `add_adapter` is `None`, the boolean `add_adapter` will be set based on the `config`:
https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L1001-L1008
For this default setting, if the model contains an adapter module, then `add_adapter` will be set to `True`. This results in the convolutional formula including the downsampling performed by the convolutional layers in the feature extractor **and** the adapter module.
However, since the reduced attention mask is required for the encoder module, it should be computed based on the convolutional layers of the feature extractor **only**, and not those of the subsequent adapter module. This is highlighted by the PyTorch Wav2Vec2 modelling code:
https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1350-L1354
The following code snippet demonstrates the effect of this bug by means of a PyTorch-Flax cross-test:
```python
import torch
import numpy as np
from transformers import Wav2Vec2Model, FlaxWav2Vec2Model
import tempfile
import random

encoder_id = "hf-internal-testing/tiny-random-wav2vec2"

fx_model = FlaxWav2Vec2Model.from_pretrained(encoder_id, add_adapter=True, from_pt=True)

with tempfile.TemporaryDirectory() as tmpdirname:
    fx_model.save_pretrained(tmpdirname)
    pt_model = Wav2Vec2Model.from_pretrained(tmpdirname, config=fx_model.config, from_flax=True)

# create synthetic input data
def ids_tensor(shape, vocab_size, rng=None):
    """Creates a random int32 tensor of the shape within the vocab size."""
    if rng is None:
        rng = random.Random()

    total_dims = 1
    for dim in shape:
        total_dims *= dim

    values = []
    for _ in range(total_dims):
        values.append(rng.randint(0, vocab_size - 1))

    output = np.array(values).reshape(shape)
    return output

def random_attention_mask(shape, rng=None):
    attn_mask = ids_tensor(shape, vocab_size=2, rng=rng)
    # make sure that at least one token is attended to for each batch
    attn_mask[:, -1] = 1
    return attn_mask

def floats_tensor(shape, scale=1.0):
    """Creates a random float32 tensor"""
    total_dims = 1
    for dim in shape:
        total_dims *= dim

    values = []
    for _ in range(total_dims):
        values.append(np.random.randn() * scale)

    return np.array(values, dtype=np.float32).reshape(shape)

def fx_batch(batch_size=2, input_length=96000):
    input_ids = floats_tensor([batch_size, input_length])
    attention_mask = random_attention_mask([batch_size, input_length])
    fx_inputs = {
        "input_values": input_ids,
        "attention_mask": attention_mask,
    }
    return fx_inputs

fx_inputs = fx_batch()
pt_inputs = {k: torch.tensor(v.tolist()) for k, v in fx_inputs.items()}

fx_outputs = fx_model(**fx_inputs, output_hidden_states=True)
pt_outputs = pt_model(**pt_inputs, output_hidden_states=True)

# helper function for our analysis
def assert_almost_equals(a: np.ndarray, b: np.ndarray, tol: float = 1e-2):
    diff = np.abs(a - b).max()
    if diff < tol:
        print(f"✅ Difference between Flax and PyTorch is {diff} (< {tol})")
    else:
        print(f"❌ Difference between Flax and PyTorch is {diff} (>= {tol})")

print("--------------------------Checking encoder hidden states match--------------------------")
for fx_state, pt_state in zip(fx_outputs.hidden_states, pt_outputs.hidden_states):
    assert fx_state.shape == pt_state.shape
    assert_almost_equals(fx_state, pt_state.detach().numpy())

print("--------------------------Checking encoder last hidden states match--------------------------")
print(f"Encoder-decoder output shape: {fx_outputs.last_hidden_state.shape}, encoder-only output shape: {pt_outputs.last_hidden_state.shape}")
assert_almost_equals(fx_outputs.last_hidden_state, pt_outputs.last_hidden_state.detach().numpy())
```
Output prior to fix:
```
--------------------------Checking encoder hidden states match--------------------------
❌ Difference between Flax and PyTorch is 0.43152332305908203 (>= 0.01)
❌ Difference between Flax and PyTorch is 0.43074753880500793 (>= 0.01)
❌ Difference between Flax and PyTorch is 0.42613524198532104 (>= 0.01)
❌ Difference between Flax and PyTorch is 0.4301084578037262 (>= 0.01)
❌ Difference between Flax and PyTorch is 4.519614219665527 (>= 0.01)
--------------------------Checking encoder last hidden states match--------------------------
Encoder-decoder output shape: (2, 188, 16), encoder-only output shape: torch.Size([2, 188, 16])
✅ Difference between Flax and PyTorch is 0.0015139428433030844 (< 0.01)
```
Output following fix:
```
--------------------------Checking encoder hidden states match--------------------------
✅ Difference between Flax and PyTorch is 3.9674341678619385e-07 (< 0.01)
✅ Difference between Flax and PyTorch is 4.041939973831177e-07 (< 0.01)
✅ Difference between Flax and PyTorch is 4.041939973831177e-07 (< 0.01)
✅ Difference between Flax and PyTorch is 3.948807716369629e-07 (< 0.01)
✅ Difference between Flax and PyTorch is 4.947185516357422e-06 (< 0.01)
--------------------------Checking encoder last hidden states match--------------------------
Encoder-decoder output shape: (2, 188, 16), encoder-only output shape: torch.Size([2, 188, 16])
✅ Difference between Flax and PyTorch is 1.0913936421275139e-09 (< 0.01)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16725/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16725",
"html_url": "https://github.com/huggingface/transformers/pull/16725",
"diff_url": "https://github.com/huggingface/transformers/pull/16725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16725.patch",
"merged_at": 1649785705000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16724
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16724/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16724/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16724/events
|
https://github.com/huggingface/transformers/pull/16724
| 1,201,708,958
|
PR_kwDOCUB6oc42GENh
| 16,724
|
Add type hints GPT-J pytorch
|
{
"login": "ChTauchmann",
"id": 35799429,
"node_id": "MDQ6VXNlcjM1Nzk5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/35799429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChTauchmann",
"html_url": "https://github.com/ChTauchmann",
"followers_url": "https://api.github.com/users/ChTauchmann/followers",
"following_url": "https://api.github.com/users/ChTauchmann/following{/other_user}",
"gists_url": "https://api.github.com/users/ChTauchmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChTauchmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChTauchmann/subscriptions",
"organizations_url": "https://api.github.com/users/ChTauchmann/orgs",
"repos_url": "https://api.github.com/users/ChTauchmann/repos",
"events_url": "https://api.github.com/users/ChTauchmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChTauchmann/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
# What does this PR do?
Added type hints for GPT-J (PyTorch), following #16059.
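For readers unfamiliar with these PRs, the change is purely annotational; a hypothetical stub showing the style (not the actual GPT-J diff):
```python
from typing import Optional, Tuple, Union

import torch
from transformers.modeling_outputs import CausalLMOutputWithPast


class GPTJStub:  # hypothetical class, only to illustrate the annotation style
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        ...
```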
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16724/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16724",
"html_url": "https://github.com/huggingface/transformers/pull/16724",
"diff_url": "https://github.com/huggingface/transformers/pull/16724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16724.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16723
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16723/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16723/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16723/events
|
https://github.com/huggingface/transformers/pull/16723
| 1,201,687,597
|
PR_kwDOCUB6oc42F_lV
| 16,723
|
[Quicktour Audio] Improve && remove ffmpeg dependency
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,650
| 1,650
|
MEMBER
| null |
# What does this PR do?
Fixes #16563
As discussed in #16563, it's not good if the official quicktour example depends on `ffmpeg`. Let's instead let `datasets` handle the audio loading and resampling here. IMO, it's also important to directly showcase how to resample the audio.
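A rough sketch of the `datasets`-based approach (the dataset name here is only an example, not necessarily the one used in the quicktour):
```python
from datasets import load_dataset, Audio

# load an audio dataset and let `datasets` decode + resample on the fly
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

sample = dataset[0]["audio"]  # dict with "array" and "sampling_rate" == 16000
```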
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16723/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16723",
"html_url": "https://github.com/huggingface/transformers/pull/16723",
"diff_url": "https://github.com/huggingface/transformers/pull/16723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16723.patch",
"merged_at": 1650293413000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16722
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16722/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16722/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16722/events
|
https://github.com/huggingface/transformers/pull/16722
| 1,201,305,981
|
PR_kwDOCUB6oc42ErPF
| 16,722
|
[Bart] correct doc test
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
MEMBER
| null |
# What does this PR do?
Fixes the doc test `transformers.models.bart.modeling_bart.BartForConditionalGeneration.forward` after @gante's https://github.com/huggingface/transformers/pull/16668
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16722/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16722",
"html_url": "https://github.com/huggingface/transformers/pull/16722",
"diff_url": "https://github.com/huggingface/transformers/pull/16722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16722.patch",
"merged_at": 1649751589000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16721
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16721/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16721/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16721/events
|
https://github.com/huggingface/transformers/issues/16721
| 1,201,028,598
|
I_kwDOCUB6oc5Hlj32
| 16,721
|
ResumableUploadAbortException: 409 The object has already been created in an earlier attempt and was overwritten, possibly due to a race condition.
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,653
| 1,653
|
NONE
| null |
I am fine-tuning a masked language model from XLM-RoBERTa large on the Google machine spec below.
When I copy the model from the container to a GCP bucket using `gsutil` via `subprocess`, it gives me an error.
### Versions
torch==1.11.0+cu113
torchvision==0.12.0+cu113
torchaudio==0.11.0+cu113
transformers==4.17.0
I am using a pre-trained Hugging Face model.
I launch it as a `train.py` file which I copy inside the Docker image, and I use Vertex AI (GCP) to launch it with a `ContainerSpec`:
`machineSpec = MachineSpec(machine_type="a2-highgpu-4g",accelerator_count=4,accelerator_type="NVIDIA_TESLA_A100")`
```
python -m torch.distributed.launch --nproc_per_node 4 train.py --bf16
```
I am using
https://huggingface.co/xlm-roberta-large
```
tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large",local_files_only=True)
model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large", return_dict=True,local_files_only=True)
```
**Training Code**
```
training_args = tr.TrainingArguments(
    output_dir="****",
    logging_dir="****",  # directory for storing logs
    save_strategy="epoch",
    run_name="****",
    learning_rate=2e-5,
    logging_steps=1000,
    overwrite_output_dir=True,
    num_train_epochs=10,
    per_device_train_batch_size=4,
    prediction_loss_only=True,
    gradient_accumulation_steps=2,
    # gradient_checkpointing=True,
    bf16=True,  # 57100
    optim="adafactor",
)

trainer = tr.Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_data,
)
```
**Train.py**
```
import os
import pickle
import random
import subprocess as sp
import time

import pandas as pd
import torch
import transformers as tr
from transformers import DataCollatorForLanguageModeling

# torch.cuda.empty_cache()

start = time.time()

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print("Using", device)

torch.backends.cudnn.deterministic = True
tr.trainer_utils.set_seed(0)
print("here")

tokenizer = tr.XLMRobertaTokenizer.from_pretrained("xlm-roberta-large", local_files_only=True)
model = tr.XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-large", return_dict=True, local_files_only=True)
model.gradient_checkpointing_enable()  # included as new line
print("included gradient checkpoint")

model.to(device)
print("Model loaded successfully")

df = pd.read_csv("data.csv")
train_df = df.text.tolist()
print(len(train_df))
train_df = list(set(train_df))
train_df = [x for x in train_df if str(x) != "nan"]
print("Length of training data is \n", len(train_df))
print("DATA LOADED successfully")

train_encodings = tokenizer(train_df, truncation=True, padding=True, max_length=512, return_tensors="pt")
print("encoding done")

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
print("data collector done")


class SEDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        return item

    def __len__(self):
        return len(self.encodings["attention_mask"])


train_data = SEDataset(train_encodings)
print("train data created")

training_args = tr.TrainingArguments(
    output_dir="results_mlm_exp1",
    logging_dir="logs_mlm_exp1",  # directory for storing logs
    save_strategy="epoch",
    learning_rate=2e-5,
    logging_steps=500,
    overwrite_output_dir=True,
    num_train_epochs=20,
    per_device_train_batch_size=4,
    prediction_loss_only=True,
    gradient_accumulation_steps=2,
    bf16=True,  # Ampere GPU
)

trainer = tr.Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_data,
)

trainer.train()
print("model training finished")
trainer.save_model("model_mlm_exp1")
print("training finished")

end = time.time()
print("total time taken in hours is", (end - start) / 3600)
```
**Error**
trainer.save_model("model_mlm_exp1")
subprocess.call('gsutil cp -r /pythonPackage/trainer/model_mlm_exp1 gs://******/model_mlm_exp1', shell=True, stdout=subprocess.PIPE)
ERROR ResumableUploadAbortException: 409 The object has already been created in an earlier attempt and was overwritten, possibly due to a race condition.
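One plausible cause (an assumption, not confirmed in this report): with `--nproc_per_node 4`, all four workers execute the same `gsutil cp`, so the same object is written concurrently, which matches the 409 race-condition message. A minimal guard that uploads from rank 0 only:
```python
import os
import subprocess

# Hypothetical fix: only the rank-0 worker uploads the saved model.
# torch.distributed.launch sets the RANK environment variable per worker.
if int(os.environ.get("RANK", "0")) == 0:
    subprocess.call(
        "gsutil cp -r /pythonPackage/trainer/model_mlm_exp1 gs://******/model_mlm_exp1",
        shell=True,
        stdout=subprocess.PIPE,
    )
```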
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16721/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16720
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16720/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16720/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16720/events
|
https://github.com/huggingface/transformers/pull/16720
| 1,201,020,152
|
PR_kwDOCUB6oc42DqFt
| 16,720
|
Replace assertion with exception
|
{
"login": "anmolsjoshi",
"id": 17307490,
"node_id": "MDQ6VXNlcjE3MzA3NDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17307490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anmolsjoshi",
"html_url": "https://github.com/anmolsjoshi",
"followers_url": "https://api.github.com/users/anmolsjoshi/followers",
"following_url": "https://api.github.com/users/anmolsjoshi/following{/other_user}",
"gists_url": "https://api.github.com/users/anmolsjoshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anmolsjoshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anmolsjoshi/subscriptions",
"organizations_url": "https://api.github.com/users/anmolsjoshi/orgs",
"repos_url": "https://api.github.com/users/anmolsjoshi/repos",
"events_url": "https://api.github.com/users/anmolsjoshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/anmolsjoshi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @LysandreJik thanks for the review! I have incorporated all the requested changes. ",
"@sgugger unsure why the PR Documentation check is failing",
"Failure is unrelated to this PR and is fixed independently.\r\nThanks a lot for addressing all comments!"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# What does this PR do?
Replaces `assert` statements with exceptions, as per https://github.com/huggingface/transformers/issues/12789.
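The conversion pattern looks roughly like this (a made-up example, not a specific diff from this PR):
```python
padding_value = -1  # example input

# before: an assertion, silently skipped when Python runs with -O
# assert padding_value >= 0, f"padding_value must be non-negative, got {padding_value}"

# after: an explicit exception that always fires
if padding_value < 0:
    raise ValueError(f"padding_value must be non-negative, got {padding_value}")
```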
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16720/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16720",
"html_url": "https://github.com/huggingface/transformers/pull/16720",
"diff_url": "https://github.com/huggingface/transformers/pull/16720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16720.patch",
"merged_at": 1649778421000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16719
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16719/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16719/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16719/events
|
https://github.com/huggingface/transformers/issues/16719
| 1,201,016,641
|
I_kwDOCUB6oc5Hlg9B
| 16,719
|
[modeling] keys to ignore revisited
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'll address point 4 for now. It looks like you are just missing an `ignore_mismatched_sizes=True`. `_keys_to_ignore_on_save` does not impact the loading, as its name indicates, and there is no `_keys_to_ignore_on_load` as if a weight should be ignored on load, it shouldn't be in the checkpoint in the first place.\r\n\r\nI'll comment more on points 1 to 3 when I have more time.",
"Thank you, Sylvain.\r\n\r\n1. I can't pass `ignore_mismatched_sizes` since I'm using examples in the tests\r\n2. Unless I'm missing something `ignore_mismatched_sizes` is an incorrect solution for positional encodings since if the size does match it'd still load and use the key and it shouldn't load any of the keys inside `_keys_to_ignore_on_save` - these need to be generated by the model and not overwritten.\r\n\r\nI can of course just make a new tiny checkpoint that doesn't have this problem in the first place, but I think it's a good exercise at validating paths that are quite undefined behavior-wise.",
"I'll let @patil-suraj and @patrickvonplaten give their advice on how to solve this problem with M2M100, but personally very much against any mechanism that will ignore keys in a checkpoint as it's a multitude of bugs waiting to happen. If keys are not supposed to be in a checkpoint, they should just not be inside it.",
"wrt to point 4 there is no problem with M2M100 per se, it just happened to be one of the tests that is failing since the tiny checkpoint was created in a way that made it somewhat inflexible and a new checkpoint can be made instead.",
"Regarding your other questions:\r\n\r\n1. It depends and that's some tricky logic of the `from_pretrained` method. The prefix will be removed/added in the model state_dict keys when the model you are using expects it or not, depending on whether the checkpoint has it or not. This is to deal with model with heads vs base models and make sure that you can load a checkpoint of a model with head in a base model and vice versa.\r\n\r\n2. `_keys_to_ignore_on_load_missing` and `_keys_to_ignore_on_load_unexpected` use re, so the dot should be escaped. Absolutely no problem on my side to have the same for `_keys_to_ignore_on_save` which currently does not use `re`, so should not escape the .\r\n\r\n3. `_keys_to_ignore_on_load_missing` -> those are keys that should be removed from the list of missing keys we find (keys inside the model but not in the checkpoint)\r\n`_keys_to_ignore_on_load_unexpected` -> those are keys that should be removed from the list of unexpected keys we find (keys inside the checkpoint but not the model)\r\nComments should be clearer, I completely agree!\r\n",
"> Regarding your other questions:\r\n> \r\n> 1. It depends and that's some tricky logic of the `from_pretrained` method. The prefix will be removed/added in the model state_dict keys when the model you are using expects it or not, depending on whether the checkpoint has it or not. This is to deal with model with heads vs base models and make sure that you can load a checkpoint of a model with head in a base model and vice versa.\r\n\r\nYes, and so how does one define the keys to ignore wrt prefix? It sounds like `base_model_prefix` should be excluded. Which means that this is incorrect then:\r\n\r\nhttps://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/m2m_100/modeling_m2m_100.py#L1215-L1227\r\n\r\n(and the same issue afflicts many other model files with deterministic positional encodings)\r\n\r\n> 2. `_keys_to_ignore_on_load_missing` and `_keys_to_ignore_on_load_unexpected` use re, so the dot should be escaped. Absolutely no problem on my side to have the same for `_keys_to_ignore_on_save` which currently does not use `re`, so should not escape the .\r\n\r\nYes, please! Let's make it consistent!\r\n\r\n> 3. `_keys_to_ignore_on_load_missing` -> those are keys that should be removed from the list of missing keys we find (keys inside the model but not in the checkpoint)\r\n> `_keys_to_ignore_on_load_unexpected` -> those are keys that should be removed from the list of unexpected keys we find (keys inside the checkpoint but not the model)\r\n> Comments should be clearer, I completely agree!\r\n\r\nSuper. That's a way easier to understand. Thank you! Made a PR here: https://github.com/huggingface/transformers/pull/16741",
"do we want to resolve this or let it lapse?",
"On my side, it's just missing the point 2, as you solved point 3. Do you want to make a PR or should I do it?\r\n\r\nFor point 4, pinging again @patrickvonplaten and @patil-suraj ",
"> On my side, it's just missing the point 2, as you solved point 3. Do you want to make a PR or should I do it?\r\n\r\nso we do we want to escape it or just have it unescaped everywhere? `r'.'` will just match any char, including the actual `.` , and the keys are usually quite unique to fail to match w/o escaping.\r\n\r\nIMHO just having the unescaped `.` everywhere is more readable and easier to copy-n-paste/extend/etc.",
"Agreed!",
"Thank you! \r\n\r\nOK, I will make a PR then.\r\n",
"hmm, as I started working on it I see that it'd be tricky to make it consistent w/o backslashes, as some keys have regex bits in them as in:\r\n\r\n```\r\nsrc/transformers/models/gptj/modeling_gptj.py: _keys_to_ignore_on_load_missing = \r\n[r\"h\\.\\d+\\.attn\\.masked_bias\", r\"h\\.\\d+\\.attn\\.bias\", r\"lm_head\\.weight\"]\r\n```\r\n\r\nNot sure then. Thoughts?\r\n",
"Those who have clear regex patterns should be escaped and use \\., for the ones that only use strings, I think it's okay to just leave the dot as is.",
"> Those who have clear regex patterns should be escaped and use ., \r\n\r\nDid you mean to say:\r\n\r\n> Those who have clear regex patterns should be escaped and use `\\.`,...\r\n\r\n? \r\n\r\nor as an example:\r\n\r\n```\r\n_keys_to_ignore_on_load_missing = \r\n[r\"h\\.\\d+\\.attn\\.masked_bias\", r\"h\\.\\d+\\.attn\\.bias\", r\"lm_head\\.weight\"]\r\n```\r\nshould remain as is, right?",
"Yes, sorry about the confusion.",
"So https://github.com/huggingface/transformers/pull/17722 will resolve item (2).\r\n\r\nSo point (4) is remaining to be resolved.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
If possible let's please revisit:
1. `_keys_to_ignore_on_save `
2. `_keys_to_ignore_on_load_unexpected`
3. `_keys_to_ignore_on_load_missing`
I'm trying to debug a key that refuses to be ignored on load, and I'm not sure whether I'm setting it correctly in all those `keys_to_ignore_*` patterns.
-----------
1. should the keys include the model prefix or not? e.g. here it's a mixed bunch:
https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/m2m_100/modeling_m2m_100.py#L1215-L1227
should they all have the `model.` prefix, or all not have it?
2. should we consistently escape the `\.` or not? Again see the example above for a mixed bunch.
I know I was adding non-escaped keys, because there was no ambiguity in things like `encoder.embed_positions.weights` - do we ever need to escape it? Whatever the decision, I ask that we use a consistent way, so that when things don't work it's easy to know how it should be written correctly.
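A quick illustration of why the escaping question matters (using the key above; a minimal sketch, not library code):
```python
import re

pat = r"encoder.embed_positions.weights"  # unescaped "." matches ANY character
print(bool(re.search(pat, "encoder.embed_positions.weights")))   # True, as intended
print(bool(re.search(pat, "encoderXembed_positionsXweights")))   # also True -- an accidental match
```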
3. I'm not very clear on the naming of the last 2 keys. At the point of the model itself it's hard to remember what they mean, and their explanation is really hard to understand. Could the following explanation be revised? I have a hard time parsing this text:
https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/modeling_utils.py#L726-L731
4. I think the logic for defining which keys not to load is either completely missing or incomplete.
I'm trying to tell m2m_100 not to load `encoder.embed_positions.weights` (and the same for the decoder). I added it to all 3 keys-to-ignore lists and it still loads it, which is invalid since the [model](https://huggingface.co/hf-internal-testing/tiny-random-m2m_100/blob/main/config.json) has these saved, and I want to load a model with a different `max_position_embeddings` value but can't.
```
stderr: RuntimeError: Error(s) in loading state_dict for M2M100ForConditionalGeneration:
stderr: size mismatch for model.encoder.embed_positions.weights: copying a param with shape torch.Size([22, 16]) from checkpoint, the shape in current model is torch.Size([514, 16]).
stderr: size mismatch for model.decoder.embed_positions.weights: copying a param with shape torch.Size([22, 16]) from checkpoint, the shape in current model is torch.Size([514, 16]).
```
Either the current logic needs to be further refined, or we need a new key such as `_keys_to_ignore_on_load_always`.
The current logic is here:
https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/modeling_utils.py#L1921-L1964
It's easy to see how it fails if `set(expected_keys) == set(loaded_keys)`, which is the case in this situation:
https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/modeling_utils.py#L1953-L1954
I think the "bug" is here:
```
expected_keys = list(model_state_dict.keys())
```
This further needs to be processed to remove `_keys_to_ignore_on_save`, since they are not expected even if they are in the model.
I think the logic is missing and I propose to fix it with this additional chunk (first if):
1.
```
if cls._keys_to_ignore_on_save is not None:
    for pat in cls._keys_to_ignore_on_save:
        expected_keys = [k for k in expected_keys if re.search(pat, k) is None]

missing_keys = list(set(expected_keys) - set(loaded_keys))
unexpected_keys = list(set(loaded_keys) - set(expected_keys))
```
2. and it never removes the `unexpected_keys` from the `state_dict` - so all of these still get loaded in `_load_state_dict_into_model`, which doesn't get a list of keys to load and loads everything from the `state_dict`
------------
If I piled up too many issues together, please let me know and I will split them up; they just all seem to be interconnected.
Thank you!
@LysandreJik, @sgugger, @patrickvonplaten, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16719/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16719/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16718
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16718/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16718/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16718/events
|
https://github.com/huggingface/transformers/pull/16718
| 1,200,977,656
|
PR_kwDOCUB6oc42Dgat
| 16,718
|
[WIP]-Add Fast Pitch 1.1
|
{
"login": "ArEnSc",
"id": 6252325,
"node_id": "MDQ6VXNlcjYyNTIzMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6252325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArEnSc",
"html_url": "https://github.com/ArEnSc",
"followers_url": "https://api.github.com/users/ArEnSc/followers",
"following_url": "https://api.github.com/users/ArEnSc/following{/other_user}",
"gists_url": "https://api.github.com/users/ArEnSc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArEnSc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArEnSc/subscriptions",
"organizations_url": "https://api.github.com/users/ArEnSc/orgs",
"repos_url": "https://api.github.com/users/ArEnSc/repos",
"events_url": "https://api.github.com/users/ArEnSc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArEnSc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the PR @ArEnSc!\r\n\r\nLet us know if you need assistance at any point!",
"> Thanks for the PR @ArEnSc!\r\n> \r\n> Let us know if you need assistance at any point!\r\n\r\nWill do! have been lagging on this due to family and my day job =)",
"update: going to continue to work on this soon",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still working on this, just doing some reading working on burn out.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"closing for now while I deal with covid =("
] | 1,649
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds FastPitch 1.1 ([issue](https://github.com/huggingface/transformers/issues/16349)).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16718/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16718",
"html_url": "https://github.com/huggingface/transformers/pull/16718",
"diff_url": "https://github.com/huggingface/transformers/pull/16718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16718.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16717
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16717/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16717/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16717/events
|
https://github.com/huggingface/transformers/pull/16717
| 1,200,735,956
|
PR_kwDOCUB6oc42Co0E
| 16,717
|
[deepspeed / m2m_100] make deepspeed zero-3 work with layerdrop
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
Same as the fix I had to make in `wav2vec2`, it looks like this fix should eventually go to all models that use `LayerDrop`. At least at the moment, Deepspeed is not capable of randomly skipping layers, so this PR uses the same now well-tested workaround I used in `wav2vec2`: when deepspeed zero-3 is detected, all layers always run, but the results are ignored if the layer was meant to be skipped.
https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L817-L849
Perhaps one day Deepspeed will be able to randomly skip layers; at the moment the solution is not the most efficient one. I made a [request](https://github.com/microsoft/DeepSpeed/issues/1888).
When ZeRO-3 is not used, the original code path is taken.
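For illustration, the pattern boils down to something like this (a simplified sketch, not the exact `transformers` code; the helper and its arguments are made up for the example):
```python
import torch

def maybe_run_layer(layer, hidden_states, attention_mask, layerdrop, training, zero3_enabled):
    # LayerDrop decides whether this layer is skipped for this forward pass.
    skip = training and torch.rand(()).item() < layerdrop
    # Under ZeRO-3 the layer must always execute so that every rank fetches
    # parameters in the same deterministic order...
    if not skip or zero3_enabled:
        outputs = layer(hidden_states, attention_mask=attention_mask)
        # ...but its output is only kept when the layer was not skipped.
        if not skip:
            hidden_states = outputs[0]
    return hidden_states
```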
The test exercising this code path will be merged as part of the large additional tests PR https://github.com/huggingface/transformers/pull/12695 (it's been long overdue).
For posterity, the error for this issue will look something like:
```
RuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be
({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}})
but got
({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}).
```
Fixes: https://github.com/huggingface/transformers/issues/16688
@patil-suraj, @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16717/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16717",
"html_url": "https://github.com/huggingface/transformers/pull/16717",
"diff_url": "https://github.com/huggingface/transformers/pull/16717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16717.patch",
"merged_at": 1649944315000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16716
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16716/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16716/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16716/events
|
https://github.com/huggingface/transformers/issues/16716
| 1,200,723,750
|
I_kwDOCUB6oc5HkZcm
| 16,716
|
Predicting incorrect loss when eval data size is not a multiple of batch size
|
{
"login": "ajindal1",
"id": 32752809,
"node_id": "MDQ6VXNlcjMyNzUyODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/32752809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajindal1",
"html_url": "https://github.com/ajindal1",
"followers_url": "https://api.github.com/users/ajindal1/followers",
"following_url": "https://api.github.com/users/ajindal1/following{/other_user}",
"gists_url": "https://api.github.com/users/ajindal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajindal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajindal1/subscriptions",
"organizations_url": "https://api.github.com/users/ajindal1/orgs",
"repos_url": "https://api.github.com/users/ajindal1/repos",
"events_url": "https://api.github.com/users/ajindal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajindal1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"No, the evaluation loss is properly computed thanks to this line actually. Repeating it the number of times then truncating to the length of the dataset [here](https://github.com/huggingface/transformers/blob/924484ee4a6ebc79426d27eef31a1ee7d13cbb9a/src/transformers/trainer.py#L2551) makes the final evaluation loss the proper average of all losses.\r\n\r\nAs for the test not passing, I think you are running it on 2 GPUs? It's only intended to work on one.",
"Thank you for the quick reply. Yes, I was running the code on 2 GPUs and it works fine on 1 GPU. May I ask why is it intended to work on 1 GPU?",
"The batch size is actually wrong in that case. Pushing a fix!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,653
| 1,653
|
NONE
| null |
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.4.0-96-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.12.0.dev20220411+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.5
- JaxLib version: 0.3.5
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
## Issue:
When the input data size is not a multiple of batch_size, the calculated loss seems wrong to me. As shown in this line https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/trainer.py#L2469 the loss is repeated batch_size times, which does not make sense for the last batch, whose size is smaller than batch_size. This also leads to the failure of the HF test case (tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate) when I run it on my device.
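For context, here is a minimal numpy sketch (hypothetical numbers) of the repeat-then-truncate mechanism in the single-process case:
```python
import numpy as np

# Hypothetical per-batch mean losses for 5 examples with batch_size=2:
# two full batches and one final batch containing a single example.
batch_losses = [0.4, 0.6, 1.0]
repeated = np.concatenate([np.repeat(loss, 2) for loss in batch_losses])  # length 6
final = repeated[:5].mean()  # truncate to dataset length -> (0.4+0.4+0.6+0.6+1.0)/5 = 0.6
```
As the discussion above notes, truncation drops the padded repeats of the last batch, so the single-GPU average comes out right; the failure reported here occurred in a multi-GPU run.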
## To reproduce
Steps to reproduce the behavior:
1. Install pytest
2. RUN: pytest tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate
Error:
FAILED tests/trainer/test_trainer.py::TrainerIntegrationTest::test_evaluate - AssertionError: 0.517515242099762 != 0.41851458 within 7 places (0.09900066256523132 difference)
## Expected behavior
The test should pass.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16716/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16715
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16715/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16715/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16715/events
|
https://github.com/huggingface/transformers/pull/16715
| 1,200,655,165
|
PR_kwDOCUB6oc42CW_x
| 16,715
|
fix image type DETR feature extraction for panoptic segmentation
|
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4235521865,
"node_id": "LA_kwDOCUB6oc78dO9J",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20extractors",
"name": "Feature extractors",
"color": "c2e0c6",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16715). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,655
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
The `__call__` function of the DETR feature extractor expects an object with a `shape` attribute when `pad_and_return_pixel_mask=True`.
If you try to use the feature extractor with `do_resize=False` and `do_normalize=False`, it will crash on https://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/detr/feature_extraction_detr.py#L584
because an image passed in PIL format first needs to be converted to an object with a `shape`. This PR performs that conversion to a NumPy array, fixing it in `prepare_coco_panoptic`.
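For illustration, a minimal sketch of the workaround on the caller side (the checkpoint and file path are just examples):
```python
import numpy as np
from PIL import Image
from transformers import DetrFeatureExtractor

feature_extractor = DetrFeatureExtractor.from_pretrained(
    "facebook/detr-resnet-50-panoptic", do_resize=False, do_normalize=False
)
image = Image.open("example.png")  # PIL images have .size, not .shape
image = np.asarray(image)          # ndarray exposes .shape, which the pixel-mask padding reads
encoding = feature_extractor(images=image, return_tensors="pt")
```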
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16715/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16715",
"html_url": "https://github.com/huggingface/transformers/pull/16715",
"diff_url": "https://github.com/huggingface/transformers/pull/16715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16715.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16714
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16714/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16714/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16714/events
|
https://github.com/huggingface/transformers/issues/16714
| 1,200,650,451
|
I_kwDOCUB6oc5HkHjT
| 16,714
|
ValueError if answer in truncated table rows and columns in Tapas tokenization
|
{
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Ubuntu 20 LTS
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0
- Tensorflow version (GPU?): 2.8.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
TAPAS: @NielsRogge
## Information
Model I am using (TAPAS):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Description
Tapas tokenization truncates long tables to `num_rows` when the truncation strategy is `drop_rows_to_fit`. However, it does not remove the dropped rows and columns from `answer_coordinates`. If `answer_coordinates` contains dropped rows, `_get_answer_ids` raises `ValueError: Couldn't find all answers` due to a mismatch between `row_ids`, `col_ids` and `answer_coordinates`.
## To reproduce
Steps to reproduce the behavior:
1. Initialize a large table, and required fields. Answer coordinates exceeds the `model_max_length`
```python
import numpy as np
import pandas as pd
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
tab = np.random.choice(5, 513)
table = pd.DataFrame(data=tab, columns=["Value"]).astype('str')
answer_texts = [table.iloc[512]["Value"]]
answer_coordinates=[(512,0)]
question="dummy question"
```
2. Tokenize the large table
```python
encoding = tokenizer(
table=table,
queries=question,
answer_coordinates=answer_coordinates,
answer_text=answer_texts,
truncation=True,
padding="max_length",
return_tensors="pt",
)
```
Output:
```python
Traceback (most recent call last):
File "/py3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-2827f7bd32ad>", line 1, in <module>
encoding = tokenizer(
File "/py3lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 624, in __call__
return self.encode_plus(
File "/py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 990, in encode_plus
return self._encode_plus(
File "/py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1044, in _encode_plus
return self.prepare_for_model(
File "//py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1203, in prepare_for_model
labels = self.get_answer_ids(column_ids, row_ids, table_data, answer_text, answer_coordinates)
File "/py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1789, in get_answer_ids
return self._get_answer_ids(column_ids, row_ids, answer_coordinates_question)
File /py3/lib/python3.8/site-packages/transformers/models/tapas/tokenization_tapas.py", line 1778, in _get_answer_ids
raise ValueError("Couldn't find all answers")
ValueError: Couldn't find all answers
```
## Expected behavior
Remove truncated rows and columns from `answer_coordinates` and return the truncated `labels`. Throw an exception if `answer_coordinates` becomes empty after truncation.
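A minimal sketch of the proposed filtering (a hypothetical helper; `num_kept_rows`/`num_kept_columns` stand in for whatever the truncation logic keeps):
```python
def filter_answer_coordinates(answer_coordinates, num_kept_rows, num_kept_columns):
    # Keep only coordinates that survived table truncation.
    kept = [(r, c) for r, c in answer_coordinates
            if r < num_kept_rows and c < num_kept_columns]
    if not kept:
        raise ValueError("All answer coordinates were dropped by truncation")
    return kept
```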
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16714/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16713
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16713/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16713/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16713/events
|
https://github.com/huggingface/transformers/pull/16713
| 1,200,586,802
|
PR_kwDOCUB6oc42CIAC
| 16,713
|
TF generate refactor - XLA sample
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patrickvonplaten the `stateless` TF functions accept a `seed` argument that is a tuple of two integers 😅 Not very intuitive, I agree. They correspond to the `key` and `counter` used in the internal RNG algorithms ([source](https://www.tensorflow.org/api_docs/python/tf/random/Generator#from_key_counter)).\r\n\r\nIf you think it will be unintuitive for users, I can change it so that our `seed` argument corresponds to the `key` of the tuple (i.e. a single integer), and fix the `counter` to `0`. For practical purposes, it should be the same thing.",
"> @patrickvonplaten the `stateless` TF functions accept a `seed` argument that is a tuple of two integers sweat_smile Not very intuitive, I agree. They correspond to the `key` and `counter` used in the internal RNG algorithms ([source](https://www.tensorflow.org/api_docs/python/tf/random/Generator#from_key_counter)).\r\n> \r\n> If you think it will be unintuitive for users, I can change it so that our `seed` argument corresponds to the `key` of the tuple (i.e. a single integer), and fix the `counter` to `0`. For practical purposes, it should be the same thing.\r\n\r\nI see - ok maybe better to leave as is then to be aligned with TF",
"While running tests for T5 (as suggested by @Rocketknight1), I found out that our XLA code is not behaving properly for T5, for both `sample` and `greedy_search`. Because the problem is not exclusive to `sample`, I'm merging this PR and fixing the issue in a future one.\r\n\r\n(example)\r\n\r\n"
] | 1,649
| 1,650
| 1,650
|
MEMBER
| null |
# What does this PR do?
This PR brings XLA to `sample`, in `generate`. Four important details before reviewing:
1. The diff has the changes of https://github.com/huggingface/transformers/pull/16704, review that PR first plz :) It fixes a test from `beam_search`. I will rebase as soon as the other PR gets merged (the changes were bundled to confirm that it passes all generate tests).
2. The body is mostly copy/paste from `greedy_search`;
3. The sample step was changed from the previous implementation -- if we want to seed sampling with XLA, we need to use the `stateless` functions (see the sketch after this list);
4. The XLA sample tests do not compare all generated tokens to their non-XLA sample counterparts, due to the numerical instabilities discussed on Slack. We do compare the first tokens, which are the same.
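As a minimal illustration of the stateless API (not the actual `generate` implementation; shapes and the seed value are arbitrary):
```python
import tensorflow as tf

@tf.function(jit_compile=True)
def sample_next_token(logits, seed):
    # seed is a shape-[2] (key, counter) tensor; for a fixed pair the draw is
    # deterministic, which makes it safe inside an XLA-compiled function.
    return tf.squeeze(
        tf.random.stateless_categorical(logits, num_samples=1, seed=seed), axis=-1
    )

logits = tf.random.normal((2, 50257))  # [batch, vocab_size]
tokens = sample_next_token(logits, seed=tf.constant([42, 0], dtype=tf.int32))
```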
Finally, tests have been run for the usual models (`gpt2`, `t5`, `rag`, `speech2text`, `encoder_decoder`, `vision_encoder_decoder`, `bart`).
____________________________
I've also run a quick sanity check on GPU. Using GPT2+sample, on an Nvidia T4:
- eager TF: ~1.7s
- XLA TF: ~54ms (~22s compile time) :point_right: 31x speedup
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16713/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16713/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16713",
"html_url": "https://github.com/huggingface/transformers/pull/16713",
"diff_url": "https://github.com/huggingface/transformers/pull/16713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16713.patch",
"merged_at": 1650275904000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16712
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16712/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16712/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16712/events
|
https://github.com/huggingface/transformers/pull/16712
| 1,200,543,780
|
PR_kwDOCUB6oc42B-u4
| 16,712
|
Run the scheduled tests
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16712). All of your documentation changes will be reflected on that endpoint."
] | 1,649
| 1,664
| null |
MEMBER
| null |
:warning: Do not merge this PR!
PR 2/2: in order to finish running the suite, rebase this PR on [`test-tokenizers-main`](https://github.com/huggingface/transformers/tree/test-tokenizers-main), the branch of PR https://github.com/huggingface/transformers/pull/16708.
---
This PR builds on top of https://github.com/huggingface/transformers/pull/16708.
It leverages the docker images created in the PR above, and updates the channel in which to report the tests to be a dummy one.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16712/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16712",
"html_url": "https://github.com/huggingface/transformers/pull/16712",
"diff_url": "https://github.com/huggingface/transformers/pull/16712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16712.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16711
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16711/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16711/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16711/events
|
https://github.com/huggingface/transformers/issues/16711
| 1,200,515,615
|
I_kwDOCUB6oc5Hjmof
| 16,711
|
AutoModelForMaskedLM produces NaN if no token is masked
|
{
"login": "nreimers",
"id": 10706961,
"node_id": "MDQ6VXNlcjEwNzA2OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nreimers",
"html_url": "https://github.com/nreimers",
"followers_url": "https://api.github.com/users/nreimers/followers",
"following_url": "https://api.github.com/users/nreimers/following{/other_user}",
"gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nreimers/subscriptions",
"organizations_url": "https://api.github.com/users/nreimers/orgs",
"repos_url": "https://api.github.com/users/nreimers/repos",
"events_url": "https://api.github.com/users/nreimers/events{/privacy}",
"received_events_url": "https://api.github.com/users/nreimers/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @nreimers, thanks for the issue! I think the propositions are sensible, what do you think @sgugger ?",
"I think this should be solved in the `MaskedLM` directly to return a loss of 0.0 and no NaNs.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any fix on this? I'm still facing this issue."
] | 1,649
| 1,681
| 1,653
|
CONTRIBUTOR
| null |
This is more a discussion of whether a bugfix is wanted (and if so, what it should look like). I already fixed it locally in my training script.
**Background**
I currently run MLM pre-training for large models where each batch can only consist of a single example. In some cases, the text can be rather short, for example, just a sentence. Here it can happen that the `DataCollatorForLanguageModeling` does not mask any token, and the computed loss is NaN, which causes problems downstream in the multi-processing script, as a NaN loss cannot be correctly back-propagated & shared across the workers.
Here a short simplified script that shows the problem:
```python
from transformers import DataCollatorForLanguageModeling, AutoTokenizer
from transformers import AutoModelForMaskedLM
model_name = "nreimers/BERT-Tiny_L-2_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
coll = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
print("MASK token:", tokenizer.convert_tokens_to_ids(tokenizer.mask_token))
text = "This is an example"
data = tokenizer(text, padding=True, max_length=512, return_special_tokens_mask=True)
model = AutoModelForMaskedLM.from_pretrained(model_name)
for _ in range(10):
mini_batch = coll([data]) #Is equivalent to getting a mini batch from a DataSet / DataLoader
print("Input", mini_batch['input_ids'])
print("Labels", mini_batch['labels'])
output = model(**mini_batch)
print(output.loss)
print("------")
```
Output:
```
// Here tokens 1-3 were masked
Input tensor([[ 101, 103, 103, 2019, 2742, 102]])
Labels tensor([[-100, 2023, 2003, 2019, -100, -100]])
Loss tensor(0.9181, grad_fn=<NllLossBackward0>)
------
// Here no tokens were masked
Input tensor([[ 101, 2023, 2003, 2019, 2742, 102]])
Labels tensor([[-100, -100, -100, -100, -100, -100]])
Loss tensor(nan, grad_fn=<NllLossBackward0>)
```
When there are masked tokens, the loss is computed correctly. But as we mask just 15% of the tokens, it can happen for short sequences that no tokens are masked (i.e. the labels are all -100), hence the loss is `nan`.
For long sequences and/or large batches the issue rarely happens, as the probability that no token is masked is rather low. But for short sequences with small batch sizes it happens fairly often. If you train large models, you often cannot increase your batch size, and short text sequences in your dataset can kill your process.
**Discussion**
If we want to fix this, we could fix in two possible ways:
1) Update the `DataCollatorForLanguageModeling` to make sure that at least 1 token per text is masked
2) Update the loss in `AutoModelForMaskedLM` so that the loss is 0 if no token is selected for masking (a sketch of this option follows).
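For option 2, a minimal sketch of what such a guard could look like (a hypothetical helper, not the actual model code):
```python
import torch
import torch.nn as nn

def masked_lm_loss(prediction_scores, labels, vocab_size):
    if (labels != -100).any():
        loss_fct = nn.CrossEntropyLoss()  # ignores -100 targets by default
        return loss_fct(prediction_scores.view(-1, vocab_size), labels.view(-1))
    # No token was masked: return a differentiable zero so DDP gradient
    # synchronization still works instead of propagating NaN.
    return prediction_scores.sum() * 0.0
```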
**Work around**
My current solution (for Pytorch) looks like this: If there is no token selected for masking, I select the first token (the first token after the CLS token). Not a perfect solution, but it solves the issue for me.
```python
class MyDataCollatorForLanguageModeling(DataCollatorForLanguageModeling):
def torch_mask_tokens(self, inputs, special_tokens_mask = None):
"""
Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
"""
import torch
labels = inputs.clone()
# We sample a few tokens in each sequence for MLM training (with probability `self.mlm_probability`)
probability_matrix = torch.full(labels.shape, self.mlm_probability)
if special_tokens_mask is None:
special_tokens_mask = [
self.tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
special_tokens_mask = torch.tensor(special_tokens_mask, dtype=torch.bool)
else:
special_tokens_mask = special_tokens_mask.bool()
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
# Nils added code: Make sure at least 1 token is masked
for idx in range(len(masked_indices)):
if not torch.any(masked_indices[idx]):
masked_indices[idx][1] = True
# /Nils added code
labels[~masked_indices] = -100 # We only compute loss on masked tokens
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
return inputs, labels
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16711/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16711/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16710
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16710/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16710/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16710/events
|
https://github.com/huggingface/transformers/issues/16710
| 1,200,372,910
|
I_kwDOCUB6oc5HjDyu
| 16,710
|
AutoConfig.from_pretrained can fail with Tokenizers
|
{
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The `AutoConfig` utility is a utility for models only, it's the tool one can use to instantiate a model. \r\n\r\nWhat would you like to do by instantiating a configuration for a tokenizer? The `AutoTokenizer` class should handle everything on its own.",
"Ahh I was under the impression that you needed to provide a separate `tokenizer-config.json`. Must've been an old example or someone overcomplicating their own usage. Thanks for clarifying!"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
`tokenizer_config = AutoConfig.from_pretrained("/path/to/tokenizer_config.json")` resolves the Auto Class by checking the `model_type` key, which is not always included for tokenizers; see e.g. https://huggingface.co/seyonec/ChemBERTa-zinc-base-v1/blob/main/tokenizer_config.json.
Perhaps `model_type` should be exported in `tokenizer_config.json` when running `trainer.save(/path/to)`?
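For reference, loading a tokenizer does not go through `AutoConfig` at all; as noted in the discussion above, `AutoTokenizer` resolves the class on its own:
```python
from transformers import AutoTokenizer

# No model_type key is needed in tokenizer_config.json for this to work.
tokenizer = AutoTokenizer.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")
```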
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16710/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16709
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16709/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16709/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16709/events
|
https://github.com/huggingface/transformers/pull/16709
| 1,200,332,235
|
PR_kwDOCUB6oc42BRzS
| 16,709
|
Add defensive check for config num_labels and id2label
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Might I suggest to add an example in the error message? Especially for regression where num_labels=1, it might not be obvious to users what the id2label map has to look like. Suggestion:\r\n\r\n```\r\nf\"You passed along `num_labels={kwargs['num_labels']}` with an incompatible ID to label map:\r\n{kwargs['id2label']}. The id2label map should be a dictionary of ID (int) to label (str). E.g., for\r\nnum_labels=1 (regression), it should look like this: {0: \"LABEL_0\"},\r\neven if you do not have any explicitly labelled data.\"\r\n```",
"I really don't understand why you want to pass both, as setting `num_labels=1` will do exactly that.",
"> I really don't understand why you want to pass both, as setting `num_labels=1` will do exactly that.\r\n\r\nExactly, but this comes back to the original issue that I posted. Passing num_labels=1, id2label=None causes issues, because id2label=None overwrites the generated id2label map. So the only thing that works is:\r\n\r\n```python\r\nconfig = BertConfig.from_pretrained(model_name_or_path, num_labels=1, id2label=None)\r\nconfig.num_labels = num_labels\r\n```\r\n\r\nor \r\n\r\n```python\r\nconfig = BertConfig.from_pretrained(model_name_or_path, num_labels=1, id2label={0: \"LABEL_0\"})\r\n```\r\n\r\nYet it is not obvious why\r\n\r\n```python\r\nconfig = BertConfig.from_pretrained(model_name_or_path, num_labels=1, id2label=None)\r\n```\r\n\r\ndoesn't work even though you don't strictly need labels for a regression problem, and even though id2label=None is the default argument.\r\n\r\nAnd yes, I am aware that _most users_ will not encounter this issue because they will not explicitly pass id2label=None, but that does not mean that it cannot happen. And if it does it should be made obvious to the user why something goes wrong. I often write code for different use-cases, and as my issue showed, you will encounter this issue if you need to write code for different num_labels/tasks dynamically. If users are not expected to write code like that, it doesn't hurt to tell them in the error message how they should write their code instead. \r\n\r\nYou are right though that my message seemed to imply that they _have_ to provide an id2label map. Suggestion:\r\n\r\nf\"You passed along `num_labels={kwargs['num_labels']}` with an incompatible ID to label map:\r\n{kwargs['id2label']}. If given (not required), the id2label map should be a dictionary of ID (int)\r\nto label (str). Note that explicitly setting id2label to None may lead to unexpected errors. \r\nInstead, do not pass the id2label argument at all or pass a dummy id2label with the same len \r\nas num_labels.\"",
"I adapted the error message slightly to insist on removing one of the incompatible kwarg."
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
As seen in #16600, there can be some unclear errors when the user passes an inconsistent `num_labels` and `id2label` together. This PR addresses that with a clear error message.
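For illustration, a sketch of the two cases the check distinguishes (the checkpoint name is just an example, and the exact error text may differ):
```python
from transformers import BertConfig

# Consistent: num_labels matches the id2label map.
cfg = BertConfig.from_pretrained("bert-base-uncased", num_labels=2,
                                 id2label={0: "NEG", 1: "POS"})

# Inconsistent: with this PR this now fails fast with a clear error
# instead of producing a confusing downstream failure.
cfg = BertConfig.from_pretrained("bert-base-uncased", num_labels=3,
                                 id2label={0: "NEG", 1: "POS"})
```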
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16709/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16709",
"html_url": "https://github.com/huggingface/transformers/pull/16709",
"diff_url": "https://github.com/huggingface/transformers/pull/16709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16709.patch",
"merged_at": 1649863700000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16708
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16708/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16708/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16708/events
|
https://github.com/huggingface/transformers/pull/16708
| 1,200,284,611
|
PR_kwDOCUB6oc42BKOA
| 16,708
|
Build docker images for tokenizers main branch
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16708). All of your documentation changes will be reflected on that endpoint."
] | 1,649
| 1,664
| null |
MEMBER
| null |
:warning: Do not merge this PR!
PR 1/2: in order to run the full test suite (slow tests included) with the `main` branch of the `tokenizers` library, rebase this PR on `main`. Once the workflows have finished, head to PR 2/2 here: https://github.com/huggingface/transformers/pull/16712
---
This PR is one of two items to run the full test suite for the tokenizers current `main` branch.
In order to re-run, rebuild the docker images, publish them to the docker hub, and rebase this PR on the `main` branch of this repository.
Steps done in order to create this PR:
- Edit the dockerfiles so that they successfully install `tokenizers` from source in the container
- Edit the `build-docker-images.yml` action to:
- Have it push these images to the Docker Hub.
- Remove all non important images
- Edit the identifier to contain `internal` as a prefix, and `tokenizers-main` as a suffix.
- Convert these images to private visibility so that they do not surprise users
These images will be built on each commit, so rebasing this branch on `main` will retrigger the workflow
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16708/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16708",
"html_url": "https://github.com/huggingface/transformers/pull/16708",
"diff_url": "https://github.com/huggingface/transformers/pull/16708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16708.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16707
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16707/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16707/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16707/events
|
https://github.com/huggingface/transformers/pull/16707
| 1,200,233,857
|
PR_kwDOCUB6oc42A_Yg
| 16,707
|
Private repo TrainingArgument
|
{
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# What does this PR do?
Creates a new argument for `TrainingArguments` called `hub_private_repo`. If True, the hub repo created by `Trainer` will be set to private. Defaults to False (public).
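A short usage sketch (the output directory is arbitrary):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my-model",
    push_to_hub=True,
    hub_private_repo=True,  # the repo Trainer creates on the Hub is private
)
```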
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16707/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16707",
"html_url": "https://github.com/huggingface/transformers/pull/16707",
"diff_url": "https://github.com/huggingface/transformers/pull/16707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16707.patch",
"merged_at": 1649698636000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16706
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16706/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16706/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16706/events
|
https://github.com/huggingface/transformers/pull/16706
| 1,200,205,878
|
PR_kwDOCUB6oc42A5cx
| 16,706
|
[from_pretrained] refactor find_mismatched_keys
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
This PR refactors 2 large identical code copies introduced by the recent sharded checkpoint PR into a helper function which is then called from 2 places. There is no change in functionality.
It's an intermediary step for this PR: https://github.com/huggingface/transformers/pull/16657 which revamps `low_cpu_mem_usage` and integrates it better with the sharded checkpoint code branch.
I explained here why the helper function is not a closure but needs the input args explicitly: https://github.com/huggingface/transformers/pull/16657#discussion_r846812714
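For illustration, a much simplified sketch of the extracted helper's shape (the real function in `modeling_utils.py` handles prefixes and more cases):
```python
def _find_mismatched_keys(state_dict, model_state_dict, loaded_keys,
                          ignore_mismatched_sizes):
    # Explicit arguments (rather than a closure) let both call sites --
    # the plain and the sharded checkpoint loading paths -- reuse it.
    mismatched_keys = []
    if not ignore_mismatched_sizes:
        return mismatched_keys
    for key in loaded_keys:
        if (key in model_state_dict
                and state_dict[key].shape != model_state_dict[key].shape):
            mismatched_keys.append(
                (key, state_dict[key].shape, model_state_dict[key].shape)
            )
            del state_dict[key]  # drop so it is not loaded with a wrong shape
    return mismatched_keys
```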
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16706/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16706",
"html_url": "https://github.com/huggingface/transformers/pull/16706",
"diff_url": "https://github.com/huggingface/transformers/pull/16706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16706.patch",
"merged_at": 1649850615000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16705
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16705/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16705/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16705/events
|
https://github.com/huggingface/transformers/issues/16705
| 1,200,202,665
|
I_kwDOCUB6oc5HiaOp
| 16,705
|
Cuda Memory leak (OOM) when using HF Trainer DDP mode
|
{
"login": "Smu-Tan",
"id": 79228128,
"node_id": "MDQ6VXNlcjc5MjI4MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/79228128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Smu-Tan",
"html_url": "https://github.com/Smu-Tan",
"followers_url": "https://api.github.com/users/Smu-Tan/followers",
"following_url": "https://api.github.com/users/Smu-Tan/following{/other_user}",
"gists_url": "https://api.github.com/users/Smu-Tan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Smu-Tan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Smu-Tan/subscriptions",
"organizations_url": "https://api.github.com/users/Smu-Tan/orgs",
"repos_url": "https://api.github.com/users/Smu-Tan/repos",
"events_url": "https://api.github.com/users/Smu-Tan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Smu-Tan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Smu-Tan \r\n\r\nHow were you able to get around this?"
] | 1,649
| 1,704
| 1,649
|
NONE
| null |
## Environment info
- `transformers` version: 4.12.0
- Platform: Linux-5.4.0-1073-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
## Description:
@patrickvonplaten @patil-suraj @sgugger
Hi! In a nutshell, I'm trying to train mBart for a seq2seq generation task using the huggingface transformers trainer with Distributed Data Parallel (DDP) mode, but I encountered a CUDA OOM error.
Specifically, with the same settings (batch_size, same data length, etc.), I can train it successfully on a single GPU, but I always encounter a CUDA OOM error when using DDP mode. I also tried decreasing the batch size to 1 and the input length to 50 (it was 256 for the encoder and 100 for the decoder), but still had the issue.
I run the code below by: `%sh OMP_NUM_THREADS=10 python -m torch.distributed.launch --nproc_per_node=4 Train_MBart_DDP.py`
## Code:
#### Libraries
```python
import json
import logging
import os

import mlflow
import nltk
import numpy as np
import pandas as pd
import torch
from mlflow.tracking import MlflowClient

from transformers import MBartForConditionalGeneration, MBartTokenizer
from transformers import Trainer, TrainingArguments
from transformers.models.bart.modeling_bart import shift_tokens_right
from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer
from datasets import Dataset
from datasets import load_dataset, load_metric

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(message)s')
os.environ["WANDB_DISABLED"] = "true"
metric = load_metric('/metrics/rouge/rouge.py')
```
#### set up MLflow and get local rank
```python
os.environ["DATABRICKS_HOST"] = "[MASKED]"
os.environ["DATABRICKS_TOKEN"] = "[MASKED]"
os.environ["WANDB_WATCH"] = "false"
os.environ["NCCL_DEBUG"] = "INFO"

# torch.distributed.launch sets LOCAL_RANK for each worker process
local_rank = int(os.environ["LOCAL_RANK"])

client = MlflowClient()
experiment = client.get_experiment([MASKED])
remote_server_uri = mlflow.tracking.get_tracking_uri()
mlflow.set_tracking_uri(remote_server_uri)
mlflow.set_experiment('[MASKED]/mBart_DDP')
```
#### get data
```python
def apply_process(row):
    # Build "question + top-5 unique passages" as the encoder input
    question, answers, ctxs = row
    answer = answers[0]
    candidates = np.array([d['text'] for d in row['ctxs']])
    candidates = pd.unique(candidates)
    candidates = ' '.join(candidates[:5].tolist())
    question_passage = question + ' ' + candidates
    return question_passage, answer

df_path = '/tmp/top200_output.json'
with open(df_path) as f:
    df = json.load(f)
dff = pd.DataFrame(df)
dff[['question_passage', 'answer']] = dff.apply(apply_process, axis=1, result_type="expand")
dff = dff[['question_passage', 'answer']]
```
#### get dataset and model
```python
def convert_to_features(dataset):
    # tokenizer/model are defined below; they are only used once map() runs
    input_encodings = tokenizer.batch_encode_plus(dataset['question_passage'], padding='max_length', max_length=256, truncation=True)
    target_encodings = tokenizer.batch_encode_plus(dataset['answer'], padding='max_length', max_length=100, truncation=True)

    labels = torch.tensor(target_encodings['input_ids'])
    decoder_input_ids = np.array(shift_tokens_right(labels, model.config.pad_token_id, 0))
    labels[labels[:, :] == model.config.pad_token_id] = -100  # ignore padding in the loss
    labels = np.array(labels)

    encodings = {
        'input_ids': input_encodings['input_ids'],
        'attention_mask': input_encodings['attention_mask'],
        'decoder_input_ids': decoder_input_ids,
        'labels': labels,
    }
    return encodings

tokenizer = MBartTokenizer.from_pretrained('/tmp/mbart-large-cc25', src_lang="en_XX", local_files_only=True)
model = MBartForConditionalGeneration.from_pretrained('/tmp/mbart-large-cc25', local_files_only=True)
model.config.decoder_start_token_id = tokenizer.lang_code_to_id["en_XX"]

dataset = Dataset.from_pandas(dff)
test = Dataset.from_dict(dataset[:10])
train = Dataset.from_dict(dataset[500:])

columns = ['input_ids', 'labels', 'decoder_input_ids', 'attention_mask']

test = test.map(convert_to_features, batched=True)
test.set_format(type='torch', columns=columns)
test = test.remove_columns(['question_passage', 'answer'])

train = train.map(convert_to_features, batched=True)
train.set_format(type='torch', columns=columns)
train = train.remove_columns(['question_passage', 'answer'])
```
#### set trainer
args = Seq2SeqTrainingArguments(
    "/tmp/bart_training",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    weight_decay=0.01,
    save_total_limit=1,
    num_train_epochs=3,
    predict_with_generate=True,
    gradient_accumulation_steps=4,
    disable_tqdm=False,
    dataloader_num_workers=10,
    fp16=True,
    local_rank=local_rank,  # pass the parsed int, not the raw env string
    do_train=True,
    do_eval=True,
    overwrite_output_dir=True,
    sharded_ddp='simple',
    dataloader_pin_memory=True,
    adafactor=True,
    skip_memory_metrics=True,
    ddp_find_unused_parameters=True,
    sortish_sampler=True,
    generation_max_length=50,
    gradient_checkpointing=False,
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Rouge expects a newline after each sentence
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
    logging.info(decoded_preds)
    logging.info('\n\n')
    logging.info(decoded_labels)
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Extract a few results
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    # Add mean generated length
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)
    return {k: round(v, 4) for k, v in result.items()}

trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=train,
    eval_dataset=test,
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
## The error:
#### Error:
0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO Bootstrap : Using eth0:10.232.244.83<0>
0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO NET/IB : No device found.
0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO NET/Socket : Using [0]eth0:10.232.244.83<0>
0220-221927-imgswodk-10-232-244-83:9106:9106 [2] NCCL INFO Using network Socket
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 00/02 : 0 1 2 3
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 01/02 : 0 1 2 3
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Setting affinity for GPU 0 to 0fff
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Setting affinity for GPU 1 to 0fff
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Setting affinity for GPU 2 to 0fff
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Setting affinity for GPU 3 to 0fff
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 00 : 3[400000] -> 0[100000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Channel 01 : 0[100000] -> 1[200000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 01 : 3[400000] -> 0[100000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 00 : 2[300000] -> 3[400000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 00 : 1[200000] -> 2[300000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 01 : 2[300000] -> 3[400000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 01 : 1[200000] -> 2[300000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Connected all rings
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Connected all rings
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 00 : 3[400000] -> 2[300000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Channel 01 : 3[400000] -> 2[300000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 00 : 2[300000] -> 1[200000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Channel 01 : 2[300000] -> 1[200000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO Connected all trees
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Connected all rings
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Connected all rings
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Channel 01 : 1[200000] -> 0[100000] via direct shared memory
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO Connected all trees
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO Connected all trees
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO Connected all trees
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/512
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
0220-221927-imgswodk-10-232-244-83:9105:9265 [1] NCCL INFO comm 0x7fba7c001240 rank 1 nranks 4 cudaDev 1 busId 200000 - Init COMPLETE
0220-221927-imgswodk-10-232-244-83:9106:9266 [2] NCCL INFO comm 0x7f6a70001240 rank 2 nranks 4 cudaDev 2 busId 300000 - Init COMPLETE
0220-221927-imgswodk-10-232-244-83:9104:9263 [0] NCCL INFO comm 0x7f4f40001240 rank 0 nranks 4 cudaDev 0 busId 100000 - Init COMPLETE
0220-221927-imgswodk-10-232-244-83:9107:9264 [3] NCCL INFO comm 0x7f0918001240 rank 3 nranks 4 cudaDev 3 busId 400000 - Init COMPLETE
0220-221927-imgswodk-10-232-244-83:9104:9104 [0] NCCL INFO Launch mode Parallel
***** Running training *****
Num examples = 500
Num Epochs = 3
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 96
0%| | 0/96 [00:00<?, ?it/s][W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
1%| | 1/96 [00:01<01:53, 1.19s/it]Traceback (most recent call last):
File "Train_MBart_reader_DDP.py", line 197, in <module>
Traceback (most recent call last):
File "Train_MBart_reader_DDP.py", line 197, in <module>
Traceback (most recent call last):
File "Train_MBart_reader_DDP.py", line 197, in <module>
Traceback (most recent call last):
File "Train_MBart_reader_DDP.py", line 197, in <module>
trainer.train()
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train
trainer.train()
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train
tr_loss_step = self.training_step(model, inputs)trainer.train()
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train
tr_loss_step = self.training_step(model, inputs)
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step
loss.backward()
File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
tr_loss_step = self.training_step(model, inputs)
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step
loss.backward()torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError : torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
CUDA out of memory. Tried to allocate 382.00 MiB (GPU 0; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free;
14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 2; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
loss.backward()
File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 3; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50 MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
trainer.train()
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train
tr_loss_step = self.training_step(model, inputs)
File "/databricks/python/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in training_step
loss.backward()
File "/databricks/python/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/databricks/python/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA out of memory. Tried to allocate 382.00 MiB (GPU 1; 15.78 GiB total capacity; 14.02 GiB already allocated; 339.50
MiB free; 14.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid
fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
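For reference, the allocator hint from the error message can be tried by setting the environment variable before any CUDA allocation happens, e.g. at the very top of the training script (a minimal sketch; the value 128 is an arbitrary assumption, not a recommendation):
```python
import os

# Must be set before the first CUDA allocation; the value is illustrative.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after setting the env var so the caching allocator picks it up
```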
## To reproduce
It should be reproducible with other data as well, e.g. the CNN/DailyMail summarization dataset.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16705/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/transformers/issues/16705/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16704
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16704/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16704/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16704/events
|
https://github.com/huggingface/transformers/pull/16704
| 1,200,186,753
|
PR_kwDOCUB6oc42A1VW
| 16,704
|
TF beam search: handle case without past
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(merging as Patrick has approved #16713, which also contains these changes)"
] | 1,649
| 1,649
| 1,649
|
MEMBER
| null |
# What does this PR do?
Fixes `tests/vision_encoder_decoder/test_modeling_tf_vision_encoder_decoder.py::TFViT2GPT2ModelIntegrationTest::test_inference_coco_en`, whose root cause was in `beam_search` (it was not correctly handling cases without cache). This one slipped through the cracks; I probably forgot to run a final check on this test file before merging -- my bad 😅
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16704/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16704/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16704",
"html_url": "https://github.com/huggingface/transformers/pull/16704",
"diff_url": "https://github.com/huggingface/transformers/pull/16704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16704.patch",
"merged_at": 1649792771000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16703
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16703/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16703/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16703/events
|
https://github.com/huggingface/transformers/pull/16703
| 1,200,153,762
|
PR_kwDOCUB6oc42Auuz
| 16,703
|
Don't push checkpoints to hub in `no_trainer` scripts
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# Don't push checkpoints to the Hub in `no_trainer` scripts
## What does this add?
- Creates a `.gitignore` file in the base folder if `push_to_hub` was passed and one does not already exist
- During each call to `save_state`, if `push_to_hub` was passed, the checkpoint directory is added to the `.gitignore`
## Why is it needed?
Users shouldn't have every intermediate checkpoint pushed to the Hub as well as saved locally; checkpoints should stay local until they are ready for the final save.
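An illustrative sketch of the described behavior (the function name and the `step_*` folder pattern are assumptions, not the script's exact code):
```python
import os

def ensure_checkpoints_ignored(output_dir: str, push_to_hub: bool) -> None:
    # Keep intermediate step_* checkpoint folders out of the Hub repo
    # until the user is ready for the final save.
    if not push_to_hub:
        return
    gitignore_path = os.path.join(output_dir, ".gitignore")
    if not os.path.exists(gitignore_path):
        open(gitignore_path, "w").close()
    with open(gitignore_path, "r+") as f:
        existing = f.read().splitlines()
        if "step_*" not in existing:
            f.write("step_*\n")
```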
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16703/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16703",
"html_url": "https://github.com/huggingface/transformers/pull/16703",
"diff_url": "https://github.com/huggingface/transformers/pull/16703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16703.patch",
"merged_at": 1649695365000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16702
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16702/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16702/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16702/events
|
https://github.com/huggingface/transformers/issues/16702
| 1,200,028,480
|
I_kwDOCUB6oc5HhvtA
| 16,702
|
group_texts function in language-modeling seems get wrong
|
{
"login": "zheyuye",
"id": 37728728,
"node_id": "MDQ6VXNlcjM3NzI4NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/37728728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zheyuye",
"html_url": "https://github.com/zheyuye",
"followers_url": "https://api.github.com/users/zheyuye/followers",
"following_url": "https://api.github.com/users/zheyuye/following{/other_user}",
"gists_url": "https://api.github.com/users/zheyuye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zheyuye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zheyuye/subscriptions",
"organizations_url": "https://api.github.com/users/zheyuye/orgs",
"repos_url": "https://api.github.com/users/zheyuye/repos",
"events_url": "https://api.github.com/users/zheyuye/events{/privacy}",
"received_events_url": "https://api.github.com/users/zheyuye/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am confused about the efficiency of this function. Does anyone test the performance of trained models by this way?",
"I got my except answer from https://github.com/huggingface/transformers/issues/10737, closing this issue."
] | 1,649
| 1,677
| 1,652
|
NONE
| null |
During preprocessing for language modeling, the function `group_texts` (https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py#L447-L460)
seems to ignore special tokens like [CLS] and [SEP] and just concatenates everything together. An obvious problem is that one example may then contain multiple [CLS] tokens, which confuses the model, and the [CLS] token is no longer guaranteed to be in the first position.
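For reference, the referenced `group_texts` roughly does the following (a paraphrased sketch; `max_seq_length` comes from the script's arguments):
```python
from itertools import chain

def group_texts(examples, max_seq_length):
    # Concatenate all tokenized sequences into one long stream per key.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the remainder so every block has exactly max_seq_length tokens.
    total_length = (total_length // max_seq_length) * max_seq_length
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated.items()
    }
```
Since blocks are cut purely by length, a `[CLS] ... [SEP] [CLS] ...` token stream is split wherever the arithmetic lands, which is exactly the behavior questioned above.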
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16702/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16701
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16701/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16701/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16701/events
|
https://github.com/huggingface/transformers/issues/16701
| 1,200,018,869
|
I_kwDOCUB6oc5HhtW1
| 16,701
|
Optional keys in TrainingArguments aren't always labelled as such
|
{
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger "
] | 1,649
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
Inside `TrainingArguments` there are a few keys such as [tf32](https://github.com/huggingface/transformers/blob/098b0026447271a340d2d7e6bff428c82cb6d744/src/transformers/training_args.py#L594) which are optional and have a default of `None`, but aren't explicitly labelled as such. This can cause problems downstream; for example, OmegaConf will complain that
```
omegaconf.errors.ValidationError: Non optional field cannot be assigned None
full_key: tf32
object_type=Seq2SeqTrainingArguments
```
The fix is simple: just make sure every key with a default of `None` has an `Optional` type annotation. I'm busy with another PR atm, but can have a look later.
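A minimal sketch of the fix (the class below is a stand-in, not the real `TrainingArguments`):
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SketchTrainingArguments:
    # A default of None requires an Optional[...] annotation; without it,
    # strict consumers such as OmegaConf reject assigning None to the field.
    tf32: Optional[bool] = field(
        default=None,
        metadata={"help": "Whether to enable TF32 mode (Ampere and newer GPUs)."},
    )
```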
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16701/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16700
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16700/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16700/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16700/events
|
https://github.com/huggingface/transformers/pull/16700
| 1,199,906,044
|
PR_kwDOCUB6oc41_52B
| 16,700
|
update decoder_vocab_size when resizing embeds
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
MEMBER
| null |
# What does this PR do?
- Update `config.decoder_vocab_size` when resizing embeddings if the encoder and decoder embeddings are shared.
- Use `config.decoder_vocab_size` to reshape the `lm_logits`.
Fixes #16670
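A rough sketch of the intent (the class and the `share_encoder_decoder_embeddings` flag are illustrative assumptions, not the model's actual code):
```python
class ResizableSeq2Seq:
    """Stand-in for a model whose encoder and decoder can share embeddings."""

    def __init__(self, config):
        self.config = config

    def resize_token_embeddings(self, new_num_tokens):
        # ... resize the shared embedding matrix here ...
        # When embeddings are shared, the decoder vocab size must follow,
        # because lm_logits are reshaped with config.decoder_vocab_size.
        if getattr(self.config, "share_encoder_decoder_embeddings", True):
            self.config.decoder_vocab_size = new_num_tokens
```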
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16700/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16700",
"html_url": "https://github.com/huggingface/transformers/pull/16700",
"diff_url": "https://github.com/huggingface/transformers/pull/16700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16700.patch",
"merged_at": 1649692931000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16699
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16699/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16699/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16699/events
|
https://github.com/huggingface/transformers/pull/16699
| 1,199,864,443
|
PR_kwDOCUB6oc41_w6C
| 16,699
|
Fix drop_path_rates argument passed to ConvNextStage
|
{
"login": "alex-coniasse",
"id": 60103599,
"node_id": "MDQ6VXNlcjYwMTAzNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/60103599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-coniasse",
"html_url": "https://github.com/alex-coniasse",
"followers_url": "https://api.github.com/users/alex-coniasse/followers",
"following_url": "https://api.github.com/users/alex-coniasse/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-coniasse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-coniasse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-coniasse/subscriptions",
"organizations_url": "https://api.github.com/users/alex-coniasse/orgs",
"repos_url": "https://api.github.com/users/alex-coniasse/repos",
"events_url": "https://api.github.com/users/alex-coniasse/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-coniasse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16699). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Thanks for pointing out, I'll remove the unclear `cur` variable and will fix it for the TF implementation as well.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing as this was fixed in #17280"
] | 1,649
| 1,654
| 1,654
|
NONE
| null |
# What does this PR do?
- Pass a sub-list of `drop_path_rates` to ConvNextStage instead of a single element.
- Fix the following issue:
Argument `drop_path_rates (List[float])` passed to `ConvNextStage` is a single float
when `drop_path_rate` is specified by the config.
Reproducer:
```python
from transformers import ConvNextModel, ConvNextConfig
configuration = ConvNextConfig(drop_path_rate=0.1)
model = ConvNextModel(configuration)
```
Throws:
```
Traceback (most recent call last):
File "repro-droprate.py", line 4, in <module>
model = ConvNextModel(configuration)
File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 311, in __init__
self.encoder = ConvNextEncoder(config)
File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 221, in __init__
drop_path_rates=drop_path_rates[cur],
File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 197, in __init__
*[ConvNextLayer(config, dim=out_channels, drop_path=drop_path_rates[j]) for j in range(depth)]
File "/home/alexandrec/.local/lib/python3.6/site-packages/transformers/models/convnext/modeling_convnext.py", line 197, in <listcomp>
*[ConvNextLayer(config, dim=out_channels, drop_path=drop_path_rates[j]) for j in range(depth)]
TypeError: 'float' object is not subscriptable
```
Reproduced with Transformers 4.18.0, Ubuntu 18.04 + Python 3.6.9.
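A sketch of the proposed fix: compute one stochastic-depth rate per layer and hand each stage its own slice rather than a single float (config values below are illustrative):
```python
import torch

depths = [3, 3, 9, 3]   # config.depths (illustrative)
drop_path_rate = 0.1    # config.drop_path_rate

# One rate per layer, increasing linearly across the whole network.
rates = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]

cur = 0
for i, depth in enumerate(depths):
    # Each ConvNextStage should receive this sub-list, not rates[cur] (a float).
    stage_rates = rates[cur : cur + depth]
    cur += depth
    print(f"stage {i}: {stage_rates}")
```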
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16699/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16699",
"html_url": "https://github.com/huggingface/transformers/pull/16699",
"diff_url": "https://github.com/huggingface/transformers/pull/16699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16699.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/16698
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16698/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16698/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16698/events
|
https://github.com/huggingface/transformers/pull/16698
| 1,199,805,133
|
PR_kwDOCUB6oc41_kCO
| 16,698
|
Fix TF_MASKED_LM_SAMPLE
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
Fix `TF_MASKED_LM_SAMPLE`: there is currently a dimension issue with `mask_token_index` and `predicted_token_id`, which gives different results between the PT and TF masked LM code samples:
PT: `paris`
TF: `p a r i s`
See below for details.
(This is related to #16523)
### PT_MASKED_LM_SAMPLE
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch
mask = "[MASK]"
checkpoint = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(f"{checkpoint}")
model = BertForMaskedLM.from_pretrained(f"{checkpoint}")
inputs = tokenizer(f"The capital of France is {mask}.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of {mask}
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
expected_output = tokenizer.decode(predicted_token_id)
print(mask_token_index) # tensor([8]): row dimension from `nonzero()`
print(predicted_token_id) # tensor([3000])
print(expected_output) # paris
```
### TF_MASKED_LM_SAMPLE (on `main`)
```python
from transformers import BertTokenizer, TFBertForMaskedLM
import tensorflow as tf
tokenizer = BertTokenizer.from_pretrained(f"{checkpoint}")
model = TFBertForMaskedLM.from_pretrained(f"{checkpoint}")
inputs = tokenizer(f"The capital of France is {mask}.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of {mask}
mask_token_index = tf.where(inputs.input_ids == tokenizer.mask_token_id)[0][1]
predicted_token_id = tf.math.argmax(logits[0, mask_token_index], axis=-1)
expected_output = tokenizer.decode(predicted_token_id)
print(mask_token_index) # tf.Tensor(8, shape=(), dtype=int64): no row dimension
print(predicted_token_id) # tf.Tensor(3000, shape=(), dtype=int64)
print(tokenizer.decode(predicted_token_id)) # p a r i s (not good)
```
### TF_MASKED_LM_SAMPLE (this PR)
```python
# retrieve index of {mask}
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
expected_output = tokenizer.decode(predicted_token_id)
print(mask_token_index) # tf.Tensor([[8]], shape=(1, 1), dtype=int64): with row dimension
print(predicted_token_id) # tf.Tensor([3000], shape=(1,), dtype=int64)
print(tokenizer.decode(predicted_token_id)) # paris
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16698/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16698",
"html_url": "https://github.com/huggingface/transformers/pull/16698",
"diff_url": "https://github.com/huggingface/transformers/pull/16698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16698.patch",
"merged_at": 1649693968000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16697
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16697/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16697/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16697/events
|
https://github.com/huggingface/transformers/issues/16697
| 1,199,655,313
|
I_kwDOCUB6oc5HgUmR
| 16,697
|
ViLT vs VIT Classifier heads question
|
{
"login": "PrithivirajDamodaran",
"id": 7071019,
"node_id": "MDQ6VXNlcjcwNzEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7071019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrithivirajDamodaran",
"html_url": "https://github.com/PrithivirajDamodaran",
"followers_url": "https://api.github.com/users/PrithivirajDamodaran/followers",
"following_url": "https://api.github.com/users/PrithivirajDamodaran/following{/other_user}",
"gists_url": "https://api.github.com/users/PrithivirajDamodaran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PrithivirajDamodaran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PrithivirajDamodaran/subscriptions",
"organizations_url": "https://api.github.com/users/PrithivirajDamodaran/orgs",
"repos_url": "https://api.github.com/users/PrithivirajDamodaran/repos",
"events_url": "https://api.github.com/users/PrithivirajDamodaran/events{/privacy}",
"received_events_url": "https://api.github.com/users/PrithivirajDamodaran/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, thanks for your interest in ViLT. \r\n\r\nThat was the decision of the authors. Maybe you can ask them 😉 ",
"Ok :-)"
] | 1,649
| 1,650
| 1,650
|
NONE
| null |
Is there any specific reason why the classifier head on the ViLT model for tasks such as `ViltForImagesAndTextClassification` or `ViltForQuestionAnswering` has a `LayerNorm` and `GELU` rather than just linear input and output layers (like below), whereas the classifier head on ViT for `ViTForImageClassification` has only a linear layer?
Please advise.
i.e. this:
```python
# Classifier head
self.classifier = nn.Sequential(
    nn.Linear(config.hidden_size, config.hidden_size * 2),
    nn.LayerNorm(config.hidden_size * 2),
    nn.GELU(),
    nn.Linear(config.hidden_size * 2, config.num_labels),
)
```
and NOT this?
```python
# Classifier head
self.classifier = nn.Linear(config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity()
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16697/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16696
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16696/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16696/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16696/events
|
https://github.com/huggingface/transformers/pull/16696
| 1,199,653,455
|
PR_kwDOCUB6oc41_DKk
| 16,696
|
Handle image_embeds in ViltModel
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for improving this!\r\n> \r\n> Out of interest: were you experimenting with ViLT?\r\n\r\nNot for this PR. I tried to fix a CI (vit-mae), which was about `test_torchscript`.\r\nIt turns out to be related to model main input -> I worked/improved on it -> more models involved including ViLT -> I just took this chance to work on this PR (otherwise I would forget it very quickly)"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
Handle `image_embeds` in `ViltModel` / `ViltForImagesAndTextClassification`.
(It looks like ViLT is the first model to introduce the `image_embeds` argument.)
## More Info
As far as I understand, `image_embeds` in `ViltForImagesAndTextClassification` should have a `num_images` dimension, just as `pixel_values` does.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16696/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16696",
"html_url": "https://github.com/huggingface/transformers/pull/16696",
"diff_url": "https://github.com/huggingface/transformers/pull/16696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16696.patch",
"merged_at": 1649708180000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16695
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16695/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16695/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16695/events
|
https://github.com/huggingface/transformers/issues/16695
| 1,199,621,980
|
I_kwDOCUB6oc5HgMdc
| 16,695
|
Enable ONNX support for multiple-choice classification heads
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "echarlaix",
"id": 80481427,
"node_id": "MDQ6VXNlcjgwNDgxNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/80481427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echarlaix",
"html_url": "https://github.com/echarlaix",
"followers_url": "https://api.github.com/users/echarlaix/followers",
"following_url": "https://api.github.com/users/echarlaix/following{/other_user}",
"gists_url": "https://api.github.com/users/echarlaix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echarlaix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echarlaix/subscriptions",
"organizations_url": "https://api.github.com/users/echarlaix/orgs",
"repos_url": "https://api.github.com/users/echarlaix/repos",
"events_url": "https://api.github.com/users/echarlaix/events{/privacy}",
"received_events_url": "https://api.github.com/users/echarlaix/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "echarlaix",
"id": 80481427,
"node_id": "MDQ6VXNlcjgwNDgxNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/80481427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echarlaix",
"html_url": "https://github.com/echarlaix",
"followers_url": "https://api.github.com/users/echarlaix/followers",
"following_url": "https://api.github.com/users/echarlaix/following{/other_user}",
"gists_url": "https://api.github.com/users/echarlaix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echarlaix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echarlaix/subscriptions",
"organizations_url": "https://api.github.com/users/echarlaix/orgs",
"repos_url": "https://api.github.com/users/echarlaix/repos",
"events_url": "https://api.github.com/users/echarlaix/events{/privacy}",
"received_events_url": "https://api.github.com/users/echarlaix/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,652
| 1,652
|
MEMBER
| null |
# 🚀 Feature request
Currently, the `transformers.onnx` package doesn't support the export of models with a multiple-choice classification head, i.e. all `ModelForMultipleChoice` classes. We should enable this to provide full coverage of our current exports.
Implementing this involves (a rough sketch of the dummy inputs follows the list):
* Adding a `multiple-choice` feature to the `FeaturesManager`
* Generating the appropriate dummy inputs
* Updating the features of all existing models which have a corresponding head
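The dummy inputs a multiple-choice export needs have an extra choice dimension, namely `(batch_size, num_choices, seq_length)`; the flatten-then-reshape below mirrors how multiple-choice heads consume inputs (checkpoint and sizes are illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_size, num_choices, seq_length = 2, 4, 16

# Tokenize batch_size * num_choices prompts, then restore the choice dimension.
prompts = [["dummy input"] * num_choices] * batch_size
flat = [p for choice_set in prompts for p in choice_set]
encoded = tokenizer(flat, padding="max_length", max_length=seq_length, return_tensors="pt")
dummy_inputs = {k: v.view(batch_size, num_choices, -1) for k, v in encoded.items()}
print({k: v.shape for k, v in dummy_inputs.items()})
```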
cc @michaelbenayoun
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16695/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16694
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16694/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16694/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16694/events
|
https://github.com/huggingface/transformers/issues/16694
| 1,199,613,729
|
I_kwDOCUB6oc5HgKch
| 16,694
|
Question: Add Embedding layer to BERT
|
{
"login": "KyungHyunLim",
"id": 72729802,
"node_id": "MDQ6VXNlcjcyNzI5ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/72729802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KyungHyunLim",
"html_url": "https://github.com/KyungHyunLim",
"followers_url": "https://api.github.com/users/KyungHyunLim/followers",
"following_url": "https://api.github.com/users/KyungHyunLim/following{/other_user}",
"gists_url": "https://api.github.com/users/KyungHyunLim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KyungHyunLim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KyungHyunLim/subscriptions",
"organizations_url": "https://api.github.com/users/KyungHyunLim/orgs",
"repos_url": "https://api.github.com/users/KyungHyunLim/repos",
"events_url": "https://api.github.com/users/KyungHyunLim/events{/privacy}",
"received_events_url": "https://api.github.com/users/KyungHyunLim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"self.token_type_embeddings(entity_loc_ids)\r\n-> self.entity_type_embeddings(entity_loc_ids)\r\n\r\nthere was typo"
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
I am going to add an extra embedding to the BERT embedding layer.
```python
import torch
from torch import nn
from packaging import version  # imports needed for this snippet

class CustomBertEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
        self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
        ############# THIS PART ################
        # note: max_norm expects a float; True is interpreted as 1.0
        self.entity_type_embeddings = nn.Embedding(3, config.hidden_size, max_norm=True)
        ############# THIS PART ################
        # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
        # any TensorFlow checkpoint file
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        # position_ids (1, len position emb) is contiguous in memory and exported when serialized
        self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
        self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
        if version.parse(torch.__version__) > version.parse("1.6.0"):
            self.register_buffer(
                "token_type_ids",
                torch.zeros(self.position_ids.size(), dtype=torch.long),
                persistent=False,
            )
```
This layer works properly as long as `entity_loc_ids` contains only the two values 0 and 1.
However, I want to use three values (0, 1, 2), and when I do, the following error occurs.
```
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [3,0,0]
...
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [573,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
I don't understand why: all I do is add a new embedding vector, as shown below.
```python
entity_type_embeddings = self.token_type_embeddings(entity_loc_ids)
embeddings = inputs_embeds + token_type_embeddings + entity_type_embeddings
```
Can anyone help me?
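For context, the device-side assert `srcIndex < srcSelectDimSize` is CUDA's version of an out-of-range embedding lookup; a minimal CPU reproduction (sizes are illustrative):
```python
import torch
from torch import nn

emb = nn.Embedding(2, 8)       # 2 rows, so the only valid indices are 0 and 1
ids = torch.tensor([0, 1, 2])  # index 2 is out of range
try:
    emb(ids)
except IndexError as e:
    print(e)  # "index out of range in self" -- the CUDA assert is the GPU analogue
```
Feeding the value 2 into `token_type_embeddings` (which has only `config.type_vocab_size = 2` rows) triggers exactly this, which matches the typo identified in the comments.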
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16694/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16693
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16693/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16693/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16693/events
|
https://github.com/huggingface/transformers/pull/16693
| 1,199,543,698
|
PR_kwDOCUB6oc41-ylW
| 16,693
|
Rename the method test_torchscript
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
`class ModelTesterMixin` has an attribute `test_torchscript` as well as a method named `def test_torchscript`.
In a few model specific test files, we have `test_torchscript = True` defined, for example,
`T5ModelTest`:
https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/tests/t5/test_modeling_t5.py#L515
`DistilBertModelTest`:
https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/tests/distilbert/test_modeling_distilbert.py#L214
This actually makes `test_torchscript` **a boolean value** instead of a method, so `test_torchscript` is **not run** in these classes. See the last section for a dummy example.
## Fix
Although this could be fixed just by removing `test_torchscript = True`, it is a better idea to rename the method `def test_torchscript` to something else, like `def test_torchscript_simple`.
## Dummy example (to reproduce the issue)
```python
class DummyCommonTest:
    test_me = True  # immediately shadowed by the method defined below

    def test_me(self):
        a = 3
        print(a)


class DummyModelTest1(DummyCommonTest):
    pass


class DummyModelTest2(DummyCommonTest):
    test_me = True  # shadows the inherited method with a boolean


dummy_test = DummyCommonTest()
print(dummy_test.test_me)
dummy_test.test_me()

dummy_model_test_1 = DummyModelTest1()
# A method
print(dummy_model_test_1.test_me)
# can be called
dummy_model_test_1.test_me()

dummy_model_test_2 = DummyModelTest2()
# A boolean
print(dummy_model_test_2.test_me)
# can't be called: (TypeError: 'bool' object is not callable)
dummy_model_test_2.test_me()
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16693/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16693",
"html_url": "https://github.com/huggingface/transformers/pull/16693",
"diff_url": "https://github.com/huggingface/transformers/pull/16693.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16693.patch",
"merged_at": 1649694105000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16692
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16692/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16692/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16692/events
|
https://github.com/huggingface/transformers/issues/16692
| 1,199,529,016
|
I_kwDOCUB6oc5Hf1w4
| 16,692
|
ViLT Fine-tuning Bug: ValueError: operands could not be broadcast together with shapes
|
{
"login": "ZhihaoZhang97",
"id": 31653817,
"node_id": "MDQ6VXNlcjMxNjUzODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31653817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhihaoZhang97",
"html_url": "https://github.com/ZhihaoZhang97",
"followers_url": "https://api.github.com/users/ZhihaoZhang97/followers",
"following_url": "https://api.github.com/users/ZhihaoZhang97/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhihaoZhang97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhihaoZhang97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhihaoZhang97/subscriptions",
"organizations_url": "https://api.github.com/users/ZhihaoZhang97/orgs",
"repos_url": "https://api.github.com/users/ZhihaoZhang97/repos",
"events_url": "https://api.github.com/users/ZhihaoZhang97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhihaoZhang97/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThanks for your interest in ViLT! Could you try adding `.convert(\"RGB\")` when reading an image in the getitem method of the PyTorch dataset?",
"> Hi,\r\n> \r\n> Thanks for your interest in ViLT! Could you try adding `.convert(\"RGB\")` when reading an image in the getitem method of the PyTorch dataset?\r\n\r\nThank you for your quick reply! I will have a try now.",
"@NielsRogge Problem Solved! Thank you so much! "
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
## Environment info
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): GPU
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Models:
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, ViLT, BEiT, DEiT, DETR, CANINE: @NielsRogge
Library:
- Vision: @NielsRogge, @sgugger
## Information
The model I am using is **ViLT**.
The problem arises when using:
* ViLT official example scripts: (give details below)
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViLT/Fine_tuning_ViLT_for_VQA.ipynb
The task I am working on is:
* Fine-tuning ViLT for VQA on the complete VQAv2 validation dataset
## To reproduce
Steps to reproduce the behavior:
1. Download all the required dependencies
2. Change the data size in **Cell [29]** from **questions=questions[:100]** to **questions=questions[0:]** and **annotations=annotations[:100]** to **annotations=annotations[0:]**
3. Run the example notebook
**Replace the code below in the example notebook Cell [29] to reproduce the behavior**
```
from transformers import ViltProcessor
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
dataset = VQADataset(questions=questions[0:],
                     annotations=annotations[0:],
                     processor=processor)
```
**Error Message is shown below:**
```
1%|▌ | 236/26795 [01:31<2:52:07, 2.57it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [29], in <cell line: 4>()
4 for epoch in range(50): # loop over the dataset multiple times
5 print(f"Epoch: {epoch}")
----> 6 for batch in tqdm(train_dataloader, total=len(train_dataloader)):
7 # get the inputs;
8 batch = {k:v.to(device) for k,v in batch.items()}
10 # zero the parameter gradients
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/tqdm/std.py:1195, in tqdm.__iter__(self)
1192 time = self._time
1194 try:
-> 1195 for obj in iterable:
1196 yield obj
1197 # Update and possibly print the progressbar.
1198 # Note: does not call self.update(1) for speed optimisation.
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/dataloader.py:530, in _BaseDataLoaderIter.__next__(self)
528 if self._sampler_iter is None:
529 self._reset()
--> 530 data = self._next_data()
531 self._num_yielded += 1
532 if self._dataset_kind == _DatasetKind.Iterable and \
533 self._IterableDataset_len_called is not None and \
534 self._num_yielded > self._IterableDataset_len_called:
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/dataloader.py:570, in _SingleProcessDataLoaderIter._next_data(self)
568 def _next_data(self):
569 index = self._next_index() # may raise StopIteration
--> 570 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
571 if self._pin_memory:
572 data = _utils.pin_memory.pin_memory(data)
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:49, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
47 def fetch(self, possibly_batched_index):
48 if self.auto_collation:
---> 49 data = [self.dataset[idx] for idx in possibly_batched_index]
50 else:
51 data = self.dataset[possibly_batched_index]
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:49, in <listcomp>(.0)
47 def fetch(self, possibly_batched_index):
48 if self.auto_collation:
---> 49 data = [self.dataset[idx] for idx in possibly_batched_index]
50 else:
51 data = self.dataset[possibly_batched_index]
Input In [19], in VQADataset.__getitem__(self, idx)
19 image = Image.open(id_to_filename[annotation['image_id']])
20 text = questions['question']
---> 22 encoding = self.processor(image, text, padding="max_length", truncation=True, return_tensors="pt")
23 # remove batch dimension
24 for k,v in encoding.items():
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/models/vilt/processing_vilt.py:91, in ViltProcessor.__call__(self, images, text, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, return_tensors, **kwargs)
72 encoding = self.tokenizer(
73 text=text,
74 add_special_tokens=add_special_tokens,
(...)
88 **kwargs,
89 )
90 # add pixel_values + pixel_mask
---> 91 encoding_feature_extractor = self.feature_extractor(images, return_tensors=return_tensors)
92 encoding.update(encoding_feature_extractor)
94 return encoding
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/models/vilt/feature_extraction_vilt.py:265, in ViltFeatureExtractor.__call__(self, images, pad_and_return_pixel_mask, return_tensors, **kwargs)
254 images = [
255 self._resize(
256 image=image,
(...)
262 for image in images
263 ]
264 if self.do_normalize:
--> 265 images = [self.normalize(image=image, mean=self.image_mean, std=self.image_std) for image in images]
267 if pad_and_return_pixel_mask:
268 # pad images up to largest image in batch and create pixel_mask
269 max_size = self._max_by_axis([list(image.shape) for image in images])
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/models/vilt/feature_extraction_vilt.py:265, in <listcomp>(.0)
254 images = [
255 self._resize(
256 image=image,
(...)
262 for image in images
263 ]
264 if self.do_normalize:
--> 265 images = [self.normalize(image=image, mean=self.image_mean, std=self.image_std) for image in images]
267 if pad_and_return_pixel_mask:
268 # pad images up to largest image in batch and create pixel_mask
269 max_size = self._max_by_axis([list(image.shape) for image in images])
File ~/anaconda3/envs/hugging-face/lib/python3.8/site-packages/transformers/image_utils.py:186, in ImageFeatureExtractionMixin.normalize(self, image, mean, std)
184 return (image - mean[:, None, None]) / std[:, None, None]
185 else:
--> 186 return (image - mean) / std
ValueError: operands could not be broadcast together with shapes (384,576) (3,)
```
## Expected behavior
The ViLT model should be fine-tuned on the provided dataset without any error.
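For reference, the fix suggested in the comments above, as a minimal sketch of the dataset's `__getitem__` (hedged: names such as `id_to_filename` are taken from the traceback, and the notebook's label handling is omitted):
```python
from PIL import Image

def __getitem__(self, idx):
    annotation = self.annotations[idx]
    question = self.questions[idx]
    # Some VQAv2 images are grayscale, i.e. shape (H, W) with no channel axis;
    # .convert("RGB") gives them the 3 channels that normalize's 3-element
    # mean/std expects, avoiding the broadcast error above.
    image = Image.open(id_to_filename[annotation["image_id"]]).convert("RGB")
    encoding = self.processor(image, question["question"], padding="max_length",
                              truncation=True, return_tensors="pt")
    # remove the batch dimension added by the processor
    return {k: v.squeeze() for k, v in encoding.items()}
```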
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16692/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16691
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16691/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16691/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16691/events
|
https://github.com/huggingface/transformers/pull/16691
| 1,199,516,161
|
PR_kwDOCUB6oc41-szf
| 16,691
|
Reduce the memory leak caused by `torch.jit.trace`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
Reduce the memory leak caused by `torch.jit.trace`.
Without this fix, each call to `_create_and_check_torchscript` increases RAM usage by ~20MB.
(Even with this fix, there is still a memory leak of ~0.04MB per call.)
## Remark
Since our torchscript tests are slow tests, they are run only in scheduled CI (where the test jobs are organized by model), so this memory leak is not critical: memory is released once each job process exits.
However, for PRs like #16679, I need to make sure the modified tests pass, and when I ran them in a GCP VM I hit an OOM issue.
So I think the change in this PR still has value.
## More information
The method is copied from `torch`
https://github.com/pytorch/pytorch/blob/bcf6974c207ac0339bfb8bdfdb0b0ec348f7a22f/torch/testing/_internal/jit_utils.py#L59
It is called in torch's tests, including
https://github.com/pytorch/pytorch/blob/bcf6974c207ac0339bfb8bdfdb0b0ec348f7a22f/test/test_jit.py#L12962
```python
# clear the class registry as we will be defining foo multiple times
jit_utils.clear_class_registry()
```
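For context, a sketch of what such a cleanup helper could look like on the transformers side (hedged: the helper name is illustrative, and these are underscored torch internals copied from the helper linked above, whose availability varies across torch versions):
```python
import torch

def clear_torch_jit_class_registry():
    # Drop the class records that repeated torch.jit.trace calls accumulate.
    torch._C._jit_clear_class_registry()
    torch.jit._recursive.concrete_type_store = torch.jit._recursive.ConcreteTypeStore()
    # Not present in older torch versions (e.g. 1.8), hence the guard.
    if hasattr(torch.jit._state, "_clear_class_state"):
        torch.jit._state._clear_class_state()
```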
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16691/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16691",
"html_url": "https://github.com/huggingface/transformers/pull/16691",
"diff_url": "https://github.com/huggingface/transformers/pull/16691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16691.patch",
"merged_at": 1649694148000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16690
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16690/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16690/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16690/events
|
https://github.com/huggingface/transformers/issues/16690
| 1,199,441,600
|
I_kwDOCUB6oc5HfgbA
| 16,690
|
A question about the position of language indicator tokens of mBART
|
{
"login": "speedcell4",
"id": 3585459,
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedcell4",
"html_url": "https://github.com/speedcell4",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I noticed there is [a relevant issue](https://github.com/huggingface/transformers/issues/16583), @patil-suraj has mentioned\r\n> In BART `eos` (`</s>`) token is used as the `decoder_start_token_id`\r\n\r\nBut why do we need `</s>` to be the `decoder_start_token_id`? In my understanding of the original paper, the `<TGT_LANG>` plays this role, according to Figure 1 again.",
"Hi! In the code snippet you are using `mbart-50` tokenizer, which uses a different format than mbart-25 (the model mentioned in the paper) as explained in the [doc](https://huggingface.co/docs/transformers/model_doc/mbart#mbart-and-mbart50). If you use the mbart-25 tokenizer in your code you'll see that that it follows the format from the paper.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained('facebook/mbart-large-cc25', src_lang='en_XX', tgt_lang='ro_RO')\r\n```\r\n\r\nAlso \r\n\r\n> But why do we need </s> to be the decoder_start_token_id? In my understanding of the original paper, the <TGT_LANG> plays this role, according to Figure 1 again.\r\n\r\nThis is an artifact of the `fairseq` repo. In fairseq the `labels` are shifted to the right to get the `decoder_input_ids`, which makes the `eos` token the first token in the sequence.",
"@patil-suraj Thanks for your reply.\r\n\r\nI tried `mbart-large-cc25` as you mentioned as the following,\r\n\r\n```python\r\nfrom transformers import MBartForConditionalGeneration, AutoTokenizer\r\nfrom transformers.models.mbart.modeling_mbart import shift_tokens_right\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n 'facebook/mbart-large-cc25',\r\n src_lang='en_XX',\r\n tgt_lang='ro_RO',\r\n)\r\n\r\nmodel = MBartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')\r\n\r\nprint(tokenizer.tokenize('UN Chief Says There Is No Military Solution in Syria', add_special_tokens=True))\r\n# ['▁UN', '▁Chief', '▁Say', 's', '▁There', '▁Is', '▁No', '▁Militar', 'y', '▁Solution', '▁in', '▁Syria', '</s>', 'en_XX']\r\n\r\nsrc = tokenizer('UN Chief Says There Is No Military Solution in Syria', return_tensors='pt')\r\nwith tokenizer.as_target_tokenizer():\r\n tgt = tokenizer('Şeful ONU declară că nu există o soluţie militară în Siria', return_tensors='pt')\r\n\r\nhidden = model.forward(\r\n input_ids=src['input_ids'],\r\n attention_mask=src['attention_mask'],\r\n decoder_input_ids=shift_tokens_right(tgt['input_ids'], pad_token_id=tokenizer.pad_token_id),\r\n decoder_attention_mask=tgt['attention_mask'],\r\n)\r\n\r\nout = hidden.logits.argmax(dim=-1)\r\nprint(tokenizer.convert_ids_to_tokens(out[0].detach().tolist()))\r\n# ['<s>', 'f', '▁of', '▁Say', '▁Say', 'ry', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Say', '▁Chief']\r\n```\r\n\r\nYes, the tokenizer works fine, the `en_XX` is put at the end of the sentence. However, the predictions from `hidden` are quite different from the ground truth. Especially, the prediction starts with a `<s>` and ends with a `▁Chief`, we are not supposed to use `<s>`, are we?\r\nI am not sure if I did something wrong.",
"Thanks anyway."
] | 1,649
| 1,650
| 1,650
|
NONE
| null |
## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- MBART, BART: @patil-suraj
## Information
<img width="655" alt="image" src="https://user-images.githubusercontent.com/3585459/162672533-17804a89-a22e-4716-bcea-bc89ea9c1dde.png">
According to Figure 1 of [the mBART paper](https://arxiv.org/pdf/2001.08210.pdf), I think,
* `input_ids = [s1, s2, ..., sn, </s>, <SRC_LANG>]`
* `decoder_input_ids = [<TGT_LANG>, d1, d2, ..., dm, </s>]`
* `labels = [d1, d2, ..., dm, </s>, <TGT_LANG>]`
Then, I checked the output of the mBART tokenizer,
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('facebook/mbart-large-50-many-to-many-mmt', src_lang='en_XX', tgt_lang='ro_RO')
src = tokenizer.tokenize('UN Chief Says There Is No Military Solution in Syria', add_special_tokens=True)
# ['en_XX', '▁UN', '▁Chief', '▁Say', 's', '▁There', '▁Is', '▁No', '▁Militar', 'y', '▁Solution', '▁in', '▁Syria', '</s>']
with tokenizer.as_target_tokenizer():
    tgt = tokenizer.tokenize('Şeful ONU declară că nu există o soluţie militară în Siria', add_special_tokens=True)
    # ['ro_RO', '▁Şe', 'ful', '▁ONU', '▁de', 'cla', 'ră', '▁că', '▁nu', '▁există', '▁o', '▁solu', 'ţie', '▁militar', 'ă', '▁în', '▁Siria', '</s>']
```
It seems the language indicator token is placed at the beginning of each sentence. However, I found that `BartForConditionalGeneration` shifts the `labels` and feeds them as the `decoder_input_ids` to the decoder while leaving the `input_ids` unchanged.
https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/src/transformers/models/bart/modeling_bart.py#L1343-L1346
Isn't this procedure changing the sequences to the following?
* `input_ids = [<SRC_LANG>, s1, s2, ..., sn, </s>]`
* `decoder_input_ids = [</s>, <TGT_LANG>, d1, d2, ..., dm]`
* `labels = [<TGT_LANG>, d1, d2, ..., dm, </s>]`
I think this is quite different from the original paper. Is this implementation correct?
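For concreteness, here is a sketch of the right shift under discussion (a close paraphrase of the `shift_tokens_right` linked above; treat the details as an assumption):
```python
import torch

def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    # labels:            [<TGT_LANG>, d1, ..., dm, </s>]
    # decoder_input_ids: [</s>, <TGT_LANG>, d1, ..., dm]
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id  # </s> for (m)BART
    # label positions marked -100 (ignored by the loss) must become pad ids
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```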
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16690/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16689
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16689/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16689/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16689/events
|
https://github.com/huggingface/transformers/issues/16689
| 1,199,218,575
|
I_kwDOCUB6oc5Hep-P
| 16,689
|
`"histogram_cpu" not implemented for 'BFloat16'` when using deepspeed and reporting to wandb
|
{
"login": "jncasey",
"id": 31020859,
"node_id": "MDQ6VXNlcjMxMDIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jncasey",
"html_url": "https://github.com/jncasey",
"followers_url": "https://api.github.com/users/jncasey/followers",
"following_url": "https://api.github.com/users/jncasey/following{/other_user}",
"gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jncasey/subscriptions",
"organizations_url": "https://api.github.com/users/jncasey/orgs",
"repos_url": "https://api.github.com/users/jncasey/repos",
"events_url": "https://api.github.com/users/jncasey/events{/privacy}",
"received_events_url": "https://api.github.com/users/jncasey/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @jncasey, thanks for the report.\r\n\r\n> since everything is working without deepspeed, maybe there's something different about how the deepspeed integration is reporting to wandb?\r\n\r\nCould you check that the script still breaks when you set `\"bf16\"` to `\"enabled\": false` in the DeepSpeed config? Also, a full traceback would be helpful!",
"Updating the DeepSpeed config to set `\"bf16\" { \"enabled\": false }` raises this error, as one might expect:\r\n```\r\nValueError: Please correct the following DeepSpeed config values that mismatch TrainingArguments values:\r\n- ds bf16.enabled=False vs hf bf16|bf16_full_eval=True\r\n```\r\nSetting `\"bf16\" { \"enabled\": \"auto\" }` yields the original error I reported.\r\n\r\nHere's the full traceback:\r\n```\r\n 0%|▎ | 499/107670 [03:39<13:13:26, 2.25it/s]Traceback (most recent call last):\r\n File \"./bin/run_summarization.py\", line 706, in <module>\r\n main()\r\n File \"./bin/run_summarization.py\", line 625, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/transformers/trainer.py\", line 1422, in train\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/transformers/trainer.py\", line 2027, in training_step\r\n loss = self.deepspeed.backward(loss)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/runtime/engine.py\", line 1667, in backward\r\n self.optimizer.backward(loss)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py\", line 1921, in backward\r\n self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py\", line 53, in backward\r\n scaled_loss.backward(retain_graph=retain_graph)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/_tensor.py\", line 363, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/autograd/__init__.py\", line 173, in backward\r\n Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/wandb/wandb_torch.py\", line 266, in <lambda>\r\n handle = var.register_hook(lambda grad: _callback(grad, log_track))\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/wandb/wandb_torch.py\", line 264, in _callback\r\n self.log_tensor_stats(grad.data, name)\r\n File \"/opt/miniconda3/envs/hf/lib/python3.8/site-packages/wandb/wandb_torch.py\", line 215, in log_tensor_stats\r\n tensor = flat.histc(bins=self._num_bins, min=tmin, max=tmax)\r\nRuntimeError: \"histogram_cpu\" not implemented for 'BFloat16'\r\n```",
"Please help me understand this issue - why are we discussing a problem in wandb at `transformers`? Clearly wandb can't handle bf16 inputs in at least one code path - what does it have to do with deepspeed or transformers?\r\n\r\nThe next logical step is to either have `wandb` workaround bf16 if it can't handle it and not use `histogram_cpu` for bf16 inputs or ask pytorch to implement it for bf16/cpu.",
"Hi Stas,\r\n\r\nMy thought was that since the same script can report to wandb using bf16 when not using DeepSpeed, there might be something different in how the DeepSpeed integration handles the reporting, and it might be possible to avoid the problem the way the non-DeepSpeed run does. \r\n\r\nBut I admit I'm out of my depth here, so maybe m thinking is flawed and there's nothing to be done on the transformers side.",
"The difference is that normally w/o deepspeed you're using bf16/amp, which keeps the model and activations in fp32 and downcasts them to bf16 when needed.\r\n\r\nDeepspeed doesn't use amp and uses a different approach where it keeps the model and activations in the half precision mode from the get going (fp16 or bf16) (but keeps a fp32 weights copy in its optimizer), and so it trips wandb's code which doesn't expect bf16 tensors.\r\n\r\nIt's not something that can be changed in Deepspeed or the HF integration.\r\n\r\nPossible solutions:\r\n\r\nFor example, if the bf16 input < 64k, wandb could make a copy and safely convert it to fp16 and run `histogram_cpu` on it (I assume it supports fp16), if not perhaps it could do the processing on gpu.\r\n\r\nBut the simplest solution is to request pytorch to implement `histogram_cpu` for `BFloat16` inputs, which you or the wandb folks can ask via a \"feature request\" at https://github.com/pytorch/pytorch/issues/new/choose but which of course will take time. So possibly both solutions could be used together.\r\n\r\n\r\n\r\n",
"Got it! Thanks for the super clear explanation.",
"Late update: This seems to be fixed with the release of pytorch 1.12",
"Super! Thank you for the update, @jncasey ",
"I'm still having this bug with torch 0.13.0+cu116 (`RuntimeError: \"histogram_cpu\" not implemented for 'Byte'`)",
"> I'm still having this bug with torch 0.13.0+cu116 (`RuntimeError: \"histogram_cpu\" not implemented for 'Byte'`)\r\n\r\n@julien-blanchon Did you solve it? \r\n\r\nI am getting this error when reporting to wandb using `torch==2.0.1`, `wandb==0.15.5` and `accelerate==0.20.3`.",
"No, I'm still waiting for a patch here: https://github.com/pytorch/pytorch/issues/75667"
] | 1,649
| 1,689
| 1,649
|
CONTRIBUTOR
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-20-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: Deepspeed
### Who can help
@stas00
## Information
Model I am using (Bert, XLNet ...): bart-large
The problem arises when using:
* [X] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
I'm using a training script adapted from the run_summarization.py example with a model using bart-large architecture and a custom tokenizer. I'm working locally on my workstation with two RTX 3090s. I had been training using deepspeed and fp16, but I saw that the latest transformers update added bf16 support to the deepspeed integration, so I wanted to try that in order to reduce the constant overflow errors I had been getting.
But when using deepspeed, bf16, and reporting to wandb, my training crashes.
I'm able to reproduce the error using the example scripts:
```
deepspeed run_summarization.py \
--model_name_or_path facebook/bart-large \
--dataset_name cnn_dailymail --dataset_config_name 3.0.0 \
--do_train --per_device_train_batch_size 4 --bf16 \
--overwrite_output_dir --output_dir models/text_summarization \
--deepspeed config/deepspeed_config-zero2-bf16.json
```
with the deepspeed config being:
```
{
"bf16": {
"enabled": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": false
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
After 500 steps (when saving the first checkpoint), wandb throws this error:
`RuntimeError: "histogram_cpu" not implemented for 'BFloat16'`
The error doesn't occur if I run the same script without deepspeed. And no other error gets thrown if I use deepspeed and don't report to wandb.
[A very similar issue was reported to wandb last month](https://github.com/wandb/client/issues/3332). The wandb people say it's an issue with pytorch and not wandb, but since everything is working without deepspeed, maybe there's something different about how the deepspeed integration is reporting to wandb?
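Stripped of the training loop, the failure reduces to the `histc` call from the traceback — a minimal sketch, assuming a CPU bfloat16 tensor like the gradient copy wandb histograms:
```python
import torch

flat = torch.randn(100, dtype=torch.bfloat16)  # CPU tensor, as in wandb's log_tensor_stats
# On torch < 1.12 this raises:
#   RuntimeError: "histogram_cpu" not implemented for 'BFloat16'
flat.histc(bins=64, min=-3.0, max=3.0)
```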
## Expected behavior
The training should continue without crashing, and should report as much info to wandb as possible (not sure whether bf16 introduces limits to that).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16689/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16688
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16688/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16688/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16688/events
|
https://github.com/huggingface/transformers/issues/16688
| 1,199,169,001
|
I_kwDOCUB6oc5Hed3p
| 16,688
|
Cannot train M2M100 using run_translation.py and DeepSpeed ZeRO stage 3
|
{
"login": "evros-chris",
"id": 37453466,
"node_id": "MDQ6VXNlcjM3NDUzNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/37453466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evros-chris",
"html_url": "https://github.com/evros-chris",
"followers_url": "https://api.github.com/users/evros-chris/followers",
"following_url": "https://api.github.com/users/evros-chris/following{/other_user}",
"gists_url": "https://api.github.com/users/evros-chris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evros-chris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evros-chris/subscriptions",
"organizations_url": "https://api.github.com/users/evros-chris/orgs",
"repos_url": "https://api.github.com/users/evros-chris/repos",
"events_url": "https://api.github.com/users/evros-chris/events{/privacy}",
"received_events_url": "https://api.github.com/users/evros-chris/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thank you, @evros-chris \r\n\r\nI can reproduce this and I think this might be related to the peculiarity of this model that creates a new Parameter in `forward`, which currently Deepspeed isn't equipped to deal with. It somehow needs to be repartition its flattened tensors with the new Parameter. I have reported this problem [here](https://github.com/microsoft/DeepSpeed/pull/1606). But it could be something else.\r\n\r\nThere is another issue with this model:\r\n\r\n```\r\nPYTHONPATH=src deepspeed --master_port 6666 --num_nodes 1 --num_gpus 2 examples/pytorch/translation/run_translation.py --train_file tests/fixtures/tests_samples/wmt_en_ro/train.json --source_lang en --target_lang ro --model_name_or_path hf-internal-testing/tiny-random-m2m_100 --do_train --max_train_samples 4 --per_device_train_batch_size 2 --num_train_epochs 1 --fp16 --report_to none --overwrite_output_dir --deepspeed tests/deepspeed/ds_config_zero3.json --output_dir /tmp/tmpi4k4wz8s --save_steps 1\r\n[...]\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/m2m_100/modeling_m2m_100.py\", line 175, in forward\r\n self.weights.requires_grad = False\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1176, in __getattr__\r\n self.make_weights(max_pos + self.offset, self.embedding_dim, self.padding_idx)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/m2m_100/modeling_m2m_100.py\", line 134, in make_weights\r\n self.weights.requires_grad = False\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1176, in __getattr__\r\n return _parameters[name]\r\n File \"/mnt/nvme0/code/github/00optimize/deepspeed/deepspeed/runtime/zero/stage3.py\", line 150, in __getitem__\r\n if param.ds_status == ZeroParamStatus.NOT_AVAILABLE:\r\nAttributeError: 'Parameter' object has no attribute 'ds_status'return _parameters[name]\r\n```\r\nthat's the one I reported to deepspeed. But I think they are related.\r\n\r\nThe solution for the latter one is to ensure that when creating the model `config.max_position_embeddings` is set to the longest_seqlen so that it doesn't need to remake the positional embeddings and thus it won't create a new `Parameter` once it started training.\r\n\r\nI'm trying to figure out where the problem is coming from. I will keep you posted once I make some progress.",
"Thanks a lot @stas00!",
"OK, it's the `LayerDrop` that causes this problem.\r\n\r\nHere is one of them:\r\n\r\nhttps://github.com/huggingface/transformers/blob/69233cf03be5fbce0492f3997e139c4d05499e27/src/transformers/models/m2m_100/modeling_m2m_100.py#L799-L804\r\n\r\nTo quickly unblock you set the layerdrop probability directly in the model config or the application to `0.0`:\r\n\r\n```\r\ndiff --git a/examples/pytorch/translation/run_translation.py b/examples/pytorch/translation/run_translation.py\r\nindex f7e98276d..f5af70417 100755\r\n--- a/examples/pytorch/translation/run_translation.py\r\n+++ b/examples/pytorch/translation/run_translation.py\r\n@@ -349,6 +349,9 @@ def main():\r\n revision=model_args.model_revision,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n+ #config.max_position_embeddings = 2048\r\n+ config.encoder_layerdrop = 0\r\n+ config.decoder_layerdrop = 0\r\n model = AutoModelForSeq2SeqLM.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n```\r\n\r\nMeanwhile I will work on a workaround - since Deepspeed doesn't expect layers disappearing from `forward` stack.",
"OK, please try this PR: https://github.com/huggingface/transformers/pull/16717\r\n\r\nIt should work now with normal config and `LayerDrop`",
"Thanks a lot for your immediate help and explanation of the error @stas00!\r\n\r\nSetting `config.encoder_layerdrop` = 0 and `config.decoder_layerdrop = 0` works!\r\n\r\nHowever, I tried the PR: https://github.com/huggingface/transformers/pull/16717 and I still get the error below.\r\n\r\nTo reproduce:\r\n```\r\ndeepspeed examples/pytorch/translation/run_translation.py \\\r\n--deepspeed tests/deepspeed/ds_config_zero3.json \\\r\n--model_name_or_path facebook/m2m100_418M \\\r\n--per_device_train_batch_size 8 \\\r\n--per_device_eval_batch_size 8 \\\r\n--output_dir output_dir --overwrite_output_dir \\\r\n--fp16 \\\r\n--do_train --do_eval --do_predict \\\r\n--max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 \\\r\n--num_train_epochs 3 \\\r\n--dataset_name wmt16 --dataset_config \"ro-en\" \\\r\n--source_lang en --target_lang ro \\\r\n--predict_with_generate --forced_bos_token ro\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 636, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 553, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py\", line 1422, in train\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py\", line 2011, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py\", line 2043, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py\", line 1556, in forward\r\n loss = self.module(*inputs, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 1306, in forward\r\n outputs = self.model(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 1164, in forward\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 636, in <module>\r\n encoder_outputs = self.encoder(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 819, in forward\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 553, in main\r\n layer_outputs = encoder_layer(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py\", line 1422, in train\r\n result = forward_call(*input, **kwargs)\r\n File 
\"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 379, in forward\r\n hidden_states = self.self_attn_layer_norm(hidden_states)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1109, in _call_impl\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py\", line 2011, in training_step\r\n result = hook(self, input)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py\", line 1411, in _pre_forward_module_hook\r\n self.pre_sub_module_forward_function(module)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n loss = self.compute_loss(model, inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py\", line 2043, in compute_loss\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py\", line 1528, in pre_sub_module_forward_function\r\n self.param_coordinator.fetch_sub_module(sub_module)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n outputs = model(**inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py\", line 358, in fetch_sub_module\r\n raise RuntimeError(\r\nRuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}).\r\n return forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py\", line 1556, in forward\r\n loss = self.module(*inputs, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 1306, in forward\r\n outputs = self.model(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File 
\"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 1164, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 819, in forward\r\n layer_outputs = encoder_layer(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1120, in _call_impl\r\n result = forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py\", line 379, in forward\r\n hidden_states = self.self_attn_layer_norm(hidden_states)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1109, in _call_impl\r\n result = hook(self, input)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py\", line 1411, in _pre_forward_module_hook\r\n self.pre_sub_module_forward_function(module)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py\", line 1528, in pre_sub_module_forward_function\r\n self.param_coordinator.fetch_sub_module(sub_module)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py\", line 11, in wrapped_fn\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py\", line 358, in fetch_sub_module\r\n raise RuntimeError(\r\nRuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}).\r\n 1%|█▉ | 1/96 [00:01<03:09, 1.99s/it]\r\n[2022-04-13 20:44:34,034] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 598135\r\n[2022-04-13 20:44:34,034] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 598136\r\n[2022-04-13 20:44:34,034] [ERROR] [launch.py:184:sigkill_handler] ['/opt/conda/bin/python3.8', '-u', 'examples/pytorch/translation/run_translation.py', '--local_rank=1', '--deepspeed', 'tests/deepspeed/ds_config_zero3.json', '--model_name_or_path', 'facebook/m2m100_418M', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', 
'--do_eval', '--do_predict', '--max_train_samples', '500', '--max_eval_samples', '50', '--max_predict_samples', '50', '--num_train_epochs', '3', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro', '--predict_with_generate', '--forced_bos_token', 'ro'] exits with return code = 1\r\n```",
"I suspect you are still using the older `transformers`, can you make sure to uninstall it first, ensure it's not there and then install from that branch? Thank you!\r\n\r\nor alternatively make sure to set `PYTHONPATH` to where the new source is\r\n\r\nHere is how I normally do this:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\ngit checkout ds-m2m-layerdrop\r\nPYTHONPATH=src deepspeed examples/pytorch/translation/run_translation.py [...]\r\n```",
"Hmm, no, you're right, I can reproduce the issue. I will get back to you once I get a chance to look at it.",
"update, nope, it works just fine. I have just suggested to you to use `PYTHONPATH` and haven't used it myself ;)\r\n\r\nTry again with:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\ngit checkout ds-m2m-layerdrop\r\nPYTHONPATH=src deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path facebook/m2m100_418M --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --do_eval --do_predict --max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 --num_train_epochs 3 --dataset_name wmt16 --dataset_config \"ro-en\" --source_lang en --target_lang ro --predict_with_generate --forced_bos_token ro\r\n```",
"Yes you are right, it does work! Thanks a lot for fixing this @stas00!"
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10
- Tensorflow version (GPU?): -
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Deepspeed ZeRO stage 3
Library Versions:
- deepspeed 0.6.1
- transformers 4.18.0
- pytorch 1.10
### Who can help
@[stas00](https://github.com/stas00)
## Information
The problem arises when:
* I try to finetune the Hugging Face `facebook/m2m100_418M` model using the `run_translation.py` script under `transformers/examples/pytorch/translation/run_translation.py` and DeepSpeed ZeRO stage 3. If I use `t5-small` instead of `facebook/m2m100_418M`, the model trains. Also, if I use `facebook/m2m100_418M` with `ds_config_zero2.json` instead of `ds_config_zero3.json`, the model trains again.
## To reproduce
```
deepspeed run_translation.py \
--deepspeed ds_config_zero3.json \
--model_name_or_path facebook/m2m100_418M \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--output_dir output_dir --overwrite_output_dir \
--fp16 \
--do_train --do_eval --do_predict \
--max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 \
--num_train_epochs 3 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--predict_with_generate --forced_bos_token ro
```
where:
- `run_translation.py` is the same file as in `transformers/examples/pytorch/translation/run_translation.py`
- `ds_config_zero3.json` is the same file as in `transformers/tests/deepspeed/ds_config_zero3.json`
Error:
```
Traceback (most recent call last):
File "run_translation.py", line 636, in <module>
main()
File "run_translation.py", line 553, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1422, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2011, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2043, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1556, in forward
loss = self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1306, in forward
outputs = self.model(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1164, in forward
encoder_outputs = self.encoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 819, in forward
layer_outputs = encoder_layer(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 379, in forward
hidden_states = self.self_attn_layer_norm(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1109, in _call_impl
result = hook(self, input)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1411, in _pre_forward_module_hook
self.pre_sub_module_forward_function(module)
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1528, in pre_sub_module_forward_function
self.param_coordinator.fetch_sub_module(sub_module)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 11, in wrapped_fn
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 358, in fetch_sub_module
raise RuntimeError(
RuntimeError: tracing error at step 42: expected the next 2 parameters in the parameter fetch queue to be ({'id': 26, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}, {'id': 27, 'status': 'AVAILABLE', 'numel': 1024, 'ds_numel': 1024, 'shape': (1024,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {24}}) but got ({'id': 115, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1024, 'shape': (0,), 'ds_shape': (1024,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set()}, {'id': 116, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 1048576, 'shape': (0,), 'ds_shape': (1024, 1024), 'requires_grad': True, 'grad_shape': None, 'persist': False, 'active_sub_modules': set()}).
1%|█ | 1/189 [00:01<04:33, 1.45s/it]
[2022-04-10 20:34:32,488] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 41615
[2022-04-10 20:34:32,488] [ERROR] [launch.py:184:sigkill_handler] ['/opt/conda/bin/python3.8', '-u', 'run_translation.py', '--local_rank=0', '--deepspeed', 'config/ds_config_zero3.json', '--model_name_or_path', 'facebook/m2m100_418M', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--do_eval', '--do_predict', '--max_train_samples', '500', '--max_eval_samples', '50', '--max_predict_samples', '50', '--num_train_epochs', '3', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro', '--predict_with_generate', '--forced_bos_token', 'ro'] exits with return code = 1
```
## Expected behavior
The model trains.
## Additional info
Changing deepspeed version from 0.6.1 to 0.5.10 and transformers version from 4.18.0 to 4.16.2, results in the following error:
```
Traceback (most recent call last):
File "run_translation.py", line 636, in <module>
main()
File "run_translation.py", line 553, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1365, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1956, in training_step
loss = self.deepspeed.backward(loss)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1697, in backward
self.optimizer.backward(loss)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 2944, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 53, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
File "/opt/conda/lib/python3.8/site-packages/torch/autograd/function.py", line 199, in apply
return user_fn(self, *args)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 562, in backward
ctx.pre_backward_function(ctx.module)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1456, in _run_before_backward_function
self.pre_sub_module_backward_function(sub_module)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 1551, in pre_sub_module_backward_function
self.param_coordinator.prefetch_next_sub_modules(sub_module,
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 358, in prefetch_next_sub_modules
params_to_prefetch = self.prefetch_coordinator.get_params_to_prefetch(
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 220, in get_params_to_prefetch
if sub_module.id != self.sub_module_trace[self.step_id]:
IndexError: list index out of range
1%|█ | 1/189 [00:01<04:02, 1.29s/it]
[2022-04-10 20:44:02,482] [INFO] [launch.py:160:sigkill_handler] Killing subprocess 45884
[2022-04-10 20:44:02,482] [ERROR] [launch.py:166:sigkill_handler] ['/opt/conda/bin/python3.8', '-u', 'run_translation.py', '--local_rank=0', '--deepspeed', 'config/ds_config_zero3.json', '--model_name_or_path', 'facebook/m2m100_418M', '--per_device_train_batch_size', '8', '--per_device_eval_batch_size', '8', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--do_eval', '--do_predict', '--max_train_samples', '500', '--max_eval_samples', '50', '--max_predict_samples', '50', '--num_train_epochs', '3', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro', '--predict_with_generate', '--forced_bos_token', 'ro'] exits with return code = 1
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16688/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16687
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16687/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16687/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16687/events
|
https://github.com/huggingface/transformers/issues/16687
| 1,198,821,794
|
I_kwDOCUB6oc5HdJGi
| 16,687
|
Can't load pretrained TrOCR model
|
{
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have exactly the same problem. If anyone finds a solution, it would be appreciated. If I manage to solve it, I will post it here ASAP.\r\n\r\nEDIT: \r\nI have not been able to follow the code trace exactly but I believe the error is as follows. When the model is created from the `hub.py` file, the model class is `DeitConfig`, however, when the model is created from a configuration file, the model appears as `VisionEncoderDecoderConfig`. In the `transformers.models.auto.auto_factory file`, it is validated that the model is among the tokenizers defined in `TOKENIZER_MAPPING` and if it is not, the error is raised. That is, the behavior of loading the same file from the hub and from local differs and that is what is causing this problem.\r\n",
"Could be related to https://github.com/huggingface/transformers/issues/14884",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I have the same problem, Is any solution to this?",
"@NouamaneTazi If the new version 4.19 still doesn't work with your local checkpoint, could you please upload your checkpoint/config files to your HF Hub repo, and provide a link to it?\r\n\r\n@emigomez Do you use a local checkpoint or a checkpoint from the Hub? Could you try with v4.19?",
"Hi @emigomez , if the solution proposed by @ydshieh doesn't work for you, what I did was to train the model as in the tutorials and, once trained, load the model from the generated checkpoint:\r\n\r\n```\r\nmodel = VisionEncoderDecoderModel.from_pretrained(/path/to/local/checkpoint)\r\n```\r\n\r\nAnd the preprocessor I load it from the hub (using the same checkpoint of the pretrained model), for example, if you have finetuned the model from `microsoft/trocr-base-printed`:\r\n```\r\npreprocessor = TrOCRProcessor.from_pretrained(`microsoft/trocr-base-printed`).\r\n```\r\nThis way the combination model + preprocessor works for me, allowing me to use the models to predict. \r\n\r\nI hope it helps you!\r\n",
"Hi @ydshieh I am using my local checkpoints obtained after fine tuning, and I am using the v4.19 yes\r\nI have generated this pretrained model with \r\n```\r\n trainer = Seq2SeqTrainer(.....)\r\n trainer.train()\r\n trainer.save_model(\"./models\")\r\n```\r\n \r\nHi @CuarteroAlvaro I was trying to make the inference with my model with:\r\n```\r\n MODEL_PATH = \"./models/checkpoints/checkpoint_10000/\" # option 1\r\n MODEL_PATH = \"./models/\" # option 2\r\n processor = TrOCRProcessor.from_pretrained(MODEL_PATH) \r\n model = VisionEncoderDecoderModel.from_pretrained(MODEL_PATH) \r\n```\r\n \r\nMy error appears when I was trying to load the processor, so as you suggested I load the one used in my training:\r\n` processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') `\r\n \r\nThis is the complete code that I'm using to infer:\r\n```\r\nfrom transformers import TrOCRProcessor, VisionEncoderDecoderModel\r\nimport requests \r\nfrom PIL import Image\r\nimport time\r\nimport torch\r\n\r\nMODEL_PATH = \"./models/checkpoints/checkpoint-10000/\" # option 1\r\nMODEL_PATH = \"./models/\" # option 2\r\n\r\nprocessor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed') \r\nmodel = VisionEncoderDecoderModel.from_pretrained(MODEL_PATH)\r\n\r\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\r\nprint(\"Running in device:\", device)\r\nmodel.to(device)\r\n\r\nimage = Image.open(\"0_0.tif\").convert(\"RGB\")\r\ntimeini = time.time()\r\n\r\npixel_values = processor(image, return_tensors=\"pt\").pixel_values \r\ngenerated_ids = model.generate(pixel_values)\r\ngenerated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] \r\n\r\ntimeend = time.time() - timeini\r\nprint(\"\\nResult: \", generated_text)\r\nprint(\"Execution time: \", timeend)\r\n```\r\n\r\nWith both MODEL_PATH options I obtain the next error:\r\n```\r\n$ python trocr_infer.py \r\nRunning in device: cuda\r\nTraceback (most recent call last):\r\n File \"trocr_infer.py\", line 21, in <module>\r\n generated_ids = model.generate(pixel_values)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\autograd\\grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\generation_utils.py\", line 1172, in generate\r\n model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\generation_utils.py\", line 525, in _prepare_encoder_decoder_kwargs_for_generation\r\n model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(**encoder_kwargs)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\models\\vit\\modeling_vit.py\", line 572, in forward\r\n embedding_output = self.embeddings(\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\models\\vit\\modeling_vit.py\", line 135, in forward\r\n embeddings = self.patch_embeddings(pixel_values, 
interpolate_pos_encoding=interpolate_pos_encoding)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\models\\vit\\modeling_vit.py\", line 191, in forward\r\n x = self.projection(pixel_values).flatten(2).transpose(1, 2)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\conv.py\", line 447, in forward\r\n return self._conv_forward(input, self.weight, self.bias)\r\n File \"C:\\Users\\MSI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\conv.py\", line 443, in _conv_forward\r\n return F.conv2d(input, weight, bias, self.stride,\r\nRuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor\r\n```\r\n\r\nAnd these are the files that I have under the previous folders:\r\nmodels/:\r\n`checkpoints/ config.json preprocessor_config.json pytorch_model.bin runs/ training_args.bin`\r\nmodels/checkpoints/checkpoint-10000/:\r\n`config.json optimizer.pt preprocessor_config.json pytorch_model.bin rng_state.pth scaler.pt scheduler.pt trainer_state.json training_args.bin`\r\n\r\nDo you know how to fix this problem? \r\n\r\nThank you both for your quick reply!!\r\n\r\n",
"@emigomez Would you mind to share a complete code snippet of your training script and arguments, so we can reproduce the issue quickly in order to identify the cause?\r\n\r\nWithout training, I couldn't reproduce the issue (by just loading/saving/loading-again)\r\n\r\n```\r\nfrom transformers import TrOCRProcessor, VisionEncoderDecoderModel\r\n\r\nprocessor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')\r\nmodel = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed')\r\nmodel.save_pretrained(\"./local_checkpoint\")\r\nloaded_model = VisionEncoderDecoderModel.from_pretrained('./local_checkpoint')\r\nprint(loaded_model)\r\n```",
"The problem was that my model is on the GPU, but my data is on the CPU. So, I need to send my data to GPU changing on the previous code that I have shared:\r\n\r\n`generated_ids = model.generate(pixel_values.to(device))`\r\n\r\nAnd that is working.\r\n\r\nThank you!!",
"Yes, i was just writing the answer!\r\n\r\n",
"@ydshieh, The problem appears when trying to load the preprocessor from a local checkpoint. I think this code will reproduce the issue:\r\n\r\n```python\r\nfrom transformers import TrOCRProcessor, VisionEncoderDecoderModel\r\n\r\nprocessor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')\r\nmodel = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed')\r\nmodel.save_pretrained(\"./local_checkpoint\")\r\nloaded_preprocessor = TrOCRProcessor.from_pretrained('./local_checkpoint')\r\nloaded_model = VisionEncoderDecoderModel.from_pretrained('./local_checkpoint')\r\nprint(loaded_preprocessor, loaded_model)\r\n\r\n```",
"Hi, @CuarteroAlvaro \r\n\r\nIn you code snippet, there is 1 line missing\r\n```\r\nprocessor.save_pretrained('./local_checkpoint') # <-- this is required \r\n```\r\nThe following will work\r\n```\r\nfrom transformers import TrOCRProcessor, VisionEncoderDecoderModel\r\n\r\nprocessor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-printed')\r\nmodel = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-printed')\r\nmodel.save_pretrained(\"./local_checkpoint\")\r\nprocessor.save_pretrained('./local_checkpoint') # <-- this is required \r\nloaded_preprocessor = TrOCRProcessor.from_pretrained('./local_checkpoint') \r\nloaded_model = VisionEncoderDecoderModel.from_pretrained('./local_checkpoint')\r\nprint(loaded_preprocessor, loaded_model)\r\n```\r\n\r\nBut I can't reproduce the error shown in @NouamaneTazi 's original issue.\r\n```\r\nKeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>\r\n```",
"I don't seem to be able to reproduce the problem, so I'll mark this as resolved! Thanks for your help 🤗"
] | 1,649
| 1,653
| 1,653
|
MEMBER
| null |
@NielsRogge I get this error when I try to load a local TrOCR checkpoint.
```python
>>> processor = TrOCRProcessor.from_pretrained("./checkpoint-2")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 186, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 230, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 544, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/usr/users/gpupro/gpu_tazi/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>
```
This is the content of my checkpoint folder:
```
checkpoint-2
|-trainer_state.json
|-preprocessor_config.json
|-training_args.bin
|-scaler.pt
|-optimizer.pt
|-scheduler.pt
|-pytorch_model.bin
|-rng_state.pth
|-config.json
```
Yet, loading a TrOCR checkpoint from the hub works just fine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16687/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16686
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16686/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16686/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16686/events
|
https://github.com/huggingface/transformers/pull/16686
| 1,198,819,669
|
PR_kwDOCUB6oc418mzY
| 16,686
|
fixed crash when deleting older checkpoint and files with name f"{checkpoint_prefix}-*" exist
|
{
"login": "sadransh",
"id": 12527824,
"node_id": "MDQ6VXNlcjEyNTI3ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/12527824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadransh",
"html_url": "https://github.com/sadransh",
"followers_url": "https://api.github.com/users/sadransh/followers",
"following_url": "https://api.github.com/users/sadransh/following{/other_user}",
"gists_url": "https://api.github.com/users/sadransh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadransh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadransh/subscriptions",
"organizations_url": "https://api.github.com/users/sadransh/orgs",
"repos_url": "https://api.github.com/users/sadransh/repos",
"events_url": "https://api.github.com/users/sadransh/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadransh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
What does this PR do?
I create an archive of older checkpoints during training the checkpoint has a name with `f"{checkpoint_prefix}-*.zip/.tar `
previously `glob(f"{checkpoint_prefix}-*")` takes all files/folders starting with the name checkpoint, and later `shutil.rmtree(checkpoint)` takes a folder name; since at some point it my get a zip file; it crashes training; adding this `if os.path.isdir(x)` allows only folders on `glob_checkpoints`.
let's say output folder structure is like: (with `save_limit=5`)
```
checkpoint-36000
checkpoint-35000
checkpoint-34000
checkpoint-33000
checkpoint-33000.zip
```
then code attempts to remove oldest checkpoint
since we have a file (checkpoint-33000.zip) and pass the file to `shutil.rmtree(checkpoint)` to delete it will fail.
by avoiding storing files on `glob_checkpoints` this will get fixed! ( checking everything is folder as checkpoints are folders not single files.)
**Before submitting:**
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16686/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16686",
"html_url": "https://github.com/huggingface/transformers/pull/16686",
"diff_url": "https://github.com/huggingface/transformers/pull/16686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16686.patch",
"merged_at": 1649676727000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16685
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16685/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16685/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16685/events
|
https://github.com/huggingface/transformers/pull/16685
| 1,198,785,962
|
PR_kwDOCUB6oc418h5W
| 16,685
|
Translate index.mdx (to ES) and add Spanish models to quicktour.mdx examples
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger, I ran `make style` and made some manual debug but cannot get `check_code_quality` to pass. Any idea why could it be? \r\n\r\nI think the error is in[ these lines](https://github.com/huggingface/transformers/pull/16685/files#diff-517e793a1e18859abaf368b0c3f4d344231747ebc1ceb5154ca94159a7207bf1R110-R116):\r\n\r\n`\r\nA continuación, carga el dataset (ve 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) para más detalles) sobre el que quisieras iterar. Por ejemplo, vamos a cargar el dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14):\r\n\r\n```py\r\n>>> from datasets import load_dataset, Audio\r\n\r\n>>> dataset = load_dataset(\"PolyAI/minds14\", name=\"es-ES\", split=\"train\") # doctest: +IGNORE_RESULT\r\n```\r\n`",
"Make sure you have the latest version of `hf-doc-builder` installed (`pip install hf-doc-builder -U`) then run `make style`.",
"EDIT: Fixed by @osanseviero in PR #17197.\r\n\r\nSorry @sgugger, even with updating `hf-doc-builder` to the latest version, `0.4.0`, and running `make style` the `check_code_quality` error continues appearing. I am not able to find the error and the [feedback in CircleCI](https://app.circleci.com/pipelines/github/huggingface/transformers/39893/workflows/35203812-3c00-42fa-b7e6-2d1789c21b12/jobs/449693) is not revealing (`ValueError: 1 files should be restyled!`).\r\n\r\nWhen running `make style` I get this error (I tried to debug in other ways but the error continues):\r\n\r\n\r\nIn the meantime, I merged the PR with the error so we can get the index and quicktour before the release tomorrow. However, please let me know if we should proceed in another way. Sorry for this cumbersome merge. They are looking fine in the docs:\r\n- [Index](https://huggingface.co/docs/transformers/main/es/index);\r\n- [Quicktour](https://huggingface.co/docs/transformers/main/es/quicktour)."
] | 1,649
| 1,652
| 1,652
|
CONTRIBUTOR
| null |
# What does this PR do?
Related to issue #15947
1. Translate `index.mdx` to Spanish
2. Replace English models and a dataset in `quicktour.mdx` with Spanish versions.
## Relevant
- Translation of `index.mdx` included the list of compatible models (not the papers´ and models´ names). Since this should not be updated manually, I can come back to the original text if required.
- Spanish models selected for quicktour are:
- clasificador = pipeline('sentiment-analysis', model="pysentimiento/robertuito-sentiment-analysis")
- reconocedor_de_voz = pipeline("automatic-speech-recognition", model="jonatasgrosman/wav2vec2-large-xlsr-53-spanish", device=0)
- Spanish ASR dataset selected is:
- dataset = datasets.load_dataset("PolyAI/minds14", name="es-ES", split="train")
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16685/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16685/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16685",
"html_url": "https://github.com/huggingface/transformers/pull/16685",
"diff_url": "https://github.com/huggingface/transformers/pull/16685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16685.patch",
"merged_at": 1652330108000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16684
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16684/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16684/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16684/events
|
https://github.com/huggingface/transformers/issues/16684
| 1,198,699,539
|
I_kwDOCUB6oc5HcrQT
| 16,684
|
`FlaxBartForConditionalGeneration` has a `.encode` method but `BartForConditionalGeneration` does not
|
{
"login": "ayaka14732",
"id": 68557794,
"node_id": "MDQ6VXNlcjY4NTU3Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayaka14732",
"html_url": "https://github.com/ayaka14732",
"followers_url": "https://api.github.com/users/ayaka14732/followers",
"following_url": "https://api.github.com/users/ayaka14732/following{/other_user}",
"gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions",
"organizations_url": "https://api.github.com/users/ayaka14732/orgs",
"repos_url": "https://api.github.com/users/ayaka14732/repos",
"events_url": "https://api.github.com/users/ayaka14732/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayaka14732/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! The `encode` method does not exist in PT model because it's possible to access the encoder using `model.get_encoder` and then call it to run the encoder. Flax does not allow accessing modules like this so we need to expose explicit methods like `encode`. This is not required for PT.",
"Thanks for the explanation!"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (tpu)
- Jax version: 0.3.5
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
## Information
Model I am using: BART
## To reproduce
Flax version (working):
```python
from transformers import BartTokenizer, FlaxBartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = FlaxBartForConditionalGeneration.from_pretrained('facebook/bart-base')
inputs = tokenizer('Travelers wait about an hour and a half to cross the Tower.', return_tensors='jax')
outputs = model.encode(**inputs)
print(outputs) # OK
```
PyTorch version (not working):
```python
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
inputs = tokenizer('Travelers wait about an hour and a half to cross the Tower.', return_tensors='pt')
outputs = model.encode(**inputs) # AttributeError: 'BartForConditionalGeneration' object has no attribute 'encode'
```
## Expected behavior
The PyTorch version should also have a `.encode` method to generate the encoder output, as the Flax version does.
## Actual behavior
The PyTorch version does not a `.encode` method.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16684/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16683
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16683/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16683/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16683/events
|
https://github.com/huggingface/transformers/issues/16683
| 1,198,600,311
|
I_kwDOCUB6oc5HcTB3
| 16,683
|
Option to change Ray's gridsearch scope
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Yes, I'd still like to see this added. I can do a quick PR if needed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Another bump to remind myself that I'll do a PR for this in the coming month."
] | 1,649
| 1,655
| 1,655
|
COLLABORATOR
| null |
# 🚀 Feature request
It would be great if we can get more control over how Ray is selecting the best trial in a hyperparameter search. Currently, the default value for [`get_best_trial`](https://docs.ray.io/en/latest/tune/api_docs/analysis.html#ray.tune.ExperimentAnalysis.get_best_trial) is used here:
https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/src/transformers/integrations.py#L299
namely `scope="last"`. So for each trial, it will simply take the _last_ checkpoint of that trial, and use its performance to compare with the other trials. This is not always ideal as it may very well be that some trials converge sooner than other and than overfit, leading to poor evaluation scores in their last checkpoints. Fortunately, Ray allows other options, such as `"all"`, which takes the best checkpoint instead of the last, of each trial and compares those.
## Motivation
Currently there is no way to pass this through to Ray from within `transformers` as I can tell, yet it is an important aspect of hyperparameter search.
## Your contribution
I can work on this if requested, although I am still looking for input how to best tackle this. One could simply add an argument to TrainingArguments, e.g. `ray_scope`, and then change this line
https://github.com/huggingface/transformers/blob/7c5d79912a21880ce13d77881940458e90d98917/src/transformers/integrations.py#L299
to
```python
best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16683/reactions",
"total_count": 5,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16683/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16682
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16682/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16682/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16682/events
|
https://github.com/huggingface/transformers/pull/16682
| 1,198,542,707
|
PR_kwDOCUB6oc417vpW
| 16,682
|
Type hint complete Albert model file.
|
{
"login": "karthikrangasai",
"id": 39360170,
"node_id": "MDQ6VXNlcjM5MzYwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/39360170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karthikrangasai",
"html_url": "https://github.com/karthikrangasai",
"followers_url": "https://api.github.com/users/karthikrangasai/followers",
"following_url": "https://api.github.com/users/karthikrangasai/following{/other_user}",
"gists_url": "https://api.github.com/users/karthikrangasai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karthikrangasai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karthikrangasai/subscriptions",
"organizations_url": "https://api.github.com/users/karthikrangasai/orgs",
"repos_url": "https://api.github.com/users/karthikrangasai/repos",
"events_url": "https://api.github.com/users/karthikrangasai/events{/privacy}",
"received_events_url": "https://api.github.com/users/karthikrangasai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, I'm sorry for the delay with this one! We really appreciate it, and I'm trying to get a chance to look through it all ASAP!",
"Thanks a lot and I am glad that you liked the work.\r\nMore of these will be coming in a few days for other models, so it would be great if I know up to what level of detailing is needed for the Type Annotations.",
"@karthikrangasai I spoke to the team and the conclusion was that we should just use `Union[AlbertForPreTrainingOutput, Tuple]` - as well as being easier for you, when these type hints are copied into the documentation, it'll be a lot more readable that way.\r\n\r\nThat said, I respect the dedication and precision that went into making it, so I'm a little sad to see it go.",
"Hello @Rocketknight1 ,\r\n\r\nSure, no worries.\r\nI can make the changes and update the Pull Request in a while.\r\n\r\n\r\nAlthough I am not sure what the reason is for to have return type a Tuple, a suggestion is to maybe remove the return type as \"Tuple\" and make it the respective output type for all models. This will have keyword based output values like \r\n```\r\noutput = AlbertModel(**inputs)\r\n\r\noutput.last_hidden_state\r\noutput.pooler_output\r\n```\r\nand this might become a more cleaner API.\r\n",
"Seeing some code quality issues, I'm guessing because we changed our versions for code formatting tools and might need to rebase. Let me check!",
"It seems like you'll need to pull commits from our repo to the main branch of your repo, then rebase your branch, then force push to update. After that, the files should be updated and the error should go away!",
"Hello @Rocketknight1, I have updated the PR with the main branch.\r\nAll tests are passing.",
"Great job. Thank you for this PR, it's much appreciated!"
] | 1,649
| 1,651
| 1,651
|
CONTRIBUTOR
| null |
# What does this PR do?
Type hint the Albert Model file for PyTorch Model.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Part of #16059
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16682/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16682",
"html_url": "https://github.com/huggingface/transformers/pull/16682",
"diff_url": "https://github.com/huggingface/transformers/pull/16682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16682.patch",
"merged_at": 1651671312000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16681
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16681/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16681/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16681/events
|
https://github.com/huggingface/transformers/issues/16681
| 1,198,511,312
|
I_kwDOCUB6oc5Hb9TQ
| 16,681
|
`LongT5`: Efficient Text-To-Text Transformer for Long Sequences
|
{
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Thanks a lot for opening the issue @stancld ! I'm quite busy with other projects at the moment so I'd be more than happy to guide you here! Do you want to give it a try?",
"Also cc @stefan-it @peregilk @versae ",
"This might also be helpful: https://github.com/patrickvonplaten/t5-mtf-to-hf-converter",
"Here some info for a T5X -> HF conversion script:\r\nhttps://github.com/google-research/t5x/issues/198",
"Feel free to start working on it - I'm more than happy to help you if you're stuck :-) Also cc @patil-suraj @LysandreJik @craffel for notification",
"https://github.com/google-research/longt5/issues/2#issue-1198955086",
"This is super cool! Happy to help if anyone wants to give it a try :) ",
"@patrickvonplaten @patil-suraj I'm gonna give it a try and will try to open a draft PR as soon as I have some progress! :]\r\n\r\nAlso @patrickvonplaten, thanks a lot for all the useful links you have posted here! :]"
] | 1,649
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
# 🌟 New model addition -- LongT5: Efficient Text-To-Text Transformer for Long Sequences
## Model description
LongT5 is an extension of the [T5 model](https://github.com/google-research/text-to-text-transfer-transformer) that handles long sequence inputs more efficiently. We integrated attention ideas from long-input transformers [ETC](https://arxiv.org/abs/2004.08483),and adopted pre-training strategies from summarization pre-training [PEGASUS](https://arxiv.org/abs/1912.08777) into the scalable T5 architecture. The result is a new attention mechanism we call Transient Global(TGlobal), which mimics ETC’s local/globalattention mechanism, but without requiring additional side-inputs. We are able to achieve state-of-the-art results on several summarization and question answering tasks, as well as outperform the original T5 models on these tasks.
*Description copied from https://github.com/google-research/longt5/blob/master/README.md.*
The full paper is currently available on arXiv -- [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916).
## Open source status
The model has its own repository available [here](https://github.com/google-research/longt5).
* [x] the model implementation is available - the model implementation is available at [Google FlaxFormer repo](https://github.com/google/flaxformer/tree/main/flaxformer/architectures/longt5).
* [x] the model weights are available: Currently, Google has released five checkpoints listed in the [LongT5 repo](https://github.com/google-research/longt5)
- **LongT5-Local-Base** (250 million parameters)
- **LongT5-TGlobal-Base** (250 million parameters)
- **LongT5-Local-Large** (780 million parameters)
- **LongT5-TGlobal-Large** (780 million parameters)
- **LongT5-TGlobal-XL** (3 billion parameters)
* [x] who are the authors: @mandyguo-xyguo, Joshua Ainslie, @duthus, @santiontanon, @nijianmo, @yhsung, @yinfeiy, (not sure with some GitHub names, so will be happy if anyone can complete it :] )
### Additional context
If anyone from the original authors won't be interested in porting the model into the `transformers`, I'll be more than happy to work on this :].
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16681/reactions",
"total_count": 11,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/transformers/issues/16681/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16680
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16680/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16680/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16680/events
|
https://github.com/huggingface/transformers/issues/16680
| 1,198,477,750
|
I_kwDOCUB6oc5Hb1G2
| 16,680
|
Trying to Train Lonformer but from standard transfomer file, error AttributeError: module 'wandb' has no attribute 'run'. Even when I have not install Wandb
|
{
"login": "nabarunbaruaAIML",
"id": 64695833,
"node_id": "MDQ6VXNlcjY0Njk1ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/64695833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nabarunbaruaAIML",
"html_url": "https://github.com/nabarunbaruaAIML",
"followers_url": "https://api.github.com/users/nabarunbaruaAIML/followers",
"following_url": "https://api.github.com/users/nabarunbaruaAIML/following{/other_user}",
"gists_url": "https://api.github.com/users/nabarunbaruaAIML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nabarunbaruaAIML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabarunbaruaAIML/subscriptions",
"organizations_url": "https://api.github.com/users/nabarunbaruaAIML/orgs",
"repos_url": "https://api.github.com/users/nabarunbaruaAIML/repos",
"events_url": "https://api.github.com/users/nabarunbaruaAIML/events{/privacy}",
"received_events_url": "https://api.github.com/users/nabarunbaruaAIML/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Got the Solution In previous Development I had set Wandb Global Environment Variable as True. And I suppose Transformer Library checks for the Variable and if found it will try for Integration because of which this Issue happened. To fix if you're not using Wandb then execute \r\n\r\n> export WANDB_DISABLED=true\r\nelse add it to your env file\r\n\r\nThis issue can be closed now."
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.18.0
- Platform: Windows
- Python version: 3.7.13
- PyTorch version (GPU?): Yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@sgugger
Models:
- Longformer, BigBird: @ydshieh
Library:
- Tokenizers: @SaulLu
- Trainer: @sgugger
-->
## Information
The model Longformer, I am using in a fresh environment with below packages:
torch==1.8.2+cu111
torchvision==0.9.2+cu111
torchaudio===0.8.2
tqdm
mlflow
pandas
transformers
datasets
PyYAML
boto3
matplotlib
sklearn
python-dotenv
s3fs
Ther Problem start when Trainer API executed. In the Transformers Python file error comes stating no module Wandb but I haven't installed Wandb. My understanding is if I have installed Wandb then only this Integration should trigger.
I am getting Error in integrations.py in Transformer Package
File "D:\Virtual_Env\GitHub_projects\AIOPS\DVC\Fact_Checking_Health_related_Claims\env\lib\site-
packages\transformers\integrations.py", line 592, in setup
if self._wandb.run is None:
AttributeError: module 'wandb' has no attribute 'run'
My Code from where I am triggering Trainer API: can be found here https://github.com/nabarunbaruaAIML/Fact_Checking_Health_related_Claims/blob/master/src/stage_03_train.py
The tasks I am working on is:
* Text Classification on Publicly available Dataset
## To reproduce
Steps to reproduce the behaviour:
1. After setuping Environment
2. Execute Python File: stage_01_load_save & stage_02_prepare_dataset which download Public Dataset and convert it to Dataset
3. Lastly Execute Python file to start training via trainer API: python src/stage_03_train.py
**Github:** https://github.com/nabarunbaruaAIML/Fact_Checking_Health_related_Claims/tree/master/src

## Expected behavior
Training should happen
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16680/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16679
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16679/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16679/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16679/events
|
https://github.com/huggingface/transformers/pull/16679
| 1,198,414,062
|
PR_kwDOCUB6oc417XNb
| 16,679
|
Enable more test_torchscript
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
COLLABORATOR
| null |
# What does this PR do?
Enable more `test_torchscript` (in 30 files) by updating `_create_and_check_torchscript`.
(There are still 21 files with 23 places being `False` at this moment - they still give errors.)
The main place to review is in `test_modeling_common.py`.
The changes in model specific test files are removing lines regarding `test_torchscript = True/False`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16679/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16679",
"html_url": "https://github.com/huggingface/transformers/pull/16679",
"diff_url": "https://github.com/huggingface/transformers/pull/16679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16679.patch",
"merged_at": 1649694215000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16678
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16678/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16678/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16678/events
|
https://github.com/huggingface/transformers/issues/16678
| 1,198,343,443
|
I_kwDOCUB6oc5HbUUT
| 16,678
|
`FlaxBartForConditionalGeneration` should not require `input_ids` when `encoder_output` is provided
|
{
"login": "ayaka14732",
"id": 68557794,
"node_id": "MDQ6VXNlcjY4NTU3Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayaka14732",
"html_url": "https://github.com/ayaka14732",
"followers_url": "https://api.github.com/users/ayaka14732/followers",
"following_url": "https://api.github.com/users/ayaka14732/following{/other_user}",
"gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions",
"organizations_url": "https://api.github.com/users/ayaka14732/orgs",
"repos_url": "https://api.github.com/users/ayaka14732/repos",
"events_url": "https://api.github.com/users/ayaka14732/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayaka14732/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"ping @patil-suraj",
"Hey @ayaka14732 ! The flax generate method does not support passing in `encoder_outputs`, so `input_ids` is a required input.",
"Thanks! Why does the flax generate method not support passing in `encoder_outputs`? Is that a bug or a feature?\r\n\r\nI am trying to generate with `encoder_outputs`, but I don't have the `input_ids`. That's because the `encoder_outputs` are produced from a customized encoder rather than the original one. What approach would you suggest to achieve this?",
"We can support passing `encoder_outputs` in flax generate. Would you like to open a PR for this ? Happy to help with it.\r\n\r\nWe'll need to modify this method to skip calling `.encode` if the `encoder_outputs` are passed as a kwarg.\r\nhttps://github.com/huggingface/transformers/blob/bae9b6458cb4aebaf3a2eea1ab5d47904062f30f/src/transformers/generation_flax_utils.py#L142-L149",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am going to make a PR",
"@patil-suraj\r\n\r\nAlthough the modification seems easy, I didn't find similar code in `generation_utils.py` to be used as a reference. In other words, the PyTorch version does not check `encoder_outputs` in `_prepare_encoder_decoder_kwargs_for_generation`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/b9bb417324c0d9013c505dc39c016ab9ca0e23c8/src/transformers/generation_utils.py#L507-L527\r\n\r\nWhy is that?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I believe this is not completed"
] | 1,649
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.5
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
## Information
Model I am using: BART
## To reproduce
```python
from transformers import BartTokenizer, BartForConditionalGeneration, FlaxBartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model_flax = FlaxBartForConditionalGeneration.from_pretrained('facebook/bart-base')
inputs = tokenizer('Travelers wait about an hour and a half to cross the Tower.', return_tensors='jax')
outputs_flax = model_flax.encode(**inputs)
generate_ids_flax = model_flax.generate(attention_mask=inputs.attention_mask, encoder_output=outputs_flax) # TypeError: FlaxGenerationMixin.generate() missing 1 required positional argument: 'input_ids'
import numpy as onp
import torch
from transformers.modeling_outputs import BaseModelOutput
def jax2pt(a):
    return torch.from_numpy(onp.asarray(a))
model_pt = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
outputs_pt = BaseModelOutput(last_hidden_state=jax2pt(outputs_flax.last_hidden_state))
generate_ids_pt = model_pt.generate(attention_mask=jax2pt(inputs.attention_mask), encoder_outputs=outputs_pt)
print(generate_ids_pt) # OK
```
## Expected behavior
The Flax model should behave like the PyTorch model.
## Actual behavior
`FlaxBartForConditionalGeneration` requires `input_ids`, even if `encoder_output` is provided.
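For anyone blocked on this, a minimal sketch that bypasses `generate()` entirely by calling the public `decode` method in a greedy loop (illustration only: no caching, no beam search, and no EOS-based stopping):

```python
import jax.numpy as jnp

def greedy_decode(model, encoder_outputs, attention_mask, max_length=20):
    # Seed the decoder with the start token, then repeatedly pick the
    # argmax of the last position's logits and append it.
    batch_size = attention_mask.shape[0]
    decoder_input_ids = jnp.full(
        (batch_size, 1), model.config.decoder_start_token_id, dtype=jnp.int32
    )
    for _ in range(max_length):
        outputs = model.decode(
            decoder_input_ids=decoder_input_ids,
            encoder_outputs=encoder_outputs,
            encoder_attention_mask=attention_mask,
        )
        next_token = jnp.argmax(outputs.logits[:, -1, :], axis=-1)
        decoder_input_ids = jnp.concatenate(
            [decoder_input_ids, next_token[:, None]], axis=-1
        )
    return decoder_input_ids
```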
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16678/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16677
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16677/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16677/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16677/events
|
https://github.com/huggingface/transformers/issues/16677
| 1,198,214,778
|
I_kwDOCUB6oc5Ha056
| 16,677
|
The accuracy of test set is different in training and evaluating Bert
|
{
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,652
| 1,652
|
NONE
| null |
Hi friends, I've run into a problem that has puzzled me for a couple of days.
I want to train a BERT model for a binary classification task on my custom data.
I chose the BertForSequenceClassification model.
I use the Trainer from transformers; the tokenizer, model, metrics, and Trainer code are as follows:
```python
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", return_dict=True)

def computed_metric(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, predictions, average="macro")
    acc = accuracy_score(labels, predictions)
    return {"accuracy": acc, "precision": precision, "f1": f1, "recall": recall}

trainer = Trainer(
    model,
    args,
    train_dataset=dataset_train,
    eval_dataset=dataset_test,
    tokenizer=tokenizer,
    compute_metrics=computed_metric,
)
```
The problem is: during training I get eval_accuracy: 0.82; however, after the training, I tested the model separately, and the measured accuracy was only 0.74.
The code for the test is as follows:
```python
def tokenize(x):
    return tokenizer([x[0]], [x[1]], truncation=True, padding="max_length", max_length=512)

answer_model = []  # record the answers from the model
model.eval()
for i in range(len(data_text)):
    encoded_input = tokenize(data_text[i])
    encoded_input["input_ids"] = torch.tensor(encoded_input["input_ids"])
    encoded_input["token_type_ids"] = torch.tensor(encoded_input["token_type_ids"])
    encoded_input["attention_mask"] = torch.tensor(encoded_input["attention_mask"])
    output = model(**encoded_input)
    print(i)
    print(output)  # e.g. logits [-2.9663, 2.3750]
    if output["logits"][0][0] < output["logits"][0][1]:
        answer_model.append(1)
    else:
        answer_model.append(0)

acc = accuracy_score(data_label, answer_model)
print("accuracy", acc)  # only 0.74
```
I have carefully checked the code for two days and retrained the model, but the results still differ. I don't know where the problem might be. I hope to get your help!
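One way to rule out preprocessing and inference-mode differences (a sketch, assuming dataset_test is the same tokenized dataset used during training) is to score the test set through the Trainer itself:

```python
# Evaluating through the Trainer makes tokenization, batching, and eval
# mode exactly match what produced eval_accuracy during training.
predictions = trainer.predict(dataset_test)
print(predictions.metrics)
```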
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16677/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16676
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16676/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16676/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16676/events
|
https://github.com/huggingface/transformers/issues/16676
| 1,197,978,892
|
I_kwDOCUB6oc5HZ7UM
| 16,676
|
Adding new tokens to RobertaTokenizer gives very strange results - probably a bug.
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"For additional context, the non-fast `RobertaTokenizer` returns the following results.\r\n\r\n```python\r\n>>> print(tokenizer.tokenize(\"Apples are too_big for turtles.\"))\r\n['App', 'les', 'Ġare', 'Ġtoo', '_', 'big', 'Ġfor', 'Ġturtles', '.']\r\n>>> tokenizer.add_tokens([\"too_big\",\"turtles\"])\r\n>>> print(tokenizer.tokenize(\"Apples are too_big for turtles.\"))\r\n['App', 'les', 'Ġare', 'too_big', 'for', 'turtles', '.']\r\n```\r\n\r\ncc @SaulLu",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @Oxi84! \r\n\r\nSorry for the late reply. As far as added tokens are concerned, you can set attributes when using a fast tokenizer (for the moment it won't work with a slow one).\r\n\r\nhttps://github.com/huggingface/transformers/blob/31616b8d613dcb7ac69b562d51b42d0db379f72f/src/transformers/tokenization_utils_base.py#L77-L91\r\n\r\nSo if the default behaviour doesn't suit you, you can for example change it to get the same behavior as the slow tokenizer:\r\n```python\r\nfrom transformers import RobertaTokenizerFast, AddedToken\r\nt = RobertaTokenizerFast.from_pretrained('roberta-base')\r\n\r\nt.add_tokens([AddedToken(\"too_big\", lstrip=True, rstrip=True), AddedToken(\"turtles\", lstrip=True, rstrip=True)])\r\nprint(t.convert_ids_to_tokens(t.encode(\"Apples are too_big for turtles.\")))\r\n# ['<s>', 'App', 'les', 'Ġare', 'too_big', 'for', 'turtles', '.', '</s>']\r\n```\r\n\r\nOr if you want to be close to your initial proposal, you'll have to add a space at the beginning of the added tokens:\r\n```python\r\nfrom transformers import RobertaTokenizerFast, AddedToken\r\nt = RobertaTokenizerFast.from_pretrained('roberta-base')\r\n\r\nt.add_tokens([\" too_big\", \" turtles\", \"too_big\",\"turtles\"])\r\nprint(t.convert_ids_to_tokens(t.encode(\"Apples are too_big for turtles.\")))\r\n# ['<s>', 'App', 'les', 'Ġare', ' too_big', 'Ġfor', ' turtles', '.', '</s>']\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,649
| 1,654
| 1,654
|
NONE
| null |
I am trying to add new tokens to the Roberta tokenizer, but the results are rather strange.
```python
from transformers import RobertaTokenizerFast

t = RobertaTokenizerFast.from_pretrained('roberta-base')
print(t.tokenize("Apples are too_big for turtles."))
t.add_tokens(["too_big", "turtles"])
print(t.tokenize("Apples are too_big for turtles."))
```
The result before adding the tokens is:
```
['App', 'les', 'Ġare', 'Ġtoo', '_', 'big', 'Ġfor', 'Ġturtles', '.']
```
and this is OK, but after I add the new tokens the result is:
```
['App', 'les', 'Ġare', 'Ġ', 'too_big', 'Ġfor', 'Ġ', 'turtles', '.']
```
when it should be:
```
['App', 'les', 'Ġare', 'Ġtoo_big', 'Ġfor', 'Ġturtles', '.']
```
Is this a bug? Normally I doubt the tokenizer ever produces Ġ alone, without anything else, as a token.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16676/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16675
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16675/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16675/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16675/events
|
https://github.com/huggingface/transformers/pull/16675
| 1,197,695,518
|
PR_kwDOCUB6oc415B_Q
| 16,675
|
[Doctest] added doctest changes for electra
|
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, I also got an error for `ElectraForPreTraining`.\r\n\r\n~~I am still checking it.~~\r\n\r\nRegarding `ElectraForPreTraining`, could you update that part with\r\n\r\n```\r\n>>> from transformers import ElectraForPreTraining, ElectraTokenizerFast\r\n>>> import torch\r\n\r\n>>> discriminator = ElectraForPreTraining.from_pretrained(\"google/electra-base-discriminator\")\r\n>>> tokenizer = ElectraTokenizerFast.from_pretrained(\"google/electra-base-discriminator\")\r\n\r\n>>> sentence = \"The quick brown fox jumps over the lazy dog\"\r\n>>> fake_sentence = \"The quick brown fox fake over the lazy dog\"\r\n\r\n>>> fake_tokens = tokenizer.tokenize(fake_sentence, add_special_tokens=True)\r\n>>> fake_inputs = tokenizer.encode(fake_sentence, return_tensors=\"pt\")\r\n>>> discriminator_outputs = discriminator(fake_inputs)\r\n>>> predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)\r\n\r\n>>> fake_tokens\r\n['[CLS]', 'the', 'quick', 'brown', 'fox', 'fake', 'over', 'the', 'lazy', 'dog', '[SEP]']\r\n\r\n>>> predictions.squeeze().tolist()\r\n[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]\r\n```\r\n\r\n`google/electra-small-discriminator` gives all `0` - not a very inspiring example :-)",
"sure",
"Hi @bhadreshpsavani , please ping me once you think it's ready for the next review 🙏 Thank you!",
"Hi @ydshieh,\r\nIts ready, I have made the suggested changes and few more to fix the doctests",
"@patrickvonplaten Regarding `p a r i s`, check the fix in this PR\r\n\r\nhttps://github.com/huggingface/transformers/pull/16698\r\n\r\nI will help @bhadreshpsavani to finalize this PR regarding this part :)",
"Hello @ydshieh,\r\nSince this fix for that issue is merged shall I pull the changes and make the required changes from `p a r i s` to `paris` ?",
"Looks good to me, tested locally and indeed rebasing on https://github.com/huggingface/transformers/pull/16698 solves the issue.",
"> Hello @ydshieh, Since this fix for that issue is merged shall I pull the changes and make the required changes from `p a r i s` to `paris` ?\r\n\r\nWould be great if you can pull, rebase, and change the expected value to `paris` (after a verification) 😄 . Thanks",
"Hi @ydshieh,\r\nI have updated the changes",
"Super nice PR! Thank you, @bhadreshpsavani , especially for the patience 💯 !\r\nMerged (tested locally again -> all pass)",
"Hi @ydshieh,\nThank you for all the help and guidance!\nShall I create PR for other model as well?"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds Doctest fo Electra Pytorch
Issue: #16292
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ydshieh @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16675/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16675",
"html_url": "https://github.com/huggingface/transformers/pull/16675",
"diff_url": "https://github.com/huggingface/transformers/pull/16675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16675.patch",
"merged_at": 1649882340000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16674
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16674/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16674/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16674/events
|
https://github.com/huggingface/transformers/pull/16674
| 1,197,578,454
|
PR_kwDOCUB6oc414wCq
| 16,674
|
[Trainer] tf32 arg doc
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
As discussed at https://github.com/huggingface/transformers/issues/16588#issuecomment-1093056995, this expands the TF32 Trainer arg doc to define the default value and point to where more information can be found.
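For context, a minimal sketch of the argument being documented (`tf32` is a real `TrainingArguments` option on Ampere+ GPUs; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Enable TF32 matmuls on NVIDIA Ampere or newer hardware; under the hood
# this toggles torch.backends.cuda.matmul.allow_tf32.
args = TrainingArguments(output_dir="out", tf32=True)
```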
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16674/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16674",
"html_url": "https://github.com/huggingface/transformers/pull/16674",
"diff_url": "https://github.com/huggingface/transformers/pull/16674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16674.patch",
"merged_at": 1649446539000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16673
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16673/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16673/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16673/events
|
https://github.com/huggingface/transformers/pull/16673
| 1,197,563,448
|
PR_kwDOCUB6oc414sz_
| 16,673
|
Load finetuned state dict without loading pretrained weights
|
{
"login": "laurahanu",
"id": 32672979,
"node_id": "MDQ6VXNlcjMyNjcyOTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/32672979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laurahanu",
"html_url": "https://github.com/laurahanu",
"followers_url": "https://api.github.com/users/laurahanu/followers",
"following_url": "https://api.github.com/users/laurahanu/following{/other_user}",
"gists_url": "https://api.github.com/users/laurahanu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laurahanu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laurahanu/subscriptions",
"organizations_url": "https://api.github.com/users/laurahanu/orgs",
"repos_url": "https://api.github.com/users/laurahanu/repos",
"events_url": "https://api.github.com/users/laurahanu/events{/privacy}",
"received_events_url": "https://api.github.com/users/laurahanu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #16672
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16673/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/16673",
"html_url": "https://github.com/huggingface/transformers/pull/16673",
"diff_url": "https://github.com/huggingface/transformers/pull/16673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/16673.patch",
"merged_at": 1649439725000
}
|
https://api.github.com/repos/huggingface/transformers/issues/16672
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16672/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16672/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16672/events
|
https://github.com/huggingface/transformers/issues/16672
| 1,197,538,612
|
I_kwDOCUB6oc5HYP00
| 16,672
|
Can't load a local finetuned state dict anymore without loading the official pretrained weights first
|
{
"login": "laurahanu",
"id": 32672979,
"node_id": "MDQ6VXNlcjMyNjcyOTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/32672979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laurahanu",
"html_url": "https://github.com/laurahanu",
"followers_url": "https://api.github.com/users/laurahanu/followers",
"following_url": "https://api.github.com/users/laurahanu/following{/other_user}",
"gists_url": "https://api.github.com/users/laurahanu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laurahanu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laurahanu/subscriptions",
"organizations_url": "https://api.github.com/users/laurahanu/orgs",
"repos_url": "https://api.github.com/users/laurahanu/repos",
"events_url": "https://api.github.com/users/laurahanu/events{/privacy}",
"received_events_url": "https://api.github.com/users/laurahanu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Ubuntu & Mac
- Python version: 3.9.7
### Who can help
@sgugger
## Information
Issue first reported [here](https://github.com/unitaryai/detoxify/issues/48)
Model I am using (Bert, XLNet ...): Bert, Roberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
The code below worked before version 4.18.0.
1. Cannot load a finetuned state dict (can be downloaded from [here](https://github.com/unitaryai/detoxify/releases/download/v0.3-alpha/toxic_debiased-c7548aa0.ckpt)) without loading the official pretrained HF weights; this previously worked by passing `pretrained_model_name_or_path=None`:
```python
model = RobertaForSequenceClassification.from_pretrained(
    pretrained_model_name_or_path=None,
    config="roberta-base",
    num_labels=16,
    state_dict=state_dict,
)
```
Stack trace:
```
Exception has occurred: TypeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
expected str, bytes or os.PathLike object, not NoneType
File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 308, in _check_seekable
f.seek(f.tell())
During handling of the above exception, another exception occurred:
File "[/detoxify/toxic-env/lib/python3.9/site-packages/transformers/modeling_utils.py]()", line 349, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 594, in load
with _open_file_like(f, 'rb') as opened_file:
File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 235, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 220, in __init__
_check_seekable(buffer)
File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 311, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "[/detoxify/toxic-env/lib/python3.9/site-packages/torch/serialization.py]()", line 304, in raise_err_msg
raise type(e)(msg)
```
## Expected behavior
This seems to only be an issue since #16343 was introduced and seems to be related to this [change](https://github.com/huggingface/transformers/pull/16343/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaL1444-R1796)
(L1444-R1796)
What would solve this would be to have `if not is_sharded and state_dict is None:` on L1797.
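In the meantime, a sketch of a workaround (not the fix proposed above): instantiate the architecture from its config and load the state dict manually:

```python
from transformers import AutoConfig, RobertaForSequenceClassification

# Build the model from the config alone, then load the finetuned weights
# directly, bypassing from_pretrained's checkpoint resolution entirely.
config = AutoConfig.from_pretrained("roberta-base", num_labels=16)
model = RobertaForSequenceClassification(config)
model.load_state_dict(state_dict)  # state_dict obtained earlier via torch.load
```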
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16672/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16671
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16671/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16671/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16671/events
|
https://github.com/huggingface/transformers/issues/16671
| 1,197,517,480
|
I_kwDOCUB6oc5HYKqo
| 16,671
|
ASR Pipeline: End of transcripts missing when chunking enabled
|
{
"login": "nkaenzig-aifund",
"id": 93617195,
"node_id": "U_kgDOBZR8Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/93617195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nkaenzig-aifund",
"html_url": "https://github.com/nkaenzig-aifund",
"followers_url": "https://api.github.com/users/nkaenzig-aifund/followers",
"following_url": "https://api.github.com/users/nkaenzig-aifund/following{/other_user}",
"gists_url": "https://api.github.com/users/nkaenzig-aifund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nkaenzig-aifund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nkaenzig-aifund/subscriptions",
"organizations_url": "https://api.github.com/users/nkaenzig-aifund/orgs",
"repos_url": "https://api.github.com/users/nkaenzig-aifund/repos",
"events_url": "https://api.github.com/users/nkaenzig-aifund/events{/privacy}",
"received_events_url": "https://api.github.com/users/nkaenzig-aifund/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @nkaenzig-aifund,\r\n\r\nThanks a lot for the reproducible bug report! I'm looking into it and will try to submit a fix today. cc @Narsil \r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(model='facebook/wav2vec2-large-960h-lv60-self')\r\n\r\nresult_1 = pipe('47.wav', chunk_length_s=None)\r\n\r\nresult_2 = pipe('47.wav', chunk_length_s=10)\r\n\r\nprint(\"Correct\", result_1)\r\nprint(\"Wrong\", result_2)\r\n```\r\n\r\nwith:\r\n\r\n!wget https://public-a2d129863a16ad26b0deda49d22c64b8.s3.us-west-2.amazonaws.com/47.wav"
] | 1,649
| 1,649
| 1,649
|
NONE
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes & no
- Using distributed or parallel set-up in script?: no
### Who can help
@Narsil, @patrickvonplaten, @jonatasgrosman, @anton-l
## Information
If you use the chunking feature implemented in the ASR pipeline, in some cases it cuts off the end of the audio transcript.
Model I am using:
`facebook/wav2vec2-large-960h-lv60-self`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
The issue can be reproduced using the following colab notebook:
https://colab.research.google.com/drive/1SSWr-2X2nnbKLa5dUSEm_Y_WFTju2-fN?usp=sharing
## Expected behavior
`chunk_length_s=None` and `chunk_length_s=10` should yield the same (or similar) results, without cutting off the end of the transcript.
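For reference, a minimal sketch of chunked inference with explicit strides (values are illustrative; `stride_length_s` sets the overlap used to stitch chunk boundaries):

```python
from transformers import pipeline

pipe = pipeline(model="facebook/wav2vec2-large-960h-lv60-self")
# 10 s windows with 4 s left / 2 s right stride, so tokens near chunk
# edges can be reconciled across neighbouring windows.
result = pipe("47.wav", chunk_length_s=10, stride_length_s=(4, 2))
print(result["text"])
```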
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16671/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16671/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/16670
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/16670/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/16670/comments
|
https://api.github.com/repos/huggingface/transformers/issues/16670/events
|
https://github.com/huggingface/transformers/issues/16670
| 1,197,457,920
|
I_kwDOCUB6oc5HX8IA
| 16,670
|
Bug in Marian model (or tokenizer) in transformers==4.18.0
|
{
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Good catch! Fix is here #16700",
"Thank you!"
] | 1,649
| 1,649
| 1,649
|
CONTRIBUTOR
| null |
## Environment info
- `transformers` version: 4.18.0
- Platform: Google Colab / Linux & Conda
- Python version: 3.7.13
- PyTorch version (GPU?): 1.10.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): Marian
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Oscar & ALT - Standard MT task
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Extend the tokenizer using a target-language one
2. Add tokens
3. Run a forward pass with the model in training mode.
4. Script and error reported here: https://colab.research.google.com/drive/1utS-L1iO1paiwKKPNqVHW5ARvprfRgG2?usp=sharing
**Traceback below:**
```
[/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1452 if labels is not None:
1453 loss_fct = CrossEntropyLoss()
-> 1454 masked_lm_loss = loss_fct(lm_logits.view(-1, self.target_vocab_size), labels.view(-1))
1455
1456 if not return_dict:
RuntimeError: shape '[-1, 65001]' is invalid for input of size 8320768
```
## Expected behavior
Standard Marian training output. No issue with `transformers==4.17.0`
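For anyone hitting this, a sketch of the resize step involved (the model id is illustrative; whether this alone avoids the shape mismatch in 4.18.0 is exactly what this issue questions):

```python
from transformers import MarianMTModel, MarianTokenizer

model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# Hypothetical new tokens for illustration.
tokenizer.add_tokens(["<special_1>", "<special_2>"])
# Resize the embeddings to match; the loss in 4.18.0 is computed against
# self.target_vocab_size, which may not track this resize (the reported bug).
model.resize_token_embeddings(len(tokenizer))
```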
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/16670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/16670/timeline
|
completed
| null | null |