| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/20785
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20785/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20785/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20785/events
|
https://github.com/huggingface/transformers/pull/20785
| 1,498,808,138
|
PR_kwDOCUB6oc5FkOM-
| 20,785
|
Add test_image_processing_common.py
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh @sgugger Yes, sorry, I could have been clearer above. \r\n\r\nThe reason for not replacing `FeatureExtractionSavingTestMixin` with `ImageProcessingSavingTestMixin` in the `test_image_processing_xxx.py` files in this PR is that the tests in the original mixin use class attributes like `self.feature_extraction_class`. When updating the mixin, then attributes in the testing class `XxxImageProcessingTest` for each test file `test_image_processing_xxx.py` have to be updated. This results in either 1) updating just the FE references to have the tests running or 2) updating all of the FE references. In the case of 1) it results in the code being mixed between feature extractors and image processors which I found confusing to read (subjective opinion) and for 2) it introduced hundreds of lines of additional diff. I decided to leave the switch to a follow up PR. \r\n\r\n",
"> @ydshieh @sgugger Yes, sorry, I could have been clearer above.\r\n> \r\n> The reason for not replacing `FeatureExtractionSavingTestMixin` with `ImageProcessingSavingTestMixin` in the `test_image_processing_xxx.py` files in this PR is that the tests in the original mixin use class attributes like `self.feature_extraction_class`. When updating the mixin, then attributes in the testing class `XxxImageProcessingTest` for each test file `test_image_processing_xxx.py` have to be updated. This results in either 1) updating just the FE references to have the tests running or 2) updating all of the FE references. In the case of 1) it results in the code being mixed between feature extractors and image processors which I found confusing to read (subjective opinion) and for 2) it introduced hundreds of lines of additional diff. I decided to leave the switch to a follow up PR.\r\n\r\nUnderstand! Thank you for explaining!\r\nBTW, it would be super nice to give a link like https://github.com/huggingface/transformers/blob/997cbebf3483d500de8f85cc8834b704f6b410be/tests/models/beit/test_image_processing_beit.py#L110\r\n(I agree it is a bit more time consuming :-) )"
] | 1,671
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Creates an equivalent of `test_feature_extraction_common.py` for image processors: `test_image_processing_common.py`, and moves any vision- and image-processing-specific logic to this file.
This is necessary for creating `ImageProcessingSavingTestMixin` in order to rename any feature extractor references in the image processor tests. This is left for a [future PR](https://github.com/huggingface/transformers/pull/20768).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20785/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20785",
"html_url": "https://github.com/huggingface/transformers/pull/20785",
"diff_url": "https://github.com/huggingface/transformers/pull/20785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20785.patch",
"merged_at": 1674481711000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20784
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20784/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20784/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20784/events
|
https://github.com/huggingface/transformers/pull/20784
| 1,498,734,854
|
PR_kwDOCUB6oc5Fj-PI
| 20,784
|
Move convert_to_rgb to image_transforms module
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Moves the `convert_to_rgb` function to `image_transforms` so all image processors can easily import it.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20784/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20784",
"html_url": "https://github.com/huggingface/transformers/pull/20784",
"diff_url": "https://github.com/huggingface/transformers/pull/20784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20784.patch",
"merged_at": 1671130024000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20783
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20783/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20783/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20783/events
|
https://github.com/huggingface/transformers/issues/20783
| 1,498,701,927
|
I_kwDOCUB6oc5ZVGBn
| 20,783
|
[Pipeline-asr] `batch_size` ignored when using a single file
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,675
| 1,675
|
COLLABORATOR
| null |
This is not a very important issue, but when using the `asr` pipeline, if you give a single audio file that you want to chunk and provide both a `batch_size` and `chunk_length_s`, the pipeline still runs sequentially.
Reproducing script:
```python
from transformers import pipeline
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ja", split="test", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
input_speech = next(iter(ds))["audio"]["array"]
pipe = pipeline("automatic-speech-recognition","facebook/wav2vec2-base-960h")
pipe(input_speech, return_timestamps ="char", chunk_length_s = 30, stride_length_s=[3,3], batch_size = 1024, device = 0)
```
(thanks @Narsil for the hack with `pipe([input_speech], return_timestamps ="char", chunk_length_s = 30, stride_length_s=[3,3], batch_size = 1024, device = 0)`)
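The workaround above relies on the pipeline's chunking: a long signal is split into overlapping windows that can then be forwarded as one batch. A minimal, hypothetical illustration of that windowing in pure Python — `chunk_with_stride` is not a transformers API, and the real pipeline measures chunks and strides in seconds rather than samples:

```python
def chunk_with_stride(signal, chunk_len, stride):
    """Split `signal` into windows of `chunk_len` samples overlapping by `stride` samples."""
    step = chunk_len - stride
    return [signal[i:i + chunk_len] for i in range(0, max(len(signal) - stride, 1), step)]

# Ten samples, windows of 4 overlapping by 2 -- the 4 windows below could be
# batched through the model together instead of being processed sequentially.
signal = list(range(10))
chunks = chunk_with_stride(signal, chunk_len=4, stride=2)
print(chunks)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```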
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20783/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20782
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20782/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20782/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20782/events
|
https://github.com/huggingface/transformers/issues/20782
| 1,498,692,413
|
I_kwDOCUB6oc5ZVDs9
| 20,782
|
Do not Preload a Deep Learning Framework
|
{
"login": "daskol",
"id": 9336514,
"node_id": "MDQ6VXNlcjkzMzY1MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daskol",
"html_url": "https://github.com/daskol",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
"organizations_url": "https://api.github.com/users/daskol/orgs",
"repos_url": "https://api.github.com/users/daskol/repos",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"received_events_url": "https://api.github.com/users/daskol/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Those environment variables exist, but Transformers also only imports the frameworks as needed; they are named `USE_TF`, `USE_TORCH` and `USE_JAX`.",
"Many thanks!\r\n\r\nAre they described somewhere in the documentation? I didn't manage to find them. Would it be possible to add an `HF_` prefix, to be consistent with `HF_HOME`, for example?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
NONE
| null |
### Feature request
I need some toggle (say, an environment variable `HF_FRAMEWORK` or a configuration option `transformers.config.framework`) or runtime check on importing `transformers` so that `tensorflow`, `torch`, and `flax` are not all imported simultaneously.
### Motivation
At the moment the `transformers` package imports all deep learning packages even if they are not used. In other words, if both `tensorflow` and `torch` are installed, then both packages will be imported even though only `torch` is actually used.
### Your contribution
I'll look into it, but I am not sure that it is easy to remove the explicit imports of the DL frameworks when `transformers` is imported.
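As the comments above note, framework preloading can already be controlled with the `USE_TF`, `USE_TORCH`, and `USE_JAX` environment variables. A minimal sketch, assuming `"1"`/`"0"` are accepted truthy/falsy values — the variables must be set before `transformers` is imported:

```python
import os

# Must be set BEFORE `import transformers` -- the library reads them at import time.
os.environ["USE_TORCH"] = "1"  # keep the PyTorch backend
os.environ["USE_TF"] = "0"     # skip TensorFlow
os.environ["USE_JAX"] = "0"    # skip Flax/JAX

# import transformers  # would now avoid pulling in TensorFlow and Flax
```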
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20782/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20781
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20781/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20781/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20781/events
|
https://github.com/huggingface/transformers/issues/20781
| 1,498,561,637
|
I_kwDOCUB6oc5ZUjxl
| 20,781
|
[Tokenizers] Mismatch between fast and slow
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting.\r\n\r\n- `tokenizers` CAN do warnings (although I was under the impression there was an effort to reduce warnings.).\r\n- In my personal opinion, raising an Exception is better when OOV than silently ignoring. That being said, it would be a massive breaking change, so I'm hesitant to \"fix\" that way.",
"Yes, an exception should be raised and it's more of a bug fix than a breaking change IMO. Users will be surprised, but they should be surprised when there is an out-of-vocab index.",
"@Narsil @sgugger I can take this up. I will raise an exception in PreTrainedTokenizer when an OOV is encountered.",
"(This might break the tokenizer used by Whisper, as all of the timestamp tokens are not `in` the vocabulary, but are still used and need to be decoded as `''`)",
"Couldn't we put them in the vocab as `Timestamp <n>` (being the seconds offset ?) (Even as special tokens ?)",
"We can, but we would also have to help the OpenAI team with their tokenizer, which is based on our GPT2TokenizerFast 😅",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
COLLABORATOR
| null |
When I worked on the implementation of Whisper, I realised that two different behaviors appear depending on whether you use a `fast` or `slow` tokenizer and hit an OOV token.
Simple snippet:
```python
>>> from transformers import GPT2Tokenizer, GPT2TokenizerFast
>>> fast = GPT2TokenizerFast.from_pretrained("gpt2")
>>> slow = GPT2Tokenizer.from_pretrained("gpt2")
>>> # the vocab size is 50257
>>> fast.decode(50258)
''
```
```python
>>> slow.decode(50258)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/arthur_huggingface_co/transformers/src/transformers/tokenization_utils_base.py", line 3468, in decode
return self._decode(
File "/home/arthur_huggingface_co/transformers/src/transformers/tokenization_utils.py", line 938, in _decode
for token in filtered_tokens:
TypeError: 'NoneType' object is not iterable
```
My question, I guess, is: which one is the expected behavior?
Here is my take:
- It should work, but output a warning saying that an OOV was encountered and was ignored.
WDYT @sgugger @LysandreJik @Narsil
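A hypothetical sketch of that warn-and-skip behaviour in pure Python — `safe_decode` and the plain-dict vocabulary are illustrative stand-ins, not transformers APIs:

```python
import warnings

def safe_decode(ids, id_to_token):
    """Decode known token ids; warn about and skip out-of-vocab ids instead of crashing."""
    pieces = []
    for i in ids:
        token = id_to_token.get(i)
        if token is None:
            warnings.warn(f"Token id {i} is out of vocabulary; it was ignored.")
            continue
        pieces.append(token)
    return "".join(pieces)

vocab = {0: "Hello", 1: ",", 2: " world"}  # toy vocab; id 99 is out of vocabulary
print(safe_decode([0, 1, 2, 99], vocab))  # → "Hello, world" (plus a warning for id 99)
```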
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20781/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20781/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20780
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20780/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20780/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20780/events
|
https://github.com/huggingface/transformers/pull/20780
| 1,498,425,370
|
PR_kwDOCUB6oc5Fi7CA
| 20,780
|
Vilt - use image_transforms pad
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
When ViLT was first implemented, the `image_transforms` library didn't have `pad` implemented. This PR removes the old pad implementation in `image_processing_vilt.py` and uses the standard library function instead.
It also adds some missing `# Copied from` statements that apply to other module-level functionality in `image_processing_detr.py`, spotted when comparing `pad` between the models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20780/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20780",
"html_url": "https://github.com/huggingface/transformers/pull/20780",
"diff_url": "https://github.com/huggingface/transformers/pull/20780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20780.patch",
"merged_at": 1671450187000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20779
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20779/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20779/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20779/events
|
https://github.com/huggingface/transformers/issues/20779
| 1,498,400,160
|
I_kwDOCUB6oc5ZT8Wg
| 20,779
|
How to reset dataloader or global_step for continued training?
|
{
"login": "ajesujoba",
"id": 12751379,
"node_id": "MDQ6VXNlcjEyNzUxMzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/12751379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajesujoba",
"html_url": "https://github.com/ajesujoba",
"followers_url": "https://api.github.com/users/ajesujoba/followers",
"following_url": "https://api.github.com/users/ajesujoba/following{/other_user}",
"gists_url": "https://api.github.com/users/ajesujoba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajesujoba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajesujoba/subscriptions",
"organizations_url": "https://api.github.com/users/ajesujoba/orgs",
"repos_url": "https://api.github.com/users/ajesujoba/repos",
"events_url": "https://api.github.com/users/ajesujoba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajesujoba/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"if the continued training stops on `B` due to GPU limit, how do I ensure that when I try to continue training on `B`, it has the right data index to start from?\r\n",
"Please use the [forums](https://discuss.huggingface.co/) for questions like this, as we keep issues for bugs and feature requests only. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
NONE
| null |
I have finetuned an MT5-base model for a machine translation task on corpus `A`, and I want to swap the dataset and continue training on corpus `B` (this is not finetuning but a form of continued training), forcing the dataloader to start afresh instead of continuing from the point where it stopped previously on corpus `A`. Currently, to continue training, I use `resume_from_checkpoint` to provide the checkpoint of the MT model for corpus `A` and provide the path to the new data.
Is there an argument or parameter to reset the data loader? I noticed there is an `ignore_data_skip` parameter, but it does not solve the issue.
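One way to picture the difference: resuming from a checkpoint restores both the weights and the training progress (global step, position in the dataset), whereas a warm start keeps only the weights and resets progress. A toy sketch — the dict layout and helper names are illustrative, not Trainer internals:

```python
# A fake checkpoint: weights plus progress counters accumulated on corpus A.
checkpoint = {
    "weights": {"w": 0.5},
    "global_step": 12_000,
    "epoch": 3,
}

def resume_full(ckpt):
    # `resume_from_checkpoint`-style behaviour: weights AND progress come back,
    # so the dataloader skips ahead to where corpus A training stopped.
    return ckpt["weights"], ckpt["global_step"]

def warm_start(ckpt):
    # Continued training on corpus B: keep the weights, reset progress to zero,
    # so the new dataset is consumed from the beginning.
    return ckpt["weights"], 0

weights, step = warm_start(checkpoint)
print(step)  # → 0
```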
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20779/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20778
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20778/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20778/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20778/events
|
https://github.com/huggingface/transformers/pull/20778
| 1,498,274,285
|
PR_kwDOCUB6oc5FiZi1
| 20,778
|
[Pipeline] fix failing bloom `pipeline` test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20778). All of your documentation changes will be reflected on that endpoint."
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the `pipeline` test: `tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_small_model_pt_bloom_accelerate`
- Link to failing job: https://github.com/huggingface/transformers/actions/runs/3691174891/jobs/6248989365
- Why is this fix relevant? Before https://github.com/huggingface/transformers/pull/20602 there was an inconsistency between models loaded with `accelerate` (`device_map` set) and without `accelerate` (no `device_map`). Before the aforementioned PR, if you loaded a model as follows:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom", device_map="auto")
print(model.lm_head.weight.dtype)
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom")
print(model.lm_head.weight.dtype)
```
You get:
```
torch.bfloat16
torch.float32
```
This is inconsistent. Since that PR, to load a model in its native dtype you need to provide `torch_dtype="auto"`.
This PR fixes the failing test by setting `torch.float32` for the expected `dtype`.
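The dtype-selection rule described above can be sketched as a tiny standalone helper. Note this is an illustrative sketch only — `resolve_dtype` is a hypothetical name, not the actual `from_pretrained` logic:

```python
# Illustrative sketch of the dtype-selection rule discussed above.
# `resolve_dtype` is a hypothetical helper, not transformers code.

def resolve_dtype(checkpoint_dtype: str, torch_dtype=None) -> str:
    """Pick the dtype a model is loaded in.

    checkpoint_dtype: the dtype stored in the checkpoint (e.g. "bfloat16").
    torch_dtype: the `torch_dtype` argument: None, "auto", or an explicit dtype.
    """
    if torch_dtype is None:
        # Default: ignore the checkpoint's native dtype, load in float32.
        return "float32"
    if torch_dtype == "auto":
        # Honor the dtype the checkpoint was saved in.
        return checkpoint_dtype
    # An explicit dtype always wins.
    return torch_dtype

# With no torch_dtype, a bfloat16 checkpoint still loads as float32:
assert resolve_dtype("bfloat16") == "float32"
# Only torch_dtype="auto" keeps the native dtype:
assert resolve_dtype("bfloat16", "auto") == "bfloat16"
```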
cc @ydshieh @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20778/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20778",
"html_url": "https://github.com/huggingface/transformers/pull/20778",
"diff_url": "https://github.com/huggingface/transformers/pull/20778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20778.patch",
"merged_at": 1671126360000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20777
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20777/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20777/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20777/events
|
https://github.com/huggingface/transformers/pull/20777
| 1,498,203,196
|
PR_kwDOCUB6oc5FiKD4
| 20,777
|
Install video dependency for pipeline CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
PR #20151 added the video classification pipeline, which requires the video dependency `decord`. This dependency is not currently installed in the CI image used for the Pipeline CI, so those tests fail.
This PR adds the dependency (same as in CircleCI).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20777/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20777/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20777",
"html_url": "https://github.com/huggingface/transformers/pull/20777",
"diff_url": "https://github.com/huggingface/transformers/pull/20777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20777.patch",
"merged_at": 1671126425000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20776
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20776/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20776/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20776/events
|
https://github.com/huggingface/transformers/pull/20776
| 1,498,194,816
|
PR_kwDOCUB6oc5FiIPI
| 20,776
|
Fixing object detection with layoutlm.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20776). All of your documentation changes will be reflected on that endpoint."
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the slow test by making sure we're loading the FeatureExtractor.
`LayoutLM` doesn't have a `FeatureExtractor` while `LayoutLMV2` does, and this repo uses a combination of both.
Putting `LayoutLM` in the MULTI_MODAL config enables the pipeline to load `feature_extractor` regardless of `FEATURE_EXTRACTION_MAPPING`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20776/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20776",
"html_url": "https://github.com/huggingface/transformers/pull/20776",
"diff_url": "https://github.com/huggingface/transformers/pull/20776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20776.patch",
"merged_at": 1671126404000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20775
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20775/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20775/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20775/events
|
https://github.com/huggingface/transformers/pull/20775
| 1,497,739,031
|
PR_kwDOCUB6oc5FgljJ
| 20,775
|
Add BridgeTower model
|
{
"login": "abhiwand",
"id": 12353176,
"node_id": "MDQ6VXNlcjEyMzUzMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12353176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhiwand",
"html_url": "https://github.com/abhiwand",
"followers_url": "https://api.github.com/users/abhiwand/followers",
"following_url": "https://api.github.com/users/abhiwand/following{/other_user}",
"gists_url": "https://api.github.com/users/abhiwand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhiwand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhiwand/subscriptions",
"organizations_url": "https://api.github.com/users/abhiwand/orgs",
"repos_url": "https://api.github.com/users/abhiwand/repos",
"events_url": "https://api.github.com/users/abhiwand/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhiwand/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for the model here https://moon-ci-docs.huggingface.co/docs/transformers/pr_20775/en/model_doc/bridgetower\r\nshow up under Text Models. This needs to be under **Multimodal Models**. Can someone please assist?",
"Thanks a lot for your review @younesbelkada \r\nWe have addressed your comments in our latest commits. We have cleaned up the code based on your feedback!\r\nWe do plan to upload some more to the hub once this PR is merged successfully. \r\n\r\nPlease do let us know if you have any more suggestions and feedback.\r\n\r\nAny help on the failing style checks and tests will be really appreciated.\r\n\r\n> Thanks so much for adding this new model into `transformers` and introducing another multimodal model on the ecosystem! The PR is in a very good shape! Very strong efforts on the integration side, we should be close merging this once the main comments will be addressed. I left a couple of comments, my main comments being code readability / structure comments:\r\n> \r\n> * Feature extractors are deprecated and should be replaced by Image Processors only. This would require a very minimal change. Check what is done in BLIP for instance: #20716\r\n> * From what I have understood only 2 models are uploaded on the Hub, therefore I think that the model initialization and forward functions of `FusionHead` and `LinkTower` can be much more simplified\r\n> * I understand that some layers needs to be freezed. You can wrap the freezing procedure in a class method, and call it only if the module is in a training model (i.e., if `self.training = True` )\r\n> * Please avoid using variable names such as `x`, `x1`, or `x1_` as it makes harder to understand what the variable is meant to be. Consider calling these variables `hidden_states`, or any. Same comments for variables such as `image_embeds` and `image_embeds_`.\r\n> * I think that there is no need to assert that the tokenizer in a Roberta tokenizer. 
Tokenization auto does it magically for you\r\n> * Let's wrap all the weights initialization methods in the method `_init_weights` and call `self.post_init()` at the end of the init method for each module that inherits from `BridgeTowerPreTrainedModel`.\r\n> * Regarding your question about documentation, you should add it together with CLIP, [here](https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/docs/source/en/_toctree.yml#L498) in the multi modal models section.\r\n> Again thank you very much!\r\n\r\n",
"@abhiwand thanks a lot for your PR! Are you also planning to add classes for the downstream tasks (like VQA)?",
"> @abhiwand thanks a lot for your PR! Are you also planning to add classes for the downstream tasks (like VQA)?\r\n\r\n@NielsRogge We are hoping to add VQA/other downstream tasks in the coming months and also release some more models to the model-hub. In this PR however, we won't be doing so.\r\nWe have addressed most of the review feedback. Could you please help merge this PR if it looks good to you?",
"@amyeroberts @NielsRogge @younesbelkada I think we've handled almost all your comments and simplified and streamlined the code significantly wherever possible.\r\n\r\nIf you'll approve, can you'll please merge this PR. You are welcome to make changes too. Thank you very much for your valuable feedback!",
"> @amyeroberts @NielsRogge @younesbelkada I think we've handled almost all your comments and simplified and streamlined the code significantly wherever possible.\r\n> \r\n> If you'll approve, can you'll please merge this PR. You are welcome to make changes too. Thank you very much for your valuable feedback!\r\n\r\n@amyeroberts @NielsRogge @younesbelkada Could you please help merge this PR :) Thanks! Happy Holidays.",
"@NielsRogge Thanks a lot for your review! We have addressed your review feedback as much as possible. If it looks good to you, could you please help merge this PR?\r\nCould you also please merge the https://huggingface.co/datasets/huggingface/documentation-images/discussions/28 PR - @abhiwand moved it to the right folder.",
"Dear @NielsRogge and @sgugger,\r\nThanks a lot for your review! We have addressed your review feedback as much as possible. If it looks good to you, could you please help merge this PR.\r\nSincerely,",
"> Thanks for your work on adding this new model! There are still a few things to do before we can merge it:\r\n> \r\n> * the model type should not be used to make tests inside modeling code (see comments below)\r\n> * make sure all of the modules defined take the config for all arguments directly extracted from it\r\n> * make sure all of the modules defined are prefixed with `BridgeTower`\r\n> \r\n> I've added comments below.\r\n\r\n@sgugger Thanks a lot for your review! We have updated our code to reflect your suggestions. If it looks good to you can you please help merge the PR. Thanks again!",
"> Just added last comments around the `device` use: there is no need to add a `device` property to some of the modules introduced in this PR, you should rely on the device of other tensors.\r\n\r\nThanks @sgugger, we have addressed your feedback in the latest commit!",
"> Nice work!\r\n> \r\n> Left a few comments, mostly nits. Only major comments are about removing `pad` from the image processing file and allowing `eps` to be configurable for the layer norm layers. Otherwise looks good to go for me :)\r\n\r\nThanks for your suggestions @amyeroberts and approval. I have addressed your changes in the PR. Can you please help merge the PR?",
"@NielsRogge Our PR keeps failing at tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_return_timestamps_in_preprocess. Would you please help us see whether it is failing because of BridgeTower or because of something else? \r\nThanks a lot",
"@abhiwand @tileintel Thanks for addressing all of the comments! On Monday there were two PRs merged into main which added `test_image_processing_common.py` (#20785) and updated the feature extractor references in the `test_image_processing_xxx.py` files (#20768). Could you update `test_image_processing_bridgetower.py` to reflect these please? \r\n\r\n",
"@amyeroberts We have updated test_image_processing_bridgetower.py as you suggested. Thanks for the suggestion.\r\n@NielsRogge @amyeroberts @sgugger We have addressed all of the comments. Thanks a lot for helping us review and approve this. We are very much looking forward to having this PR merged into main soon. ",
"Thanks again for your contribution!",
"@sgugger Thank you for merging this PR. May I ask when the BridgeTower model will land in a HuggingFace production release, and which release that will be? \r\nThanks",
"The next release will be in roughly a month (given that the last release was yesterday).",
"Thanks @sgugger for letting us know."
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR implements a HuggingFace Transformers version of **BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning** from the paper https://arxiv.org/abs/2206.08657.pdf
This paper has been accepted to https://aaai.org/Conferences/AAAI-23/
The model's pre-trained checkpoints and configurations have been released here:
https://huggingface.co/BridgeTower under:
- https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm
- https://huggingface.co/BridgeTower/bridgetower-base
The following heads have been implemented:
- BridgeTowerForMaskedLM
- BridgeTowerForImageAndTextRetrieval
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@amyeroberts @NielsRogge @ArthurZucker could you please assist with review and feedback.
@philschmid
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20775/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20775/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20775",
"html_url": "https://github.com/huggingface/transformers/pull/20775",
"diff_url": "https://github.com/huggingface/transformers/pull/20775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20775.patch",
"merged_at": 1674673473000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20774
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20774/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20774/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20774/events
|
https://github.com/huggingface/transformers/pull/20774
| 1,497,679,701
|
PR_kwDOCUB6oc5FgZAE
| 20,774
|
Enable PyTorch/XLA Fully Sharded Data Parallel (FSDP) for a Specific Class of Transformer Models
|
{
"login": "AlexWertheim",
"id": 90242206,
"node_id": "MDQ6VXNlcjkwMjQyMjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/90242206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexWertheim",
"html_url": "https://github.com/AlexWertheim",
"followers_url": "https://api.github.com/users/AlexWertheim/followers",
"following_url": "https://api.github.com/users/AlexWertheim/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexWertheim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexWertheim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexWertheim/subscriptions",
"organizations_url": "https://api.github.com/users/AlexWertheim/orgs",
"repos_url": "https://api.github.com/users/AlexWertheim/repos",
"events_url": "https://api.github.com/users/AlexWertheim/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexWertheim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20774). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR enables the user to make use of the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp). Three arguments have been added to `training_args.py` to facilitate this functionality:
- `xla_fsdp`: this flag is a string containing the location of a `.json` file which specifies the FSDP arguments the user wants to use when wrapping their model.
- `xla_fsdp_nested`: this flag is a bool which determines whether each transformer block is also FSDP wrapped. Only models which expose their transformer blocks through the class attribute `transformer.h` can use this feature.
- `xla_fsdp_grad_ckpt`: this flag is a bool which determines whether gradient checkpointing is enabled for nested FSDP wrapped layers.
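As an illustration of the first flag, the `.json` file pointed to by `xla_fsdp` could look like the sketch below. The keys shown are example `torch_xla` FSDP wrapper arguments chosen for illustration — they are assumptions, not a schema defined by this PR:

```json
{
  "compute_dtype": "bfloat16",
  "shard_param_on_dim_0": true,
  "pin_layout_in_collective_ops": true
}
```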
# Design notes and future work
1) For very large model sizes (greater than, say, 128B parameters), users may see host-side OOMs on TPUs during initialization. This can be mitigated by initializing layer weights immediately after construction, wrapping with FSDP, and moving onto the XLA device, as can be seen in [this branch](https://github.com/AlexWertheim/transformers/blob/einsum/src/transformers/models/gpt2/modeling_gpt2.py#L690-L723). We opted to enable FSDP wrapping at the trainer level, since it does not necessitate model-specific changes and does not disrupt the existing architecture for model construction and initialization.
2) Checkpointing support for XLA FSDP is not included as part of this PR. We hope to add it soon via another PR.
3) As indicated above, nested FSDP is only supported for models which expose their transformer blocks in a specific way. This is because naively wrapping every child layer introduces errors. There is [a PR](https://github.com/pytorch/xla/pull/4318) which will introduce auto-wrapping functionality into FSDP, and we expect that this feature will offer a much better way for all model classes to leverage nested wrapping. We also expect that auto-wrapping will enable users to perform nested wrapping multiple layers deep, which has been seen to introduce performance gains. This auto-wrap functionality needs more testing, but we hope to add this feature in a future PR.
4) We have not included testing for XLA FSDP as part of this PR. We would like to add this in a future PR.
Thanks to @ronghanghu for his assistance in the preparation of this PR. Among other contributions, the observations that one must copy the model's forward method and replace the optimizer step are his.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -->
## Who can review?
@sgugger @JackCaoG
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20774/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20774",
"html_url": "https://github.com/huggingface/transformers/pull/20774",
"diff_url": "https://github.com/huggingface/transformers/pull/20774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20774.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20773
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20773/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20773/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20773/events
|
https://github.com/huggingface/transformers/issues/20773
| 1,497,677,451
|
I_kwDOCUB6oc5ZRL6L
| 20,773
|
trainer.save_model load error
|
{
"login": "oosij",
"id": 94098546,
"node_id": "U_kgDOBZvUcg",
"avatar_url": "https://avatars.githubusercontent.com/u/94098546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oosij",
"html_url": "https://github.com/oosij",
"followers_url": "https://api.github.com/users/oosij/followers",
"following_url": "https://api.github.com/users/oosij/following{/other_user}",
"gists_url": "https://api.github.com/users/oosij/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oosij/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oosij/subscriptions",
"organizations_url": "https://api.github.com/users/oosij/orgs",
"repos_url": "https://api.github.com/users/oosij/repos",
"events_url": "https://api.github.com/users/oosij/events{/privacy}",
"received_events_url": "https://api.github.com/users/oosij/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code. Happy to help here when you have a short reproducer, but otherwise we keep issues for bugs and feature requests only.",
"Hi @oosij, this is a very annoying issue from the HF implementation of the id2label management. The model will load correctly if id2label is provided as a dict when you specify the model before fine-tuning. Providing id2label in a form of a list causes no trouble during training / inference until you try to restore the fine-tuned model.\r\n\r\nConsider something like this:\r\n```python\r\nid2label = {i: e for i, e in enumerate(tags)}\r\nlabel2id = {e: i for i, e in enumerate(tags)}\r\n\r\nlog.info('Loading pre-trained checkpoint of a model...')\r\ntokenizer = AutoTokenizer.from_pretrained(config.model.checkpoint, add_prefix_space=True)\r\nmodel = AutoModelForTokenClassification.from_pretrained(\r\n config.model.checkpoint,\r\n id2label=id2label,\r\n label2id=label2id,\r\n)\r\n\r\n# ... fine-tune the model\r\n\r\ntrainer.save_model(config.training.save_path)\r\ndebug = AutoModelForTokenClassification.from_pretrained(config.training.save_path) # This will work now\r\n```",
"@IINemo Would you like to make a PR adding a sanity check that immediately raises an error if the user tries to provide label2id/id2label that are lists instead of dictionaries?",
"@sgugger Ok, give me some time.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
NONE
| null |
reference code : https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb
----
----> 6 save_model ='./model/emotion'
----> 7 trainer.save_model(save_model)
----> 8 load_model = AutoModel.from_pretrained(save_model)
4 frames
[/usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in __init__(self, **kwargs)
315 f"{self.id2label}. The number of labels wil be overwritten to {self.num_labels}."
316 )
--> 317 print(id2label)
318 self.id2label = dict((int(key), value) for key, value in self.id2label.items())
319 # Keys are always strings in JSON so convert ids to int here.
AttributeError: 'list' object has no attribute 'items'
---
After fine-tuning the emotion classification model with 28 classes, I trained and saved it through `trainer()`, but the model cannot be loaded. Can someone help?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20773/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20772
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20772/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20772/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20772/events
|
https://github.com/huggingface/transformers/issues/20772
| 1,497,385,332
|
I_kwDOCUB6oc5ZQEl0
| 20,772
|
OPT model sizes mismatch between code and webpage
|
{
"login": "chenho74",
"id": 108291492,
"node_id": "U_kgDOBnRlpA",
"avatar_url": "https://avatars.githubusercontent.com/u/108291492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenho74",
"html_url": "https://github.com/chenho74",
"followers_url": "https://api.github.com/users/chenho74/followers",
"following_url": "https://api.github.com/users/chenho74/following{/other_user}",
"gists_url": "https://api.github.com/users/chenho74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenho74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenho74/subscriptions",
"organizations_url": "https://api.github.com/users/chenho74/orgs",
"repos_url": "https://api.github.com/users/chenho74/repos",
"events_url": "https://api.github.com/users/chenho74/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenho74/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! Basically the `1.7B` is now the `1.3b` and the `175B` is private and was not released by the META AI team. \r\nIf you want you can open a PR to just correct the model size 😉 ",
"Thanks for the clarification!"
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
N/A
### Who can help?
@ArthurZucker, @sgugger and @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to figure out the architecture of OPT models, specifically the pre-layernorm vs post-layernorm setting. I came across these two lines that specified the setting for OPT of different model sizes.
https://github.com/huggingface/transformers/blob/67acb07e9ef40e6ea08997261e1d14a02530cf8b/src/transformers/models/opt/modeling_opt.py#L322
https://github.com/huggingface/transformers/blob/67acb07e9ef40e6ea08997261e1d14a02530cf8b/src/transformers/models/opt/modeling_opt.py#L346
However, it doesn't match the model sizes available here (if you expand the model list and search `opt-`). I didn't see 1.7B and 175B.
https://huggingface.co/facebook?sort_models=alphabetical#models
Which one is more accurate?
### Expected behavior
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20772/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20771
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20771/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20771/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20771/events
|
https://github.com/huggingface/transformers/pull/20771
| 1,497,168,894
|
PR_kwDOCUB6oc5FeomC
| 20,771
|
Install vision for TF pipeline tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Install vision for TF pipeline tests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20771/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20771",
"html_url": "https://github.com/huggingface/transformers/pull/20771",
"diff_url": "https://github.com/huggingface/transformers/pull/20771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20771.patch",
"merged_at": 1671099398000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20770
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20770/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20770/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20770/events
|
https://github.com/huggingface/transformers/issues/20770
| 1,497,011,394
|
I_kwDOCUB6oc5ZOpTC
| 20,770
|
[Trainer] Optimize the use of datasets.IterableDataset in distributed setup
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It seems that `webdataset` also does this when you use `wds.split_by_node` and `wds.split_by_worker`, and is used to train clip or diffusion models at scale.",
"That would be awesome! I don't think I need more than an API to tell the iterable dataset that it should take care of the data on process i over n.\r\n\r\nMaybe one thing that might be needed is the total length (when available) in some kind of attribute, since the length of the itreable dataset would then be smaller (and the total length is not going to be just a round multiple of the number of processes).",
"Opened a PR here, introducing `datasets.distributed.split_dataset_by_node`: https://github.com/huggingface/datasets/pull/5369\r\n\r\nfeel free to play with it and share your feedbacks :)",
"That'll be for when I'm back from vacation ;-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Re-ping me in one month GitHub bot ;-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,676
| 1,676
|
MEMBER
| null |
Right now the Trainer uses `IterableDatasetShard` to skip examples on each node and avoid ending up with duplicate data.
This is not efficient for vision or audio tasks since we waste I/O and CPU time reading and decoding files that are not used.
We consider implementing an optimized sharding for distributed training directly in `datasets`. Right now a `datasets.IterableDataset` is already a `torch.utils.data.IterableDataset` that automatically takes care of distributing the necessary input shards to subprocesses in single node (since `datasets` 2.3.0).
The idea would be to also take into account the rank and world size to distribute the input shards. Maybe distributing the `datasets.IterableDataset` across nodes should be asked explicitly by the user.
cc @sgugger WDYT ? Do you have other ideas in mind to optimize the use of `datasets.IterableDataset` for distributed training in pytorch ?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20770/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20770/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20769
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20769/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20769/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20769/events
|
https://github.com/huggingface/transformers/pull/20769
| 1,496,966,164
|
PR_kwDOCUB6oc5Fd8Fc
| 20,769
|
Add Swin backbone
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds Swin backbone, to be used with frameworks like OneFormer and UperNet.
Note: #20648 also included this but I'll add Swin backbone as a separate PR to make the other one smaller.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20769/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20769",
"html_url": "https://github.com/huggingface/transformers/pull/20769",
"diff_url": "https://github.com/huggingface/transformers/pull/20769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20769.patch",
"merged_at": 1671042928000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20768
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20768/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20768/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20768/events
|
https://github.com/huggingface/transformers/pull/20768
| 1,496,890,846
|
PR_kwDOCUB6oc5FdrtF
| 20,768
|
Update tests: replace feature extractor tests with image processor
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,674
| 1,674
|
COLLABORATOR
| null |
# What does this PR do?
Replaces all feature extractor references with image processor references in the `test_image_processing_xxx.py` files
* `feature_extractor = XxxFeatureExtractor()` -> `image_processor = XxxImageProcessor`
Requires https://github.com/huggingface/transformers/pull/20785 in order to replace `FeatureExtractionSavingTestMixin`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20768/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20768/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20768",
"html_url": "https://github.com/huggingface/transformers/pull/20768",
"diff_url": "https://github.com/huggingface/transformers/pull/20768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20768.patch",
"merged_at": 1674494741000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20767
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20767/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20767/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20767/events
|
https://github.com/huggingface/transformers/issues/20767
| 1,496,779,440
|
I_kwDOCUB6oc5ZNwqw
| 20,767
|
Cache size limit for generation
|
{
"login": "Natooz",
"id": 56734983,
"node_id": "MDQ6VXNlcjU2NzM0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Natooz",
"html_url": "https://github.com/Natooz",
"followers_url": "https://api.github.com/users/Natooz/followers",
"following_url": "https://api.github.com/users/Natooz/following{/other_user}",
"gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Natooz/subscriptions",
"organizations_url": "https://api.github.com/users/Natooz/orgs",
"repos_url": "https://api.github.com/users/Natooz/repos",
"events_url": "https://api.github.com/users/Natooz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Natooz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante but this might be a bitt too niche for us to incorporate in `generate`.",
"Hey @Natooz 👋 \r\n\r\nThank you for raising this issue! It may be helpful in some situations. However, since it is the first time I'm seeing a request for this, the answer depends on your implementation :) I'm on board if it consists of a few line changes (say, <100 in total for all models). Otherwise, the maintenance costs are too high.\r\n\r\nRegardless of your answer and the decision on this issue, it's useful for us to have issues like this, to gauge demand and guide our future work 🤗 ",
"Hi @gante, thanks for the feedback !\r\n\r\nThis is because the feature might be a very niche (eg story or music gen) that I asked if you would prefer it implemented per model (was thinking of `prepare_inputs_for_generation`) or more in a more DRY way, probably a method called in each decoding method at each step.\r\n\r\nHere is how I implemented it for GPT2 in PyTorch (TF is pretty much the same) in [`prepare_inputs_for_generation`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L986), it stands in a few lines:\r\n\r\n```Python\r\n# past is retrieved from model_kwargs, and is of shape (L,2,N,NH,T,DH)\r\n# (layer, keys/values, batch, attn_head, seq, dim_head)\r\n# with two first dims as tuple, dim 2:-1 is a fixed length tensor\r\ncache_limit = kwargs.get(\"cache_limit\", None)\r\n\r\n# check if using cache and a limit, and that the current cache does exceed it\r\n# dim -2 of cache is always the same across all layers and kv, so checking past[0][0]\r\nif past and cache_limit and past[0][0].shape[-2] > cache_limit:\r\n    # reducing the time / seq dimension (-2)\r\n    past = [[kv[..., -cache_limit:, :] for kv in layer] for layer in past]\r\n\r\n    # we need to update the attention_mask as it is incremented in _update_model_kwargs_for_generation\r\n    if attention_mask is not None:\r\n        attention_mask = attention_mask[:, -cache_limit - 1:]\r\n    # no need to update position_ids here as kwargs[attention_mask].shape[1]\r\n    # is the nb of all past positions / tokens that have been processed so far\r\n```\r\n\r\nNow for implementing it in `GenerationMixin`, it would require that 1) all concerned models use the same cache shape `(L,2,N,NH,T,DH)` (is it the case?), and; 2) it does not mess with positional encoding.\r\nIn GPT2, positions are created on the fly from the shape of the attention_mask, itself [incremented](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L708) at each decoding step.\r\nFor this direction, and if we don't want to touch any method overridden by models, I think the best solution would be to store \r\nposition_id in `model_kwargs`, that would be updated at each step.\r\n\r\nI'll give a try to the `GenerationMixin` direction and come back ! 👨‍💻",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### Feature request
Add a `cache_limit` argument for `generate`, limiting the size of the cache (`past_key_values`).
### Motivation
In some contexts one might want to generate long sequences. When doing so, the system can easily run out of memory. Keeping the cache to a maximum size would allow users to have more control and tweak others parameters such as batch size or number of beams, to generate faster and take the most out of their hardware.
### Your contribution
I implemented it in GPT2 (PyTorch & TF, PR is ready), but I guess this could be implemented more broadly in `generate` so that every models could benefit it.
It might relate to #17574.
Waiting for your opinion on this, I can probably add it to `generate`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20767/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20766
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20766/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20766/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20766/events
|
https://github.com/huggingface/transformers/pull/20766
| 1,496,641,705
|
PR_kwDOCUB6oc5Fc0a2
| 20,766
|
Add Universal Segmentation class + mapping
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think the error message is pretty clear on what to do. The model would also need to be added to the doc page of maskformer if we go through with this.\r\n\r\nI'm not convinced we should however. While adding a new auto-model API could make sense (I'd wait to have more than one model though), renaming the model class a year after the model has been released is not something we should do (the same way we keep `GPTLMHeadModel` for instance). ",
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm unsure why you keep changing more of the `MaskFormerForInstanceSegmentation`. I think I may have been unclear in my previous comments. I am against changing that name in a model that has been around for 10 months now, for the same reason we did not change the name of `GPTLMHeadModel` or `BertLMHeadModel` even if those are not ideal.",
"@sgugger I'll keep the old names, but add them to the same (new) mapping."
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the `AutoModelForUniversalSegmentation` class and corresponding mapping. Models that can be added to this mapping include DETR, MaskFormer, Mask2Former and OneFormer.
To do:
- [x] update pipeline
@sgugger for some reason make fixup complains:
```
Traceback (most recent call last):
File "/Users/nielsrogge/Documents/python_projecten/transformers/utils/check_repo.py", line 827, in <module>
check_repo_quality()
File "/Users/nielsrogge/Documents/python_projecten/transformers/utils/check_repo.py", line 816, in check_repo_quality
check_models_are_in_init()
File "/Users/nielsrogge/Documents/python_projecten/transformers/utils/check_repo.py", line 369, in check_models_are_in_init
raise Exception(f"The following models should be in the main init: {','.join(models_not_in_init)}.")
Exception: The following models should be in the main init: MaskFormerForUniversalSegmentation.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20766/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20766",
"html_url": "https://github.com/huggingface/transformers/pull/20766",
"diff_url": "https://github.com/huggingface/transformers/pull/20766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20766.patch",
"merged_at": 1671196967000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20765
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20765/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20765/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20765/events
|
https://github.com/huggingface/transformers/pull/20765
| 1,496,192,561
|
PR_kwDOCUB6oc5FbP3U
| 20,765
|
Fix attribute error problem
|
{
"login": "casuallyName",
"id": 44667461,
"node_id": "MDQ6VXNlcjQ0NjY3NDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/44667461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casuallyName",
"html_url": "https://github.com/casuallyName",
"followers_url": "https://api.github.com/users/casuallyName/followers",
"following_url": "https://api.github.com/users/casuallyName/following{/other_user}",
"gists_url": "https://api.github.com/users/casuallyName/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casuallyName/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casuallyName/subscriptions",
"organizations_url": "https://api.github.com/users/casuallyName/orgs",
"repos_url": "https://api.github.com/users/casuallyName/repos",
"events_url": "https://api.github.com/users/casuallyName/events{/privacy}",
"received_events_url": "https://api.github.com/users/casuallyName/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
When the `use_legacy_prediction_loop` parameter is used, `Trainer.predict` will raise an error: `AttributeError: 'PredictionOutput' object has no attribute 'num_samples'`
# What does this PR do?
1. Use `EvalLoopOutput` instead of the original `PredictionOutput` of `self.prediction_loop`; this makes the return format of `self.prediction_loop` the same as that of `self.evaluation_loop`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20765/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20765",
"html_url": "https://github.com/huggingface/transformers/pull/20765",
"diff_url": "https://github.com/huggingface/transformers/pull/20765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20765.patch",
"merged_at": 1671027967000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20764
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20764/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20764/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20764/events
|
https://github.com/huggingface/transformers/pull/20764
| 1,496,165,195
|
PR_kwDOCUB6oc5FbJvl
| 20,764
|
Install torch-tensorrt 1.3.0
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
It turns out that the issue mentioned in #20758 could be fixed by installing a newer version of `torch-tensorrt` (1.3.0) — the pre-installed one in the base image was `1.1.0a0`.
Note that before our CI used torch 1.13.0, we didn't have `torch-tensorrt` installed in the docker image (for the daily CI with stable releases of torch/deepspeed). But I guess we can have it anyway.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20764/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20764",
"html_url": "https://github.com/huggingface/transformers/pull/20764",
"diff_url": "https://github.com/huggingface/transformers/pull/20764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20764.patch",
"merged_at": 1671035436000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20763
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20763/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20763/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20763/events
|
https://github.com/huggingface/transformers/pull/20763
| 1,496,141,385
|
PR_kwDOCUB6oc5FbEbU
| 20,763
|
Fix bug in ChineseCLIPTextPooler
|
{
"login": "xiaohu2015",
"id": 16861194,
"node_id": "MDQ6VXNlcjE2ODYxMTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/16861194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaohu2015",
"html_url": "https://github.com/xiaohu2015",
"followers_url": "https://api.github.com/users/xiaohu2015/followers",
"following_url": "https://api.github.com/users/xiaohu2015/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaohu2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaohu2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaohu2015/subscriptions",
"organizations_url": "https://api.github.com/users/xiaohu2015/orgs",
"repos_url": "https://api.github.com/users/xiaohu2015/repos",
"events_url": "https://api.github.com/users/xiaohu2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaohu2015/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, @xiaohu2015! I think the pooling layer inside `ChineseCLIPTextModel` has nothing to do with the projection layer `text_projection` in ` ChineseCLIPModel`. And the model definition should be the one used by the model author which I believe the current one is the correct version.\r\n\r\n ",
"> Hi, @xiaohu2015! I think the pooling layer inside `ChineseCLIPTextModel` has nothing to do with the projection layer `text_projection` in ` ChineseCLIPModel`. And the model definition should be the one used by the model author which I believe the current one is the correct version.\r\n\r\nBut CLIPTextModel is consistent with CLIPModel",
"> But CLIPTextModel is consistent with CLIPModel\r\n\r\nThis doesn't mean the relation of ChineseCLIPTextModel/ChineseCLIPModel should be the same as CLIPTextModel/CLIPModel. It is the author of ChineseCLIPModel who decided this way to construct the model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am going to close this thread. But don't hesitate to comment if you have any further question, @xiaohu2015 "
] | 1,671
| 1,673
| 1,673
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I think `ChineseCLIPTextPooler` should be consistent with https://github.com/huggingface/transformers/blob/main/src/transformers/models/chinese_clip/modeling_chinese_clip.py#L1374:
- use `bias=False`
- don't use an activation function
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20763/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20763",
"html_url": "https://github.com/huggingface/transformers/pull/20763",
"diff_url": "https://github.com/huggingface/transformers/pull/20763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20763.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20762
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20762/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20762/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20762/events
|
https://github.com/huggingface/transformers/pull/20762
| 1,496,102,146
|
PR_kwDOCUB6oc5Fa7oI
| 20,762
|
Even more validation.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Addresses this comment:
https://github.com/huggingface/transformers/pull/20729#issuecomment-1350351272
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20762/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20762",
"html_url": "https://github.com/huggingface/transformers/pull/20762",
"diff_url": "https://github.com/huggingface/transformers/pull/20762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20762.patch",
"merged_at": 1671095155000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20761
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20761/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20761/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20761/events
|
https://github.com/huggingface/transformers/issues/20761
| 1,496,039,683
|
I_kwDOCUB6oc5ZK8ED
| 20,761
|
The reasons for offseting the position embedding ids by 2 for OPT Model.
|
{
"login": "Zcchill",
"id": 83019888,
"node_id": "MDQ6VXNlcjgzMDE5ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/83019888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zcchill",
"html_url": "https://github.com/Zcchill",
"followers_url": "https://api.github.com/users/Zcchill/followers",
"following_url": "https://api.github.com/users/Zcchill/following{/other_user}",
"gists_url": "https://api.github.com/users/Zcchill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zcchill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zcchill/subscriptions",
"organizations_url": "https://api.github.com/users/Zcchill/orgs",
"repos_url": "https://api.github.com/users/Zcchill/repos",
"events_url": "https://api.github.com/users/Zcchill/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zcchill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey! That's an interesting question. \r\nThe main reason behind this is that the original code uses `nn.Embedding(..., ..., padding_idx)` where the argument is passed to the `torch module`. The `padding_idx` can be found in the tokenizer and is `1`. \r\nNow in transformers we realised that using the `nn.Embedding`'s native `padding_idx` is actually not very good for training. There is a big thread where you can learn more about why here : #10200. \r\n\r\nSince we are not doing this, we need to manually update the indexes, so we have the padding index plus the `-1` that is already there. That sums up to a shift of `2`\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,673
| 1,673
|
NONE
| null |
### System Info
transformers 4.20.1
### Who can help?
@sgugger
@stevhliu
@gante
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The source code in modeling_opt.py (HuggingFace)
```python
class OPTLearnedPositionalEmbedding(nn.Embedding):
    """
    This module learns positional embeddings up to a fixed maximum size.
    """

    def __init__(self, num_embeddings: int, embedding_dim: int):
        # OPT is set up so that if padding_idx is specified then offset the embedding ids by 2
        # and adjust num_embeddings appropriately. Other models don't have this hack
        self.offset = 2
        super().__init__(num_embeddings + self.offset, embedding_dim)

    def forward(self, attention_mask: torch.LongTensor, past_key_values_length: int = 0):
        """`input_ids_shape` is expected to be [bsz x seqlen]."""
        attention_mask = attention_mask.long()

        # create positions depending on attention_mask
        positions = (torch.cumsum(attention_mask, dim=1).type_as(attention_mask) * attention_mask).long() - 1

        # cut positions if `past_key_values_length` is > 0
        positions = positions[:, past_key_values_length:]

        return super().forward(positions + self.offset)
```
The source code in metaseq
```python
class LearnedPositionalEmbedding(nn.Embedding):
    """
    This module learns positional embeddings up to a fixed maximum size.
    Padding ids are ignored by either offsetting based on padding_idx
    or by setting padding_idx to None and ensuring that the appropriate
    position ids are passed to the forward function.
    """

    def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int):
        super().__init__(num_embeddings, embedding_dim, padding_idx)
        if self.padding_idx is not None:
            self.max_positions = self.num_embeddings - self.padding_idx - 1
        else:
            self.max_positions = self.num_embeddings

    def forward(
        self,
        input: Tensor,
        incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
        positions: Optional[Tensor] = None,
    ):
        """Input is expected to be of size [bsz x seqlen]."""
        assert (positions is None) or (
            self.padding_idx is None
        ), "If positions is pre-computed then padding_idx should not be set."
        # we cannot use incremental state here because we must be aware of
        # padding.
        if positions is None and self.padding_idx is not None:
            positions = utils.make_positions(input, self.padding_idx)
        return F.embedding(
            positions,
            self.weight,
            self.padding_idx,
            self.max_norm,
            self.norm_type,
            self.scale_grad_by_freq,
            self.sparse,
        )
```
### Expected behavior
Why do we need to offset the positional embedding ids here, and why is the offset 2? I haven't found the same setting in Meta's source code.
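To make the offset arithmetic concrete, here is a standalone sketch in plain Python (no torch; `make_positions` is a made-up helper name for illustration) of how the position ids in the HuggingFace code above are derived from an attention mask:

```python
# Sketch of OPT's position-id computation: cumsum(mask) * mask - 1 gives
# -1 for padded slots and 0, 1, 2, ... for real tokens; adding the offset
# of 2 then maps padded slots onto index 1 (the tokenizer's padding_idx)
# and real tokens onto rows 2 and up of the enlarged embedding table.
OFFSET = 2  # padding_idx (1) + the usual +1 shift

def make_positions(attention_mask):
    """attention_mask: a list of 0/1 flags per token (left padding below)."""
    positions, running = [], 0
    for m in attention_mask:
        running += m
        positions.append(running * m - 1)
    return [p + OFFSET for p in positions]

# two left-padding tokens followed by three real tokens
print(make_positions([0, 0, 1, 1, 1]))  # [1, 1, 2, 3, 4]
```

Note how the padded slots land exactly on index 1, the padding row, which is why the embedding table needs `num_embeddings + 2` rows.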
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20761/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20760
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20760/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20760/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20760/events
|
https://github.com/huggingface/transformers/pull/20760
| 1,495,505,334
|
PR_kwDOCUB6oc5FY0sw
| 20,760
|
Patch for FlanT5-XXL 8bit support
|
{
"login": "larsmennen",
"id": 1162951,
"node_id": "MDQ6VXNlcjExNjI5NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1162951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larsmennen",
"html_url": "https://github.com/larsmennen",
"followers_url": "https://api.github.com/users/larsmennen/followers",
"following_url": "https://api.github.com/users/larsmennen/following{/other_user}",
"gists_url": "https://api.github.com/users/larsmennen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larsmennen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larsmennen/subscriptions",
"organizations_url": "https://api.github.com/users/larsmennen/orgs",
"repos_url": "https://api.github.com/users/larsmennen/repos",
"events_url": "https://api.github.com/users/larsmennen/events{/privacy}",
"received_events_url": "https://api.github.com/users/larsmennen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks so much for the fix @larsmennen ! I would personally advocate to focus only on `T5`, and we can add these patches later on if we figure out that the same issue occur for all subsidiary models! Can you revert the changes for longt5/perceiver & switch (ideally also keep the copy mechanism, so maybe add the `# Copied from` statements but use another model as t5 as reference (for e.g. for perceiver `# Copied from transformers.src.models.longt5. ...`) Also don't forget to run the styling changes ;) (`make fixup`) Thanks again!\r\n\r\nThat makes sense! done"
] | 1,670
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20287 .
In #20287 , 3 patches were proposed here: https://github.com/huggingface/transformers/issues/20287#issuecomment-1342219429
* Patch 3 is already covered by https://github.com/huggingface/transformers/pull/20683
* I found that patch 2 is actually unnecessary, because there's already a cast to float16 here: https://github.com/younesbelkada/transformers/blob/68a894a5875bfd958b8254afd3bbb23db9c2e813/src/transformers/models/t5/modeling_t5.py#L258-L260, which also applies in this case, as we keep `self.wo` in `float32`.
* This PR contains the patch 1, adjusted so it only applies a cast if the `hidden_states` actually has a different `dtype` from the `wo` weights.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [n/a] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20760/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20760/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20760",
"html_url": "https://github.com/huggingface/transformers/pull/20760",
"diff_url": "https://github.com/huggingface/transformers/pull/20760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20760.patch",
"merged_at": 1671125218000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20759
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20759/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20759/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20759/events
|
https://github.com/huggingface/transformers/issues/20759
| 1,495,437,549
|
I_kwDOCUB6oc5ZIpDt
| 20,759
|
Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' )
|
{
"login": "shaoyuta",
"id": 52023469,
"node_id": "MDQ6VXNlcjUyMDIzNDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/52023469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaoyuta",
"html_url": "https://github.com/shaoyuta",
"followers_url": "https://api.github.com/users/shaoyuta/followers",
"following_url": "https://api.github.com/users/shaoyuta/following{/other_user}",
"gists_url": "https://api.github.com/users/shaoyuta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaoyuta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaoyuta/subscriptions",
"organizations_url": "https://api.github.com/users/shaoyuta/orgs",
"repos_url": "https://api.github.com/users/shaoyuta/repos",
"events_url": "https://api.github.com/users/shaoyuta/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaoyuta/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is an issue with the dataset, so you should probably open this in the Datasets repo :-)",
"> This is an issue with the dataset, so you should probably open this in the Datasets repo :-)\r\n\r\nThanks for your help "
] | 1,670
| 1,671
| 1,671
|
NONE
| null |
### System Info
- huggingface_hub version: 0.11.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/taosy/.huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce this issue:
1. git clone https://github.com/huggingface/transformers
2. cd transformers
3. python examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --dataset_name the_pile --dataset_config_name enron_emails --do_eval --output_dir /tmp/output --overwrite_output_dir
### Expected behavior
1. This issue looks like it is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" being unreachable.
2. Is there another way to download the dataset "the_pile"?
3. Is there a way to cache the dataset "the_pile" locally so that HF doesn't download it at runtime?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20759/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20758
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20758/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20758/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20758/events
|
https://github.com/huggingface/transformers/pull/20758
| 1,494,956,789
|
PR_kwDOCUB6oc5FW9Y4
| 20,758
|
Uninstall `torch_tensorrt` in `DeepSpeed` CI image for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Since we updated the CI to use PyTorch 1.13, it uses the base image `nvcr.io/nvidia/pytorch:22.04-py3`, which contains `torch_tensorrt`.
This causes the CI to fail at test collection - i.e. the whole test suite fails from the beginning.
This PR uninstalls `torch_tensorrt` for now (previously, it was not installed for the DeepSpeed CI - the stable release version), so that at least the CI can run (and we can see whether any tests fail with PyTorch 1.13).
We will have to work on the `torch_tensorrt` issue though.
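For reference, the change amounts to a single uninstall step in the CI Docker image, along these lines (a sketch only; the exact Dockerfile location and the package name spelling are assumptions, not the actual diff):

```dockerfile
# Remove the torch_tensorrt preinstalled in nvcr.io/nvidia/pytorch:22.04-py3
# so that test collection no longer tries to import it.
RUN python3 -m pip uninstall -y torch-tensorrt
```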
#### Current error message
```bash
==================================== ERRORS ====================================
______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________
ImportError while importing test module '/workspace/transformers/tests/deepspeed/test_deepspeed.py'.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20758/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20758",
"html_url": "https://github.com/huggingface/transformers/pull/20758",
"diff_url": "https://github.com/huggingface/transformers/pull/20758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20758.patch",
"merged_at": 1670966748000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20757
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20757/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20757/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20757/events
|
https://github.com/huggingface/transformers/pull/20757
| 1,494,792,911
|
PR_kwDOCUB6oc5FWYMP
| 20,757
|
Rework automatic code samples in docstrings
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
This PR reworks the automatic code sample docstrings in two ways:
First, use the auto-classes for the preprocessing. As was decided internally, we want to document the model class used, but use the auto classes for preprocessing so users are not confused when a given model uses the tokenizer/feature extractor/image processor/processor of another.
Second, we don't want to showcase `hf-internal-testing` models in the docstrings. Those are tiny random models and it confuses users more than it helps. However when using the standard checkpoint we get doctest problems, so this PR removes the output/loss from the code example when it shouldn't be tested.
Two examples are shown with BERT and DeBERTaV2, I can add more models to the PR if it suits everyone.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20757/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20757",
"html_url": "https://github.com/huggingface/transformers/pull/20757",
"diff_url": "https://github.com/huggingface/transformers/pull/20757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20757.patch",
"merged_at": 1673686177000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20756
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20756/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20756/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20756/events
|
https://github.com/huggingface/transformers/issues/20756
| 1,494,775,903
|
I_kwDOCUB6oc5ZGHhf
| 20,756
|
min_new_tokens option in generate() implementation
|
{
"login": "gonced8",
"id": 5244435,
"node_id": "MDQ6VXNlcjUyNDQ0MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5244435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gonced8",
"html_url": "https://github.com/gonced8",
"followers_url": "https://api.github.com/users/gonced8/followers",
"following_url": "https://api.github.com/users/gonced8/following{/other_user}",
"gists_url": "https://api.github.com/users/gonced8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gonced8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gonced8/subscriptions",
"organizations_url": "https://api.github.com/users/gonced8/orgs",
"repos_url": "https://api.github.com/users/gonced8/repos",
"events_url": "https://api.github.com/users/gonced8/events{/privacy}",
"received_events_url": "https://api.github.com/users/gonced8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hi @gonced8 👋 Thank you for raising this issue!\r\n\r\nThis is the same as [this issue](https://github.com/huggingface/transformers/issues/20614) (which is slightly older). I'm closing this issue to avoid duplication of comments/efforts, and [this particular comment](https://github.com/huggingface/transformers/issues/20614#issuecomment-1361225567) might be of your interest :)"
] | 1,670
| 1,671
| 1,671
|
NONE
| null |
### Feature request
Similarly to `max_new_tokens`, a `min_new_tokens` option would count only the newly generated tokens, ignoring the tokens of the input sequence (prompt) in decoder-only models.
### Motivation
The `min_length` option of the `generate()` method can be ambiguous for decoder-only models: it is not clear whether the length of the input (prompt) counts toward the `min_length` condition, or only the newly generated tokens do.
For encoder-decoder (seq2seq) models it is unambiguous, though.
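The intended semantics can be sketched in a few lines (the helper name is hypothetical and not part of the `transformers` API; this only illustrates the requested behaviour):

```python
def resolve_min_length(input_length, min_new_tokens=None, min_length=None):
    """Illustrative helper, not part of transformers.

    For decoder-only models, `min_new_tokens` would count only freshly
    generated tokens, so the effective minimum *total* sequence length
    is the prompt length plus `min_new_tokens`.
    """
    if min_new_tokens is not None:
        return input_length + min_new_tokens
    # The existing `min_length` counts the prompt tokens as well.
    return min_length if min_length is not None else 0
```

For example, a 10-token prompt with `min_new_tokens=5` would require a total length of 15, whereas `min_length=5` would already be satisfied by the prompt alone.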
### Your contribution
Not that I remember. But I could test it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20756/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20755
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20755/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20755/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20755/events
|
https://github.com/huggingface/transformers/pull/20755
| 1,494,751,541
|
PR_kwDOCUB6oc5FWOp6
| 20,755
|
[CI-Test] Fixes but also skips the mT5 tests
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"My opinion is that we should have test file(s) for each model type we have. MT5 has its own modeling file and model type, so we should keep it.\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"mT5 does not really have its own modeling file since it's just T5. I'm happy if the tests for that model are contained to a slow integration test to not bloat the CI, like what is done for similar models (Camembert for instance).",
"I am OK with the decision, but just want to point out that we will lose these models for tiny model creation, which is required for ONNX testing and pipeline testing (in the future). I just want every party involved agree this.\r\n\r\ncc @LysandreJik, @Narsil and @lewtun ",
"Ok with the decision as well!"
] | 1,670
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Silences the `MT5` tests. They are largely redundant, as the underlying model is T5, which is already well tested.
I would like to delete them entirely, but prefer to ask first whether there is a reason we keep these.
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20755/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20755",
"html_url": "https://github.com/huggingface/transformers/pull/20755",
"diff_url": "https://github.com/huggingface/transformers/pull/20755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20755.patch",
"merged_at": 1671028565000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20754
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20754/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20754/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20754/events
|
https://github.com/huggingface/transformers/pull/20754
| 1,494,686,668
|
PR_kwDOCUB6oc5FV_7g
| 20,754
|
Fixing the pipeline tutorial test.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,675
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This is just #20746 with one more fix. The CI would not run after I pushed a commit to that PR. Sorry @Narsil !
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20754/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20754",
"html_url": "https://github.com/huggingface/transformers/pull/20754",
"diff_url": "https://github.com/huggingface/transformers/pull/20754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20754.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20753
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20753/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20753/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20753/events
|
https://github.com/huggingface/transformers/issues/20753
| 1,494,614,607
|
I_kwDOCUB6oc5ZFgJP
| 20,753
|
Summarization Pipeline not outputting both text and token_ids
|
{
"login": "mpeadtt",
"id": 87301397,
"node_id": "MDQ6VXNlcjg3MzAxMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/87301397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpeadtt",
"html_url": "https://github.com/mpeadtt",
"followers_url": "https://api.github.com/users/mpeadtt/followers",
"following_url": "https://api.github.com/users/mpeadtt/following{/other_user}",
"gists_url": "https://api.github.com/users/mpeadtt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpeadtt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpeadtt/subscriptions",
"organizations_url": "https://api.github.com/users/mpeadtt/orgs",
"repos_url": "https://api.github.com/users/mpeadtt/repos",
"events_url": "https://api.github.com/users/mpeadtt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpeadtt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Realised that this has been addressed in another issue, apologies."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
output = summariser("this is a test input", return_text=True, return_tensors=True)
print(output)
```
Returns (ignoring length warning message):
```
[{'summary_token_ids': tensor([ 2, 0, 42, 16, 10, 1296, 8135, 8135, 31, 5, 2730, 9,
42, 1566, 479, 152, 16, 45, 5, 78, 86, 52, 348, 450,
42, 1905, 11, 10, 1296, 422, 30, 5, 2730, 4, 42, 16,
5, 78, 9, 63, 761, 4, 152, 16, 5, 200, 9, 10,
651, 9, 1296, 1237, 30, 5, 7601, 9, 5, 1040, 479, 2])}]
```
### Expected behavior
According to the `SummarizationPipeline` [documentation](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.SummarizationPipeline.__call__) I would expect a list of dictionaries where each dictionary has both `summary_text` and `summary_token_ids` elements. Instead, only one is returned, even though both `return_text=True` and `return_tensors=True` in the call to the summariser.
I have done a little digging and think it's coming from the way these arguments are handled. In the `_sanitize_parameters` method [here](https://github.com/huggingface/transformers/blob/30d8919ab13dcc212cb00fbb4d3e969aee6c3fc5/src/transformers/pipelines/text2text_generation.py#L73), `postprocess_params["return_type"]` gets set to either `ReturnType.TENSORS` or `ReturnType.TEXT` (or a different `return_type` specified as an input arg).
The `postprocess` method of the `Text2TextGenerationPipeline` [here](https://github.com/huggingface/transformers/blob/30d8919ab13dcc212cb00fbb4d3e969aee6c3fc5/src/transformers/pipelines/text2text_generation.py#L195) then takes the `return_type` arg, but can only output either `summary_text` or `summary_token_ids`, not both.
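A minimal sketch of the documented behaviour (the function name is hypothetical; it only illustrates building the record field by field instead of collapsing the two flags into a single `return_type`):

```python
def postprocess_both(summary_text, summary_token_ids,
                     return_text=True, return_tensors=True):
    # Add each requested field independently, so that both keys can be
    # present in the same output record when both flags are True.
    record = {}
    if return_tensors:
        record["summary_token_ids"] = summary_token_ids
    if return_text:
        record["summary_text"] = summary_text
    return [record]
```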
I could have a go at raising a PR for this if you'd like.
Many thanks.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20753/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20752
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20752/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20752/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20752/events
|
https://github.com/huggingface/transformers/issues/20752
| 1,494,479,803
|
I_kwDOCUB6oc5ZE_O7
| 20,752
|
RuntimeError: false INTERNAL ASSERT FAILED at "../c10/cuda/CUDAGraphsC10Utils.h":73, please report a bug to PyTorch. Unknown CUDA graph CaptureStatus32742
|
{
"login": "nafiul-araf",
"id": 78312166,
"node_id": "MDQ6VXNlcjc4MzEyMTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/78312166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nafiul-araf",
"html_url": "https://github.com/nafiul-araf",
"followers_url": "https://api.github.com/users/nafiul-araf/followers",
"following_url": "https://api.github.com/users/nafiul-araf/following{/other_user}",
"gists_url": "https://api.github.com/users/nafiul-araf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nafiul-araf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nafiul-araf/subscriptions",
"organizations_url": "https://api.github.com/users/nafiul-araf/orgs",
"repos_url": "https://api.github.com/users/nafiul-araf/repos",
"events_url": "https://api.github.com/users/nafiul-araf/events{/privacy}",
"received_events_url": "https://api.github.com/users/nafiul-araf/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"We won't really be able to help you since your reproducer does not contain any way to get the dataset you're using.\r\nWe have a whole chapter around debugging the training pipeline in [our online course](https://huggingface.co/course/chapter8/4?fw=pt).",
"I had the same error in a similar setting. I installed CUDA 12.0 and now I get this error:\r\n\r\nRuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`\r\n\r\nWhat can I do?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,676
| 1,676
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is my code sample
```python
import torch
from transformers import (
    DistilBertTokenizer,
    DistilBertForSequenceClassification,
    Trainer,
    TrainingArguments,
    EarlyStoppingCallback,
)

# `train`, `test` and `compute_metrics` are assumed to be defined earlier.
X_train=list(train['clean_review'])
X_test=list(test['clean_review'])
y_train=list(train['rating'])
y_test=list(test['rating'])
model_name="distilbert-base-uncased"
tokenizer=DistilBertTokenizer.from_pretrained(model_name)
model=DistilBertForSequenceClassification.from_pretrained(model_name, num_labels=2)
X_train_tokenized=tokenizer(X_train, padding=True, truncation=True, max_length=512)
X_test_tokenized=tokenizer(X_test, padding=True, truncation=True, max_length=512)
class Dataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels=None):
self.encodings=encodings
self.labels=labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
if self.labels:
item["labels"]=torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.encodings["input_ids"])
train_dataset=Dataset(X_train_tokenized, y_train)
test_dataset=Dataset(X_test_tokenized, y_test)
args=TrainingArguments(
output_dir="output",
evaluation_strategy="steps",
eval_steps=500,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
seed=0,
load_best_model_at_end=True,)
trainer=Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
compute_metrics=compute_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],)
trainer.train()
```
### Expected behavior
I'm trying to create a classifier to classify drug reviews. There are two outcome labels: Positive or Negative. For this, I'm using the ```distilbert-base-uncased``` transformer. However, when ```trainer.train()``` runs, it produces the following error:
```RuntimeError: false INTERNAL ASSERT FAILED at "../c10/cuda/CUDAGraphsC10Utils.h":73, please report a bug to PyTorch. Unknown CUDA graph CaptureStatus32742```.
I've searched into this a lot, but I haven't been able to find a solution. Please help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20752/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20752/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20751
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20751/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20751/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20751/events
|
https://github.com/huggingface/transformers/issues/20751
| 1,494,357,214
|
I_kwDOCUB6oc5ZEhTe
| 20,751
|
TrOCR base-large-stage1 Processor issue
|
{
"login": "Mohammed20201991",
"id": 59222637,
"node_id": "MDQ6VXNlcjU5MjIyNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohammed20201991",
"html_url": "https://github.com/Mohammed20201991",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions",
"organizations_url": "https://api.github.com/users/Mohammed20201991/orgs",
"repos_url": "https://api.github.com/users/Mohammed20201991/repos",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohammed20201991/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"Hi,\r\n\r\nThanks for your interest in TrOCR! This is possibly linked to #15283",
"Thanks @NielsRogge & @sgugger \r\nIn this [#15283](https://github.com/huggingface/transformers/issues/15283) Cannot solve my issue may I miss something else?\r\nBut, what about using a different processor like a small stage1 or large handwritten while they are both initialized from Beit & RoBERTa models by updating the above line with \r\nprocessor_base_large_stage1 = `TrOCRProcessor.from_pretrained('microsoft/trocr-base-stage1')`\r\nno errors here, But does this will affect results when finetuning for a different language?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The solution which is working for me I save the processor on a [collab ](https://drive.google.com/file/d/1_yzu2iTWW5AWpkBFv6M590rTPEY6vmbU/view?usp=sharing) by following `processor.save_pretrained(\"./processor\") ` and then I am using it in my own environment (remote cluster server) \r\n`processor = TrOCRProcessor.from_pretrained(\"/processor/\") #microsoft/trocr-large-stage1`\r\nTransformers version: 4.27.0 installed with new venv ",
"I see the issue occurs because that model repo doesn't have fast tokenizer files. One can load the slow (Python-based) tokenizer as follows:\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/trocr-large-stage1\", use_fast=False)\r\n```",
"Update, solved this issue by following the guide here: https://discuss.huggingface.co/t/convert-slow-xlmrobertatokenizer-to-fast-one/20876.\r\n\r\nThis works now!\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/trocr-large-stage1\")\r\n```"
] | 1,670
| 1,680
| 1,680
|
NONE
| null |
Hi everyone,
I am trying to run [trocr-large-stage1](https://huggingface.co/microsoft/trocr-large-stage1)
code can be found [here with Google Collab ](https://drive.google.com/file/d/1_yzu2iTWW5AWpkBFv6M590rTPEY6vmbU/view?usp=sharing)
Can anyone tell me possible ways to resolve this issue, which occurs while loading the processor for feature extraction?
`processor_base_large_stage1 = TrOCRProcessor.from_pretrained('microsoft/trocr-large-stage1')`
the issue:
```python
Exception                                 Traceback (most recent call last)
<ipython-input-7-30e2e2fd6c43> in <module>
----> 1 processor_base_large_stage1 = TrOCRProcessor.from_pretrained('microsoft/trocr-large-stage1')

6 frames
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
    109         elif fast_tokenizer_file is not None and not from_slow:
    110             # We have a serialization from tokenizers which let us directly build the backend
--> 111             fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
    112         elif slow_tokenizer is not None:
    113             # We need to convert a slow tokenizer to build the backend

Exception: No such file or directory (os error 2)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20751/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20750
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20750/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20750/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20750/events
|
https://github.com/huggingface/transformers/issues/20750
| 1,494,229,224
|
I_kwDOCUB6oc5ZECDo
| 20,750
|
Module 'keras.engine.data_adapter' has no attribute 'expand_1d' with non dummy loss
|
{
"login": "ZJaume",
"id": 11339330,
"node_id": "MDQ6VXNlcjExMzM5MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11339330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZJaume",
"html_url": "https://github.com/ZJaume",
"followers_url": "https://api.github.com/users/ZJaume/followers",
"following_url": "https://api.github.com/users/ZJaume/following{/other_user}",
"gists_url": "https://api.github.com/users/ZJaume/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZJaume/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZJaume/subscriptions",
"organizations_url": "https://api.github.com/users/ZJaume/orgs",
"repos_url": "https://api.github.com/users/ZJaume/repos",
"events_url": "https://api.github.com/users/ZJaume/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZJaume/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 and @gante ",
"Reproduced this issue locally, seems to be an issue with TF 2.11 and doesn't occur in previous versions. Checking it out now!",
"@ZJaume @mcclunatic the fix has been merged - please try installing transformers from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git` and see if the issue is resolved. If you encounter any further problems, please reopen this issue and let me know!",
"@Rocketknight1 I've just tested it in my notebook and the issue is indeed resolved! Thanks so much for fixing this so quickly!",
"came across this issue experiencing the same thing. upgraded from the primary branch worked for me as well 🚀 "
] | 1,670
| 1,672
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example code with a non-dummy loss:
```python
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
from tensorflow.keras.optimizers import Adam
from datasets import load_dataset
import tensorflow as tf
import numpy as np
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # Just take the training split for now
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = dict(tokenizer(dataset["sentence"], return_tensors="np", padding=True))
labels = np.array(dataset["label"]) # Label is already an array of 0 and 1
# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5), loss='binary_crossentropy')
model.fit(tokenized_data, labels)
```
```python
Traceback (most recent call last):
File "test_mirrored.py", line 22, in <module>
model.fit(tokenized_data, labels)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_file1a59fb96.py", line 15, in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1476, in train_step
data = data_adapter.expand_1d(data)
AttributeError: in user code:
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function *
return step_function(self, iterator)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step **
outputs = model.train_step(data)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1476, in train_step
data = data_adapter.expand_1d(data)
AttributeError: module 'keras.engine.data_adapter' has no attribute 'expand_1d'
```
### Expected behavior
Training succeeds.
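For context, the failure comes from `transformers`' `train_step` calling `keras.engine.data_adapter.expand_1d`, a private helper that Keras removed in 2.11. As a rough illustration only (not the Keras implementation), the helper added a trailing axis to rank-1 label arrays so losses like `binary_crossentropy` see `(batch, 1)` targets; a minimal numpy sketch:

```python
import numpy as np

def expand_1d(data):
    """Rough sketch of the helper Keras removed in 2.11: it added a
    trailing axis to rank-1 arrays so losses see (batch, 1) targets."""
    def _expand(t):
        if isinstance(t, np.ndarray) and t.ndim == 1:
            return np.expand_dims(t, axis=-1)
        return t
    if isinstance(data, tuple):
        return tuple(_expand(t) for t in data)
    return _expand(data)

features = np.zeros((4, 8))
labels = np.array([0, 1, 1, 0])
x, y = expand_1d((features, labels))
print(x.shape, y.shape)  # (4, 8) (4, 1)
```

Until running a `transformers` release that handles the removal, pinning `tensorflow<2.11` was a common workaround.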
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20750/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20750/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20749
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20749/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20749/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20749/events
|
https://github.com/huggingface/transformers/pull/20749
| 1,494,163,412
|
PR_kwDOCUB6oc5FUNr8
| 20,749
|
fix missing () in is_flaky
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
#20739 uses `is_flaky` without `()` at the end, which makes the decorated test not run at all.
This PR adds the missing `()`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20749/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20749",
"html_url": "https://github.com/huggingface/transformers/pull/20749",
"diff_url": "https://github.com/huggingface/transformers/pull/20749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20749.patch",
"merged_at": 1671014250000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20748
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20748/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20748/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20748/events
|
https://github.com/huggingface/transformers/issues/20748
| 1,493,924,232
|
I_kwDOCUB6oc5ZC3mI
| 20,748
|
The wrong expression in function "squad_convert_example_to_features"
|
{
"login": "chen-huanxin",
"id": 117709094,
"node_id": "U_kgDOBwQZJg",
"avatar_url": "https://avatars.githubusercontent.com/u/117709094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chen-huanxin",
"html_url": "https://github.com/chen-huanxin",
"followers_url": "https://api.github.com/users/chen-huanxin/followers",
"following_url": "https://api.github.com/users/chen-huanxin/following{/other_user}",
"gists_url": "https://api.github.com/users/chen-huanxin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chen-huanxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chen-huanxin/subscriptions",
"organizations_url": "https://api.github.com/users/chen-huanxin/orgs",
"repos_url": "https://api.github.com/users/chen-huanxin/repos",
"events_url": "https://api.github.com/users/chen-huanxin/events{/privacy}",
"received_events_url": "https://api.github.com/users/chen-huanxin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is all legacy code that we are not maintaining anymore though.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
https://github.com/huggingface/transformers/blob/d4bf9ee1ff0e85cb24feec4dd160af39b623d4b9/src/transformers/data/processors/squad.py#L253
The type of `span["input_ids"]` is `List`, and `tokenizer.pad_token_id` equals 0, so `span["input_ids"] == tokenizer.pad_token_id` returns the plain bool `False`. This makes `pad_token_indices = np.where(False)` an empty array, which is probably not the intended outcome, because the later expression
https://github.com/huggingface/transformers/blob/d4bf9ee1ff0e85cb24feec4dd160af39b623d4b9/src/transformers/data/processors/squad.py#L258
uses `pad_token_indices` to build `p_mask`, an `np.ndarray` in which the padding positions should be set to 1.
I think the right expression may be as follows:
```Python
pad_token_indices = np.where(np.array(span["input_ids"]) == tokenizer.pad_token_id)
```
`np.array(span["input_ids"])` creates an `np.ndarray`, which can be compared element-wise with `tokenizer.pad_token_id` and yields an `np.ndarray` of matches. That makes the expression `p_mask[pad_token_indices] = 1` work as intended.
I hope this is clear. Thanks!
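The difference is easy to check with a hypothetical padded sequence (the ids below are made up; `0` stands in for `pad_token_id`):

```python
import numpy as np

input_ids = [101, 2054, 2003, 102, 0, 0]  # hypothetical ids; trailing 0s are padding
pad_token_id = 0

# Current code: a Python list compared to an int is just the bool False,
# and np.where(False) yields an empty index array
buggy_indices = np.where(input_ids == pad_token_id)
assert buggy_indices[0].size == 0

# Proposed fix: element-wise comparison on an ndarray
pad_token_indices = np.where(np.array(input_ids) == pad_token_id)
p_mask = np.zeros(len(input_ids))
p_mask[pad_token_indices] = 1
print(p_mask)  # [0. 0. 0. 0. 1. 1.]
```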
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20748/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20747
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20747/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20747/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20747/events
|
https://github.com/huggingface/transformers/issues/20747
| 1,493,852,465
|
I_kwDOCUB6oc5ZCmEx
| 20,747
|
Add GPT-2-climate
|
{
"login": "saeedashraf",
"id": 22497185,
"node_id": "MDQ6VXNlcjIyNDk3MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22497185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saeedashraf",
"html_url": "https://github.com/saeedashraf",
"followers_url": "https://api.github.com/users/saeedashraf/followers",
"following_url": "https://api.github.com/users/saeedashraf/following{/other_user}",
"gists_url": "https://api.github.com/users/saeedashraf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saeedashraf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saeedashraf/subscriptions",
"organizations_url": "https://api.github.com/users/saeedashraf/orgs",
"repos_url": "https://api.github.com/users/saeedashraf/repos",
"events_url": "https://api.github.com/users/saeedashraf/events{/privacy}",
"received_events_url": "https://api.github.com/users/saeedashraf/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hey @saeedashraf are we expecting to work on integrating this in Hugging Face? If so then I'll be interested in helping out.",
"Hi Manish,\n\nYe ... we would like to fully integrate this.\n\n\nOn Mon, Jan 9, 2023 at 5:47 PM Manish ***@***.***> wrote:\n\n> Hey @saeedashraf <https://github.com/saeedashraf> are we expecting to\n> work on integrating this in Hugging Face? If so then I'll be interested in\n> helping out.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/20747#issuecomment-1375935797>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFLUPIJHVKSB5GHO53JETW3WRQ6LTANCNFSM6AAAAAAS47TOKU>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"Okay, so the GPT model itself is available in HuggingFace. Do we wish to incorporate this dataset? or just the training objective?"
] | 1,670
| 1,673
| null |
NONE
| null |
### Model description
GPT-2 was pretrained on a climate-change-related corpus consisting of over 500 thousand abstracts of articles by top climate scientists, drawn from trustworthy sources covering large temporal and spatial scales. The climate-gpt-2 model could further be used for downstream tasks in the climate change domain, including classification, fact-checking, and text generation (climate-change-related texts).
paper: https://www.climatechange.ai/papers/neurips2022/27
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
@seashr
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20747/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20746
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20746/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20746/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20746/events
|
https://github.com/huggingface/transformers/pull/20746
| 1,493,828,119
|
PR_kwDOCUB6oc5FTDJC
| 20,746
|
Fixing the pipeline tutorial test.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Well it works suddenly! I will merge this one."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20746/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20746",
"html_url": "https://github.com/huggingface/transformers/pull/20746",
"diff_url": "https://github.com/huggingface/transformers/pull/20746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20746.patch",
"merged_at": 1670954910000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20745
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20745/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20745/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20745/events
|
https://github.com/huggingface/transformers/issues/20745
| 1,493,436,273
|
I_kwDOCUB6oc5ZBAdx
| 20,745
|
"Loading weights from local directory"
|
{
"login": "ChingKwanCheung",
"id": 30768203,
"node_id": "MDQ6VXNlcjMwNzY4MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/30768203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChingKwanCheung",
"html_url": "https://github.com/ChingKwanCheung",
"followers_url": "https://api.github.com/users/ChingKwanCheung/followers",
"following_url": "https://api.github.com/users/ChingKwanCheung/following{/other_user}",
"gists_url": "https://api.github.com/users/ChingKwanCheung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChingKwanCheung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChingKwanCheung/subscriptions",
"organizations_url": "https://api.github.com/users/ChingKwanCheung/orgs",
"repos_url": "https://api.github.com/users/ChingKwanCheung/repos",
"events_url": "https://api.github.com/users/ChingKwanCheung/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChingKwanCheung/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I download the LTP model from https://huggingface.co/LTP/small/tree/main . And put it in the LTP-small local folder.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
I've been stuck at "Loading weights from local directory" when running the code (https://github.com/huggingface/transformers/blob/main/examples/research_projects/mlm_wwm/run_chinese_ref.py). My command for running run_chinese_ref.py is below:


|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20745/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20744
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20744/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20744/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20744/events
|
https://github.com/huggingface/transformers/pull/20744
| 1,493,416,562
|
PR_kwDOCUB6oc5FRofM
| 20,744
|
Add config for generating ONNX models for table-transformers
|
{
"login": "antew",
"id": 1693421,
"node_id": "MDQ6VXNlcjE2OTM0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1693421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antew",
"html_url": "https://github.com/antew",
"followers_url": "https://api.github.com/users/antew/followers",
"following_url": "https://api.github.com/users/antew/following{/other_user}",
"gists_url": "https://api.github.com/users/antew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antew/subscriptions",
"organizations_url": "https://api.github.com/users/antew/orgs",
"repos_url": "https://api.github.com/users/antew/repos",
"events_url": "https://api.github.com/users/antew/events{/privacy}",
"received_events_url": "https://api.github.com/users/antew/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20744). All of your documentation changes will be reflected on that endpoint.",
"Note that we don't accept new model exports in Transformers, as we moved that part into the Optimum library. You should open a PR to add support there :-)",
"> Note that we don't accept new model exports in Transformers, as we moved that part into the Optimum library. You should open a PR to add support there :-)\r\n\r\nThank you @sgugger, will do!"
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Allows generating ONNX models for table-transformers.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge - This adds a small bit on top of the work you did in https://github.com/huggingface/transformers/pull/19614 to allow generating ONNX models for table transformers (e.g. `python -m transformers.onnx --model="microsoft/table-transformer-structure-recognition" table/`)
Without this addition, I was getting this error:
```
KeyError: "table-transformer is not supported yet.
Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet_v1', 'mobilenet_v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos']
are supported. If you want to support table-transformer please propose a PR or open up an issue.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20744/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20744",
"html_url": "https://github.com/huggingface/transformers/pull/20744",
"diff_url": "https://github.com/huggingface/transformers/pull/20744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20744.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20743
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20743/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20743/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20743/events
|
https://github.com/huggingface/transformers/issues/20743
| 1,492,797,984
|
I_kwDOCUB6oc5Y-kog
| 20,743
|
Issue with Tokenizer (fast) splitting `<mask>` into constituent added special tokens despite mask token in vocab and in special tokens map
|
{
"login": "simonlevine",
"id": 50503513,
"node_id": "MDQ6VXNlcjUwNTAzNTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/50503513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonlevine",
"html_url": "https://github.com/simonlevine",
"followers_url": "https://api.github.com/users/simonlevine/followers",
"following_url": "https://api.github.com/users/simonlevine/following{/other_user}",
"gists_url": "https://api.github.com/users/simonlevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonlevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonlevine/subscriptions",
"organizations_url": "https://api.github.com/users/simonlevine/orgs",
"repos_url": "https://api.github.com/users/simonlevine/repos",
"events_url": "https://api.github.com/users/simonlevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonlevine/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Interesting, this is part of a series of bug we have with different behaviours between fast and slow. Thanks for posting.",
"Thank you for your response @ArthurZucker . I would be happy to provide details about instantiation and behavior if needed.",
"Just to be able to reproduce correctly could you tell me which tokenizer are you using? ",
"RobertaTokenizerFast ",
"Could you push your tokenizer to the hub? I can't really reproduce this now",
"I also faced the same issue when trained using the ByteLevelBPETokenizer suggested in https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb#scrollTo=IMnymRDLe0hi\r\n\r\n<b>Tokenizer training</b>: \r\n\r\n```\r\ntokenizer = ByteLevelBPETokenizer()\r\ntokenizer.train_from_iterator(iterator=LIST_OF_STRINGS, vocab_size=52000, min_frequency=2, special_tokens=[\r\n \"<s>\",\r\n \"<pad>\",\r\n \"</s>\",\r\n \"<unk>\",\r\n \"<mask>\",\r\n])\r\n```\r\n\r\n<b>Tokenizer use</b>:\r\n\r\n```\r\ntokenizer = RobertaTokenizerFast(vocab_file=\"<VOCAB_FILE_PATH>\",\r\n merges_file=\"<MERGES_FILE_PATH>\",\r\n max_len=512)\r\n```\r\n\r\n\r\nThis tokenizer gives me: ```['<s>', '<', 'mask', '>', '</s>']``` when i use: \r\n\r\n```\r\ntokenizer.convert_ids_to_tokens(tokenizer.encode(tokenizer.mask_token))\r\n```\r\n\r\n\r\nis there a known fix to this ? I am using python 3.8, transformers 4.24.0 and tokenizers 0.13.1",
"I will have a look thanks 😉 ",
"This will be related to the `tokenizer` library as both reports include `fast`. Not stale! ",
"Thanks for your patience 🤗 \r\n1. In the current state, it is not a problem with the tokenizer itself as:\r\n```python \r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\", use_fast= True)\r\ntokenizer(tokenizer.mask_token, add_special_tokens=False)\r\n```\r\ncorrectly outputs `50264`. \r\n\r\n2. Regarding the training of the tokenizer, the notebook works well for me and I cannot reproduce the issue that you are getting. Are you sure that you properly saved the vocabulary and merges with `tokenizer.save_model()` (using the rust tokenizer) ? \r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,683
| 1,683
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.11.0.rc0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to Reproduce Behavior:
1.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("./tok", use_fast= True)
tokenizer(tokenizer.mask_token, add_special_tokens=False)
```
Evaluates to `{'input_ids': [11, 10], 'attention_mask': [1, 1]}`
2.
```
tokenizer_slow = AutoTokenizer.from_pretrained("./tok", use_fast= False)
tokenizer_slow(tokenizer_slow.mask_token, add_special_tokens=False)
```
Evaluates to `{'input_ids': [4], 'attention_mask': [1]}` (as expected).
Note that in either case, mask_token is `<mask>` and corresponds to mask_token_id 4.
Note also that the directory `tok` contains merges.txt, special_tokens_map.json, tokenizer_config.json, tokenizer.json, and vocab.json. The additional_special_tokens and vocab contain `{... "m":11, "s":10 ,...}`, so I believe the Rust tokenizer is matching these added special tokens before considering the `<mask>` token.
### Expected behavior
`tokenizer_slow(tokenizer_slow.mask_token, add_special_tokens=False)['input_ids'] == tokenizer(tokenizer.mask_token, add_special_tokens=False)['input_ids'] == [4]` would evaluate to `True`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20743/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20742
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20742/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20742/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20742/events
|
https://github.com/huggingface/transformers/pull/20742
| 1,492,796,363
|
PR_kwDOCUB6oc5FPfZE
| 20,742
|
Add docs xlm roberta
|
{
"login": "hazrulakmal",
"id": 24774385,
"node_id": "MDQ6VXNlcjI0Nzc0Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/24774385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hazrulakmal",
"html_url": "https://github.com/hazrulakmal",
"followers_url": "https://api.github.com/users/hazrulakmal/followers",
"following_url": "https://api.github.com/users/hazrulakmal/following{/other_user}",
"gists_url": "https://api.github.com/users/hazrulakmal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hazrulakmal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hazrulakmal/subscriptions",
"organizations_url": "https://api.github.com/users/hazrulakmal/orgs",
"repos_url": "https://api.github.com/users/hazrulakmal/repos",
"events_url": "https://api.github.com/users/hazrulakmal/events{/privacy}",
"received_events_url": "https://api.github.com/users/hazrulakmal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes [20055](https://github.com/huggingface/transformers/issues/20055)
I have opened a fresh PR to add resources for XLM-RoBERTa.
1. Added a link to a blog post on fine-tuning for classification tasks with Habana Gaudi
2. Fixed typos
3. Removed TF and Flax XLMRobertaForCausalLM
4. Updated my branch to reflect the latest main branch
Thanks for your help reviewing the changes! @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20742/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20742",
"html_url": "https://github.com/huggingface/transformers/pull/20742",
"diff_url": "https://github.com/huggingface/transformers/pull/20742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20742.patch",
"merged_at": 1670952356000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20741
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20741/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20741/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20741/events
|
https://github.com/huggingface/transformers/issues/20741
| 1,492,723,514
|
I_kwDOCUB6oc5Y-Sc6
| 20,741
|
T5ForConditionalGeneration with BetterTransformer
|
{
"login": "R4ZZ3",
"id": 25264037,
"node_id": "MDQ6VXNlcjI1MjY0MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25264037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/R4ZZ3",
"html_url": "https://github.com/R4ZZ3",
"followers_url": "https://api.github.com/users/R4ZZ3/followers",
"following_url": "https://api.github.com/users/R4ZZ3/following{/other_user}",
"gists_url": "https://api.github.com/users/R4ZZ3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/R4ZZ3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/R4ZZ3/subscriptions",
"organizations_url": "https://api.github.com/users/R4ZZ3/orgs",
"repos_url": "https://api.github.com/users/R4ZZ3/repos",
"events_url": "https://api.github.com/users/R4ZZ3/events{/privacy}",
"received_events_url": "https://api.github.com/users/R4ZZ3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think it needs to be implemented in optimum, so you should move the issue there. cc @younesbelkada ",
"Moving to optimum"
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### Feature request
Got this error when trying out BetterTransformer:
"The Better Transformers implementation for the model T5ForConditionalGeneration has not been implemented yet. Please open an issue requesting the addition of this model with its `BetterTransformer`implementation."
### Motivation
I would like to speed up some text2text models with BetterTransformer
### Your contribution
I can at least test the feature once it is ready
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20741/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20740
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20740/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20740/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20740/events
|
https://github.com/huggingface/transformers/pull/20740
| 1,492,508,256
|
PR_kwDOCUB6oc5FOe2h
| 20,740
|
Use tf.keras.Input to build TF models instead of actual Tensors
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20740). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this for now - there's a whole pile of issues with ensuring weight names line up that I haven't been able to fully resolve. I might come back to this, but the effort:reward ratio isn't really favourable right now!"
] | 1,670
| 1,674
| 1,674
|
MEMBER
| null |
This PR does three things!
### Rework signatures for `serving()`
We use `None` for almost every dimension in our `serving()` signatures, which indicates that the dimension is variable. However, in several cases, the dimension is static **for a given model, but not for that whole model class.**
For example, an image model might have an attribute like `config.num_channels`. Once this value is known, the channels dimension of the input must have this value. Therefore, the serving signature should reflect it too, to ensure the exported functions work correctly!
Right now we use `tf.function` as a decorator on the `serving()` method, but when we do this the serving signature must be fully static. This PR instead removes the decorator and creates `self.serving` by calling `tf.function` in the `__init__`, when the `config` is available and these values are known. This change should be 100% transparent from the user's perspective, but will fix several issues.
### Use the serving signature instead of dummy inputs
Now that we have correct serving signatures, we can use them to build our models! This PR changes `from_pretrained` to use the serving signature instead of dummy inputs. Because the serving signature can have `None` dimensions, we cannot actually build real `Tensors` from that shape. However, we can create `tf.keras.Input` placeholders with that shape, and pass these through the model instead.
This is quite a significant change, because using placeholder inputs converts the build process from an eager forward pass into a TF compilation. However, this ensures that the model save signature (used when exporting a `SavedModel`) has correct variable dimensions, which we were only able to do with very hacky calls to `_set_save_spec()` until now. This change surfaced several bugs, but most model classes had no issues with it.
### Clean up building and `name_scope`
Almost all of the issues that resulted from building-by-compilation are caused by our use of `name_scope` or `variable_scope`. I believe the issue is that TF creates variables in a slightly different way in an eager versus a compilation context, and I **think** this should be resolvable by refactoring our uses of `tf.name_scope` or `tf.compat.v1.variable_scope` to `tf.name_scope(use_resource=True)`, as all eager variables are `ResourceVariable` by default.
Also, compilation is slower than an eager forward pass when the inputs are small. However, we had several inefficiencies in `from_pretrained`, including a repeated build step that I don't think was necessary. By removing that, speed should be similar or even better than it was before!
- [x] Move serving signatures to a model `@property` called `serving_signature`.
- [x] Change the `build_with_dummies()` to use that serving signature
- [x] Delete the `serving()` method on all models.
- [x] In the base `__init__()`, create `self.serving` by compiling `self.eager_serving` with `self.serving_signature`
- [ ] Find models with failing builds and fill in the static dimensions for them.
- [ ] Allow custom signatures in `from_pretrained`
- [x] Check that tests using composite models pass
- [x] ~Deprecate dummies entirely?~
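As an illustration of the approach described above, here is a minimal sketch (with an invented `ToyModel`, not the actual `transformers` code) of compiling the serving function in `__init__` once config-dependent dimensions, such as a channel count, are known:

```python
import tensorflow as tf


class ToyModel(tf.keras.Model):
    def __init__(self, num_channels):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)
        # Build the serving function here, once config values (num_channels)
        # are known, instead of decorating eager_serving with a fully static
        # @tf.function signature at class-definition time.
        self.serving = tf.function(
            self.eager_serving,
            input_signature=[
                tf.TensorSpec([None, None, None, num_channels], tf.float32)
            ],
        )

    def eager_serving(self, pixel_values):
        return self.dense(pixel_values)
```

The batch and spatial dimensions stay variable (`None`) while the channel dimension is pinned to the model's config value, so the exported signature matches the model instance.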
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20740/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20740",
"html_url": "https://github.com/huggingface/transformers/pull/20740",
"diff_url": "https://github.com/huggingface/transformers/pull/20740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20740.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20739
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20739/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20739/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20739/events
|
https://github.com/huggingface/transformers/pull/20739
| 1,492,442,939
|
PR_kwDOCUB6oc5FOQKR
| 20,739
|
Add decorator for flaky Donut tests
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"`is_flaky` has to be used as `is_flaky()`, otherwise the actual tests won't be run, and will always pass."
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Tests occasionally fail for Donut. Adding a decorator to handle the flakiness whilst waiting for a resolution to be merged in.
Issue raised here: https://github.com/huggingface/transformers/issues/20738
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20739/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20739",
"html_url": "https://github.com/huggingface/transformers/pull/20739",
"diff_url": "https://github.com/huggingface/transformers/pull/20739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20739.patch",
"merged_at": 1670869527000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20738
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20738/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20738/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20738/events
|
https://github.com/huggingface/transformers/issues/20738
| 1,492,440,343
|
I_kwDOCUB6oc5Y9NUX
| 20,738
|
Flaky feature extraction tests for Donut
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This is not yet resolved. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,676
| null |
COLLABORATOR
| null |
### System Info
Note: I have observed this with my own setup, but most recently this was seen in a CI environment
- `transformers` version: 4.26.0.dev0
- Platform: macOS-13.0.1-arm64-arm-64bit
- Python version: 3.9.15
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.14.0.dev20221118 (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pytest tests/models/donut/test_feature_extraction_donut.py::DonutFeatureExtractionTest
### Expected behavior
Tests don't randomly fail e.g. like in [this CI run](https://app.circleci.com/pipelines/github/huggingface/transformers/53575/workflows/40f5b896-c941-4a9f-8946-26f54bb14505/jobs/644204).
There is an issue when the amount to pad is calculated: for some dimensions a negative padding value is computed.
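To illustrate the failure mode (a hypothetical sketch, not the actual Donut feature extractor code): a naive "target minus actual" computation goes negative whenever the resized image overshoots the target size on some dimension, and clamping at zero avoids it.

```python
def pad_amounts(image_shape, target_shape):
    # Naive computation: goes negative if image_shape exceeds target_shape
    # on any dimension.
    return [t - s for s, t in zip(image_shape, target_shape)]


def safe_pad_amounts(image_shape, target_shape):
    # Clamping at zero avoids passing negative padding downstream.
    return [max(0, t - s) for s, t in zip(image_shape, target_shape)]
```

For example, `pad_amounts((224, 240), (224, 224))` returns `[0, -16]`, while `safe_pad_amounts` returns `[0, 0]`.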
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20738/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20737
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20737/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20737/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20737/events
|
https://github.com/huggingface/transformers/issues/20737
| 1,492,351,317
|
I_kwDOCUB6oc5Y83lV
| 20,737
|
RWKV4neo
|
{
"login": "ArEnSc",
"id": 6252325,
"node_id": "MDQ6VXNlcjYyNTIzMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6252325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArEnSc",
"html_url": "https://github.com/ArEnSc",
"followers_url": "https://api.github.com/users/ArEnSc/followers",
"following_url": "https://api.github.com/users/ArEnSc/following{/other_user}",
"gists_url": "https://api.github.com/users/ArEnSc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArEnSc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArEnSc/subscriptions",
"organizations_url": "https://api.github.com/users/ArEnSc/orgs",
"repos_url": "https://api.github.com/users/ArEnSc/repos",
"events_url": "https://api.github.com/users/ArEnSc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArEnSc/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"@ArthurZucker \r\n@younesbelkada \r\nhttps://github.com/huggingface/transformers/issues/17230#issuecomment-1346779059",
"This is super cool !! 🔥 \r\nReally looking forward to it \r\nAlso as discussed with @leondz , [we'll probably also include a blogpost](https://github.com/huggingface/transformers/issues/17230#issuecomment-1338060393) explaining this new architecture!\r\nHappy to help for the implementation and the blogpost! ",
"Ok I am gonna just set this up on my linux machine m1 setup isn't ready I spent 2 hours on this gonna try again tomorrow sorry D: ",
"> Ok I am gonna just set this up on my linux machine m1 setup isn't ready I spent 2 hours on this gonna try again tomorrow sorry D:\r\n\r\nscratch that, it also didn't work, mainly because of my limited harddisk space. Gonna retry mac...",
"```\r\nBuilding wheels for collected packages: transformers, onnx\r\n Building editable for transformers (pyproject.toml) ... done\r\n Created wheel for transformers: filename=transformers-4.26.0.dev0-0.editable-py3-none-any.whl size=31899 sha256=d4b123bfb17b8f11ab811da95d253a066a98638c492d76f4d00afa509dfc6d71\r\n Stored in directory: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-ephem-wheel-cache-w5ucgfda/wheels/52/2c/02/9b0e2ee52910e61c69011870086c52ab4eaaa554c34005f48f\r\n Building wheel for onnx (setup.py) ... error\r\n error: subprocess-exited-with-error\r\n \r\n × python setup.py bdist_wheel did not run successfully.\r\n │ exit code: 1\r\n ╰─> [76 lines of output]\r\n \r\n You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n running create_version\r\n running cmake_build\r\n Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']\r\n -- The C compiler identification is unknown\r\n -- The CXX compiler identification is unknown\r\n -- Detecting C compiler ABI info\r\n -- Detecting C compiler ABI info - failed\r\n -- Check for working C compiler: /usr/bin/cc\r\n -- Check for working C compiler: /usr/bin/cc - broken\r\n CMake Error at /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/cmake/data/share/cmake-3.25/Modules/CMakeTestCCompiler.cmake:70 (message):\r\n The C 
compiler\r\n \r\n \"/usr/bin/cc\"\r\n \r\n is not able to compile a simple test program.\r\n \r\n It fails with the following output:\r\n \r\n Change Dir: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeScratch/TryCompile-cd2Kpc\r\n \r\n Run Build Command(s):/usr/bin/make -f Makefile cmTC_a6349/fast &&\r\n You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.\r\n \r\n \r\n \r\n \r\n \r\n CMake will not be able to correctly generate this project.\r\n Call Stack (most recent call first):\r\n CMakeLists.txt:17 (project)\r\n \r\n \r\n -- Configuring incomplete, errors occurred!\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log\".\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeError.log\".\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py\", line 332, in <module>\r\n setuptools.setup(\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py\", line 148, in setup\r\n dist.run_commands()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 966, in run_commands\r\n self.run_command(cmd)\r\n File 
\"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/wheel/bdist_wheel.py\", line 325, in run\r\n self.run_command(\"build\")\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py\", line 135, in run\r\n self.run_command(cmd_name)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py\", line 223, in run\r\n self.run_command(\"cmake_build\")\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py\", line 209, in run\r\n subprocess.check_call(cmake_args)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py\", line 373, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\n subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', 
'-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']' returned non-zero exit status 1.\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n ERROR: Failed building wheel for onnx\r\n Running setup.py clean for onnx\r\nSuccessfully built transformers\r\nFailed to build onnx\r\nInstalling collected packages: onnx, tf2onnx, nbformat, matplotlib, markdown, jaxlib, jax, huggingface-hub, gql, google-auth, GitPython, Flask, docker, cryptography, cliff, botocore, arrow, alembic, aiohttp, accelerate, transformers, timm, s3transfer, pyOpenSSL, optuna, librosa, kubernetes, jinja2-time, ipython, google-auth-oauthlib, dash, csvw, chex, APScheduler, tensorboard, optax, hf-doc-builder, datasets, dash-bootstrap-components, cookiecutter, clldutils, boto3, tensorflow-macos, sigopt, segments, flax, evaluate, codecarbon, phonemizer\r\n Attempting uninstall: onnx\r\n Found existing installation: onnx 1.13.0\r\n Uninstalling onnx-1.13.0:\r\n Successfully uninstalled onnx-1.13.0\r\n Running setup.py install for onnx ... 
error\r\n error: subprocess-exited-with-error\r\n \r\n × Running setup.py install for onnx did not run successfully.\r\n │ exit code: 1\r\n ╰─> [78 lines of output]\r\n \r\n You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.\r\n running install\r\n running build\r\n running build_py\r\n running create_version\r\n running cmake_build\r\n Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']\r\n -- The C compiler identification is unknown\r\n -- The CXX compiler identification is unknown\r\n -- Detecting C compiler ABI info\r\n -- Detecting C compiler ABI info - failed\r\n -- Check for working C compiler: /usr/bin/cc\r\n -- Check for working C compiler: /usr/bin/cc - broken\r\n CMake Error at /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/cmake/data/share/cmake-3.25/Modules/CMakeTestCCompiler.cmake:70 (message):\r\n The C compiler\r\n \r\n \"/usr/bin/cc\"\r\n \r\n is not able to compile a simple test program.\r\n \r\n It fails with the following output:\r\n \r\n Change Dir: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeScratch/TryCompile-WGiugk\r\n \r\n Run Build Command(s):/usr/bin/make -f Makefile cmTC_3370b/fast &&\r\n You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to 
review and agree to the Xcode license agreements.\r\n \r\n \r\n \r\n \r\n \r\n CMake will not be able to correctly generate this project.\r\n Call Stack (most recent call first):\r\n CMakeLists.txt:17 (project)\r\n \r\n \r\n -- Configuring incomplete, errors occurred!\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log\".\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/.setuptools-cmake-build/CMakeFiles/CMakeError.log\".\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py\", line 332, in <module>\r\n setuptools.setup(\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py\", line 148, in setup\r\n dist.run_commands()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 966, in run_commands\r\n self.run_command(cmd)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/command/install.py\", line 61, in run\r\n return orig.install.run(self)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/install.py\", line 546, in run\r\n self.run_command('build')\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n 
self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py\", line 135, in run\r\n self.run_command(cmd_name)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py\", line 223, in run\r\n self.run_command(\"cmake_build\")\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f/setup.py\", line 209, in run\r\n subprocess.check_call(cmake_args)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py\", line 373, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\n subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-ppsum_cw/onnx_d98e5ffd9f474a8b9f3cf5ca58551b5f']' 
returned non-zero exit status 1.\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n Rolling back uninstall of onnx\r\n Moving to /Users/michaelchung/Code/transformers/.env/bin/backend-test-tools\r\n from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-rs3gpves/backend-test-tools\r\n Moving to /Users/michaelchung/Code/transformers/.env/bin/check-model\r\n from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-rs3gpves/check-model\r\n Moving to /Users/michaelchung/Code/transformers/.env/bin/check-node\r\n from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-rs3gpves/check-node\r\n Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx-1.13.0.dist-info/\r\n from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx-1.13.0.dist-info\r\n Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx/\r\n from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx\r\nerror: legacy-install-failure\r\n\r\n× Encountered error while trying to install package.\r\n╰─> onnx\r\n\r\nnote: This is an issue with the package mentioned above, not pip.\r\nhint: See above for output from the failure.\r\n\r\n```\r\nFails at building wheels for transformers and onnx\r\nI installed onnx manually using pip3 
\r\n```\r\nabsl-py==1.3.0\r\naiosignal==1.3.1\r\nappdirs==1.4.4\r\nappnope==0.1.3\r\nasttokens==2.2.1\r\nastunparse==1.6.3\r\nasync-timeout==4.0.2\r\nattrs==22.1.0\r\naudioread==3.0.0\r\nautopage==0.5.1\r\nBabel==2.11.0\r\nbackcall==0.2.0\r\nbackoff==1.11.1\r\nbeautifulsoup4==4.11.1\r\nbinaryornot==0.4.4\r\nblack==22.3.0\r\ncachetools==5.2.0\r\ncertifi==2022.12.7\r\ncffi==1.15.1\r\nchardet==5.1.0\r\ncharset-normalizer==2.1.1\r\nclick==8.1.3\r\ncmaes==0.9.0\r\ncmake==3.25.0\r\ncmd2==2.4.2\r\ncolorama==0.4.6\r\ncoloredlogs==15.0.1\r\ncolorlog==6.7.0\r\ncommonmark==0.9.1\r\ncontourpy==1.0.6\r\ncycler==0.11.0\r\ndash-core-components==2.0.0\r\ndash-html-components==2.0.0\r\ndash-table==5.0.0\r\ndecorator==5.1.1\r\ndill==0.3.4\r\ndistlib==0.3.6\r\ndlinfo==1.2.1\r\ndm-tree==0.1.7\r\nexceptiongroup==1.0.4\r\nexecnet==1.9.0\r\nexecuting==1.2.0\r\nfaiss-cpu==1.7.3\r\nfastjsonschema==2.16.2\r\nfilelock==3.8.2\r\nfire==0.5.0\r\nflake8==6.0.0\r\nflatbuffers==2.0.7\r\nfonttools==4.38.0\r\nfrozenlist==1.3.3\r\nfsspec==2022.11.0\r\nfugashi==1.1.2a6\r\ngast==0.4.0\r\ngitdb==4.0.10\r\ngoogle-pasta==0.2.0\r\ngraphql-core==3.2.3\r\ngrpcio==1.51.1\r\nh5py==3.7.0\r\nhumanfriendly==10.0\r\nhypothesis==6.61.0\r\nidna==3.4\r\nimportlib-metadata==4.13.0\r\niniconfig==1.1.1\r\nipadic==1.0.0\r\nisodate==0.6.1\r\nisort==5.11.2\r\nitsdangerous==2.1.2\r\njedi==0.18.2\r\nJinja2==3.1.2\r\njmespath==1.0.1\r\njoblib==1.2.0\r\njsonschema==4.17.3\r\njupyter_core==5.1.0\r\nkenlm==0.1\r\nkeras==2.11.0\r\nkeras-nlp==0.3.1\r\nkiwisolver==1.4.4\r\nlanguage-tags==1.1.0\r\nlibclang==14.0.6\r\nllvmlite==0.39.1\r\nlxml==4.9.2\r\nMako==1.2.4\r\nMarkupSafe==2.1.1\r\nmatplotlib-inline==0.1.6\r\nmccabe==0.7.0\r\nmpmath==1.2.1\r\nmsgpack==1.0.4\r\nmultidict==6.0.3\r\nmultiprocess==0.70.12.2\r\nmypy-extensions==0.4.3\r\nnltk==3.8\r\nnumba==0.56.4\r\nnumpy==1.23.5\r\noauthlib==3.2.2\r\nonnx==1.13.0\r\nonnxconverter-common==1.13.0\r\nonnxruntime==1.13.1\r\nonnxruntime-tools==1.7.0\r\nopt-einsum==3.3.0\r\npackaging==22.
0\r\npandas==1.5.2\r\nparameterized==0.8.1\r\nparso==0.8.3\r\npathspec==0.10.3\r\npbr==5.11.0\r\npexpect==4.8.0\r\npickleshare==0.7.5\r\nPillow==9.3.0\r\nPint==0.16.1\r\nplac==1.3.5\r\nplatformdirs==2.6.0\r\nplotly==5.11.0\r\npluggy==1.0.0\r\npooch==1.6.0\r\nportalocker==2.0.0\r\npoyo==0.5.0\r\nprettytable==3.5.0\r\nprompt-toolkit==3.0.36\r\nprotobuf==3.19.6\r\npsutil==5.9.4\r\nptyprocess==0.7.0\r\npure-eval==0.2.2\r\npy-cpuinfo==9.0.0\r\npy3nvml==0.2.7\r\npyarrow==10.0.1\r\npyasn1==0.4.8\r\npyasn1-modules==0.2.8\r\npycodestyle==2.10.0\r\npycparser==2.21\r\npyctcdecode==0.4.0\r\npyflakes==3.0.1\r\nPygments==2.13.0\r\npygtrie==2.5.0\r\npyknp==0.6.1\r\npylatexenc==2.10\r\npynvml==11.4.1\r\npyparsing==3.0.9\r\npyperclip==1.8.2\r\npypng==0.20220715.0\r\npyrsistent==0.19.2\r\npytest==7.2.0\r\npytest-timeout==2.1.0\r\npytest-xdist==3.1.0\r\npython-dateutil==2.8.2\r\npython-slugify==7.0.0\r\npytz==2022.6\r\npytz-deprecation-shim==0.1.0.post0\r\nPyYAML==5.4.1\r\nray==2.2.0\r\nrdflib==6.2.0\r\nregex==2022.10.31\r\nrequests==2.28.1\r\nrequests-oauthlib==1.3.1\r\nrequests-toolbelt==0.10.1\r\nresampy==0.4.2\r\nresponses==0.18.0\r\nrfc3986==1.5.0\r\nrich==11.2.0\r\nrjieba==0.1.11\r\nrouge-score==0.1.2\r\nrsa==4.9\r\nsacrebleu==1.5.1\r\nsacremoses==0.0.53\r\nsafetensors==0.2.6\r\nscikit-learn==1.2.0\r\nscipy==1.8.1\r\nsentencepiece==0.1.97\r\nsix==1.16.0\r\nsmmap==5.0.0\r\nsortedcontainers==2.4.0\r\nsoundfile==0.11.0\r\nsoupsieve==2.3.2.post1\r\nSQLAlchemy==1.4.45\r\nstack-data==0.6.2\r\nstevedore==4.1.1\r\nSudachiDict-core==20221021\r\nSudachiPy==0.6.6\r\nsympy==1.11.1\r\ntabulate==0.9.0\r\ntenacity==8.1.0\r\ntensorboard-data-server==0.6.1\r\ntensorboard-plugin-wit==1.8.1\r\ntensorboardX==2.5.1\r\ntensorflow-estimator==2.11.0\r\ntensorflow-metal==0.7.0\r\ntensorstore==0.1.28\r\ntermcolor==2.1.1\r\ntext-unidecode==1.3\r\nthreadpoolctl==3.1.0\r\ntimeout-decorator==0.5.0\r\ntokenizers==0.13.2\r\ntomli==2.0.1\r\ntoolz==0.12.0\r\ntorch==1.13.0\r\ntorchaudio==0.13.0\r\ntorchvision==0.
14.0\r\ntqdm==4.64.1\r\ntraitlets==5.7.1\r\ntyping_extensions==4.4.0\r\ntzdata==2022.7\r\ntzlocal==4.2\r\nunidic==1.1.0\r\nunidic-lite==1.0.8\r\nuritemplate==4.1.1\r\nurllib3==1.26.13\r\nvirtualenv==20.17.1\r\nwasabi==0.10.1\r\nwcwidth==0.2.5\r\nwebsocket-client==1.4.2\r\nWerkzeug==2.2.2\r\nwrapt==1.14.1\r\nxmltodict==0.13.0\r\nxxhash==3.1.0\r\nyarl==1.8.2\r\nzipp==3.11.0\r\n```\r\nI am on 3.9.11\r\n\r\n\r\nwhen running \r\n conda install -c conda-forge onnxruntime\r\nit says all the components were installed\r\n\r\n\r\n\r\n",
"hmmm I accepted the xcode license agreement.\r\nIt's built transformers I think it fails to build onnx, but whats odd is it uninstalls onnx. Maybe I can do with out it and remove it out of setup.py?\r\n```\r\nBuilding wheels for collected packages: transformers, onnx\r\n Building editable for transformers (pyproject.toml) ... done\r\n Created wheel for transformers: filename=transformers-4.26.0.dev0-0.editable-py3-none-any.whl size=31899 sha256=e63397c8685e05971af9d4767e872200eb8393ed081e37582f8d24760daaca08\r\n Stored in directory: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-ephem-wheel-cache-w75hif9g/wheels/52/2c/02/9b0e2ee52910e61c69011870086c52ab4eaaa554c34005f48f\r\n Building wheel for onnx (setup.py) ... error\r\n error: subprocess-exited-with-error\r\n \r\n × python setup.py bdist_wheel did not run successfully.\r\n │ exit code: 1\r\n ╰─> [67 lines of output]\r\n fatal: not a git repository (or any of the parent directories): .git\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n running create_version\r\n running cmake_build\r\n Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']\r\n -- The C compiler identification is AppleClang 14.0.0.14000029\r\n -- The CXX compiler identification is AppleClang 14.0.0.14000029\r\n -- Detecting C compiler ABI info\r\n -- Detecting C compiler ABI info - done\r\n -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped\r\n -- Detecting C 
compile features\r\n -- Detecting C compile features - done\r\n -- Detecting CXX compiler ABI info\r\n -- Detecting CXX compiler ABI info - done\r\n -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped\r\n -- Detecting CXX compile features\r\n -- Detecting CXX compile features - done\r\n -- Found PythonInterp: /Users/michaelchung/Code/transformers/.env/bin/python3 (found version \"3.9.11\")\r\n -- Found PythonLibs: /opt/homebrew/Frameworks/Python.framework/Versions/3.11/lib/libpython3.11.dylib (found version \"3.9.11\")\r\n Generated: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/onnx/onnx-ml.proto\r\n CMake Error at CMakeLists.txt:299 (message):\r\n Protobuf compiler not found\r\n Call Stack (most recent call first):\r\n CMakeLists.txt:330 (relative_protobuf_generate_cpp)\r\n \r\n \r\n -- Configuring incomplete, errors occurred!\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log\".\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeError.log\".\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py\", line 332, in <module>\r\n setuptools.setup(\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py\", line 148, in setup\r\n 
dist.run_commands()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 966, in run_commands\r\n self.run_command(cmd)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/wheel/bdist_wheel.py\", line 325, in run\r\n self.run_command(\"build\")\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py\", line 135, in run\r\n self.run_command(cmd_name)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py\", line 223, in run\r\n self.run_command(\"cmake_build\")\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py\", line 209, in run\r\n subprocess.check_call(cmake_args)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py\", line 373, in check_call\r\n raise CalledProcessError(retcode, 
cmd)\r\n subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']' returned non-zero exit status 1.\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n ERROR: Failed building wheel for onnx\r\n Running setup.py clean for onnx\r\nSuccessfully built transformers\r\nFailed to build onnx\r\nInstalling collected packages: onnx, tf2onnx, nbformat, matplotlib, markdown, jaxlib, jax, huggingface-hub, gql, google-auth, GitPython, Flask, docker, cryptography, cliff, botocore, arrow, alembic, aiohttp, accelerate, transformers, timm, s3transfer, pyOpenSSL, optuna, librosa, kubernetes, jinja2-time, ipython, google-auth-oauthlib, dash, csvw, chex, APScheduler, tensorboard, optax, hf-doc-builder, datasets, dash-bootstrap-components, cookiecutter, clldutils, boto3, tensorflow-macos, sigopt, segments, flax, evaluate, codecarbon, phonemizer\r\n Attempting uninstall: onnx\r\n Found existing installation: onnx 1.13.0\r\n Uninstalling onnx-1.13.0:\r\n Successfully uninstalled onnx-1.13.0\r\n Running setup.py install for onnx ... 
error\r\n error: subprocess-exited-with-error\r\n \r\n × Running setup.py install for onnx did not run successfully.\r\n │ exit code: 1\r\n ╰─> [55 lines of output]\r\n fatal: not a git repository (or any of the parent directories): .git\r\n running install\r\n running build\r\n running build_py\r\n running create_version\r\n running cmake_build\r\n Using cmake args: ['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']\r\n Generated: /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/onnx/onnx-ml.proto\r\n CMake Error at CMakeLists.txt:299 (message):\r\n Protobuf compiler not found\r\n Call Stack (most recent call first):\r\n CMakeLists.txt:330 (relative_protobuf_generate_cpp)\r\n \r\n \r\n -- Configuring incomplete, errors occurred!\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log\".\r\n See also \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/.setuptools-cmake-build/CMakeFiles/CMakeError.log\".\r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py\", line 332, in <module>\r\n setuptools.setup(\r\n File 
\"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/core.py\", line 148, in setup\r\n dist.run_commands()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 966, in run_commands\r\n self.run_command(cmd)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/setuptools/command/install.py\", line 61, in run\r\n return orig.install.run(self)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/install.py\", line 546, in run\r\n self.run_command('build')\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/command/build.py\", line 135, in run\r\n self.run_command(cmd_name)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py\", line 223, in run\r\n self.run_command(\"cmake_build\")\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File 
\"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/distutils/dist.py\", line 985, in run_command\r\n cmd_obj.run()\r\n File \"/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928/setup.py\", line 209, in run\r\n subprocess.check_call(cmake_args)\r\n File \"/Users/michaelchung/.pyenv/versions/3.9.11/lib/python3.9/subprocess.py\", line 373, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\n subprocess.CalledProcessError: Command '['/Users/michaelchung/Code/transformers/.env/bin/cmake', '-DPYTHON_INCLUDE_DIR=/Users/michaelchung/.pyenv/versions/3.9.11/include/python3.9', '-DPYTHON_EXECUTABLE=/Users/michaelchung/Code/transformers/.env/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-39-darwin.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-install-9aulk13y/onnx_331e37bbfc1440c0bc09391c545d3928']' returned non-zero exit status 1.\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n Rolling back uninstall of onnx\r\n Moving to /Users/michaelchung/Code/transformers/.env/bin/backend-test-tools\r\n from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-1ylzipwt/backend-test-tools\r\n Moving to /Users/michaelchung/Code/transformers/.env/bin/check-model\r\n from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-1ylzipwt/check-model\r\n Moving to /Users/michaelchung/Code/transformers/.env/bin/check-node\r\n from /private/var/folders/jn/8d33s3c55jv5pctdc6wdnm2h0000gn/T/pip-uninstall-1ylzipwt/check-node\r\n Moving to /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx-1.13.0.dist-info/\r\n from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx-1.13.0.dist-info\r\n Moving to 
/Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/onnx/\r\n from /Users/michaelchung/Code/transformers/.env/lib/python3.9/site-packages/~nnx\r\nerror: legacy-install-failure\r\n\r\n× Encountered error while trying to install package.\r\n╰─> onnx\r\n\r\nnote: This is an issue with the package mentioned above, not pip.\r\nhint: See above for output from the failure.\r\n\r\n```",
"Alright dev environment installed, the key here was not to use conda but miniforge",
"Ok draft created ",
"Do you know how to convert .pth model to config.json/pytorch_model.bin for RWKV4neo?",
"I have a conversion script + draft that has consistent logit ordering with the official implementation here:\r\n\r\nconversion script: https://github.com/tensorpro/transformers/blob/rwkv_draft/src/transformers/models/rwkv4_neo/convert_rwkv_original_pytorch_checkpoint_to_pytorch.py\r\nmodel in torch: https://github.com/tensorpro/transformers/blob/rwkv_draft/src/transformers/models/rwkv4_neo/modeling_rwkv4_neo.py\r\n\r\nI can clean it up and turn it into a PR if that would help?",
"Sure, there is also #22797 that should be in a pretty good state! I'm about to review it but feel free to add your touch to it if you feel like it! ",
"Oh cool, that one looks awesome!",
"> I have a conversion script + draft that has consistent logit ordering with the official implementation here:\r\n> \r\n> conversion script: https://github.com/tensorpro/transformers/blob/main/src/transformers/models/rwkv4_neo/convert_rwkv_original_pytorch_checkpoint_to_pytorch.py model in torch: https://github.com/tensorpro/transformers/blob/main/src/transformers/models/rwkv4_neo/modeling_rwkv4_neo.py\r\n> \r\n> I can clean it up and turn it into a PR if that would help?\r\n\r\n@tensorpro I could really use your scripts but I get 404 when I try to access those links. :/",
"Ah sorry, the links broke when I changed the branch I was working in. I edited the comment to point to the right branch\r\n\r\nThat said, you may want to use the code in #22797 since it will be closer to the official HF version and already supports CUDA accelerated WKV."
] | 1,670
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
### Model description
RWKV - Receptance Weighted Key Value
RWKV is a sequence-to-sequence model that combines the best features of Generative Pre-Training (GPT) and Recurrent Neural Networks (RNN) to perform language modelling (LM). It is used to generate text in an autoregressive (AR) manner.
This is a hybrid model.
It achieves Transformer-level performance without the quadratic attention mechanism. It borrows ideas from Attention Free Transformers, so the attention is linear in complexity, allowing for unbounded context through the hidden state in RWKV_RNN.
There are two variants of RWKV, referred to as modes.
RWKV_RNN: This mode is designed for running inference quickly.
RWKV_GPT: This mode is for training or fine-tuning your model quickly.
In the first pass we will implement RWKV_RNN, although we can share weights so that RWKV_GPT generates the initial context for RWKV_RNN.
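The equivalence between the two modes can be illustrated with the WKV operator at the heart of RWKV's linear attention: the same weighted average can be computed either in O(T²) over the whole sequence (GPT-style) or in O(T) with a small recurrent state (RNN-style). The sketch below is illustrative only — it uses scalar channels and hypothetical function names, not the actual RWKV implementation — and assumes the RWKV-4 formulation with a per-channel decay `w` and a current-token bonus `u`:

```python
import numpy as np

def wkv_parallel(k, v, w, u):
    """Direct O(T^2) WKV computation (GPT-style training mode).

    k, v: 1-D arrays of keys/values for one channel.
    w: positive time decay; u: bonus exponent for the current token.
    """
    T = len(k)
    out = np.empty(T)
    for t in range(T):
        # past tokens i < t are down-weighted by exp(-(t-1-i) * w)
        num = sum(np.exp(-(t - 1 - i) * w + k[i]) * v[i] for i in range(t))
        den = sum(np.exp(-(t - 1 - i) * w + k[i]) for i in range(t))
        # the current token gets the special "bonus" exponent u instead
        num += np.exp(u + k[t]) * v[t]
        den += np.exp(u + k[t])
        out[t] = num / den
    return out

def wkv_recurrent(k, v, w, u):
    """O(T) recurrent WKV computation (RNN inference mode).

    Carries only a numerator/denominator state pair (a, b), so context
    length is unbounded at constant memory.
    """
    a = b = 0.0
    out = []
    for kt, vt in zip(k, v):
        out.append((a + np.exp(u + kt) * vt) / (b + np.exp(u + kt)))
        a = np.exp(-w) * a + np.exp(kt) * vt  # decay old state, add new token
        b = np.exp(-w) * b + np.exp(kt)
    return np.array(out)

rng = np.random.default_rng(0)
k, v = rng.normal(size=8), rng.normal(size=8)
assert np.allclose(wkv_parallel(k, v, 0.5, 0.1), wkv_recurrent(k, v, 0.5, 0.1))
```

The assertion passing shows why weight sharing between the two modes works: both compute the same function, so RWKV_GPT can produce the initial hidden state that RWKV_RNN then extends token by token.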
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
- [ ] Scaffolding
- [ ] API Discussion
### Provide useful links for the implementation
More from the Research and Development Repository: https://github.com/BlinkDL/RWKV-LM
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20737/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 4,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20737/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20736
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20736/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20736/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20736/events
|
https://github.com/huggingface/transformers/pull/20736
| 1,492,339,969
|
PR_kwDOCUB6oc5FN5Of
| 20,736
|
rename `layoutlm_job` to `exotic_models_job`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This job now contains tests for `nat` and `dinat` models. Rename the job to make it more clear.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20736/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20736",
"html_url": "https://github.com/huggingface/transformers/pull/20736",
"diff_url": "https://github.com/huggingface/transformers/pull/20736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20736.patch",
"merged_at": 1670871736000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20735
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20735/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20735/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20735/events
|
https://github.com/huggingface/transformers/pull/20735
| 1,492,325,680
|
PR_kwDOCUB6oc5FN2Ay
| 20,735
|
Fix AdamWeightDecay for TF 2.11
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@gante take a quick look at this one too, please! Our `AdamWeightDecay` doesn't work with the new Optimizer base class in TF 2.11, so I moved it onto the legacy class for now. When I have more time I should clean this up properly, and possibly just use [the official AdamW](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/AdamW) instead.",
"_The documentation is not available anymore as the PR was closed or merged._",
"The same issue occurred with the Summarization tutorial. [https://huggingface.co/course/chapter7/5?fw=tf#models-for-text-summarization](url)",
"Thanks @el-profesor-04 - can you confirm that the issue goes away with you install the latest version of `transformers` from main with `pip install --upgrade git+https://github.com/huggingface/transformers.git`?"
] | 1,670
| 1,674
| 1,670
|
MEMBER
| null |
Fixes #20724
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20735/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20735",
"html_url": "https://github.com/huggingface/transformers/pull/20735",
"diff_url": "https://github.com/huggingface/transformers/pull/20735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20735.patch",
"merged_at": 1670935868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20734
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20734/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20734/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20734/events
|
https://github.com/huggingface/transformers/issues/20734
| 1,492,171,804
|
I_kwDOCUB6oc5Y8Lwc
| 20,734
|
`AddedToken` 's argument are ignored when called in `add_tokens` 's method of slow tokenizers
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I have not dropped this, it's still on my TODO list. There are a lot of linked issues!",
"(The update is gonna take longer as I am refactoring the tokenizers)",
"I had the same issue. It seems that `AddedToken` is implemented [here](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/tokenization_utils_base.py#L80), if the package `tokenizers` is not available. However, the python implementation using dataclasses does not behave the same way as the rust implementation in `tokenizers`."
] | 1,670
| 1,695
| 1,695
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The explanations of the bug and its reproduction are contained in the following google colab: https://colab.research.google.com/drive/19SS6Tzlgo0vntFtM6ZsCYq8BNZ5Dy1cS?usp=sharing
### Expected behavior
I would expect the fast and slow tokenizers to treat the `AddedToken`'s arguments in the same way.
I think the loss of information for the slow tokenizer occurs at this line: https://github.com/huggingface/transformers/blob/a413c725d40027134f28f92974ad7d61751f5640/src/transformers/tokenization_utils.py#L411
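A simplified, stdlib-only sketch (with a hypothetical stand-in class and helper, not the library's actual code) of how normalizing an `AddedToken` to a plain string at that line would silently drop its options:

```python
from dataclasses import dataclass


@dataclass
class AddedToken:
    # Simplified stand-in for transformers' AddedToken fallback class.
    content: str
    lstrip: bool = False
    rstrip: bool = False


def add_token_slow(token):
    # Mirrors the problematic pattern: the slow tokenizer normalizes
    # everything down to a plain string, so the lstrip/rstrip flags
    # attached to the AddedToken never reach the tokenizer's internals.
    return token.content if isinstance(token, AddedToken) else str(token)


token = AddedToken("<ent>", lstrip=True)
content = add_token_slow(token)
# The content survives, but lstrip=True is silently lost.
```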
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20734/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20733
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20733/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20733/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20733/events
|
https://github.com/huggingface/transformers/issues/20733
| 1,492,120,630
|
I_kwDOCUB6oc5Y7_Q2
| 20,733
|
Verify that a test in `LayoutLMv3`'s tokenizer is checking what we want
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks! Will try to get familiar with this 😉 "
] | 1,670
| 1,685
| 1,683
|
CONTRIBUTOR
| null |
I'm taking the liberty of opening an issue to share a question I've been keeping in the corner of my head, but now that I'll have less time to devote to `transformers` I prefer to share it before it's forgotten.
In the PR where the `LayoutLMv3` model was added, I was not very sure about the target value used for one of the tests that had to be overridden (the value was 1 in one of the previous commits and then changed to 0). The comment I am referring to is this one: https://github.com/huggingface/transformers/pull/17060#discussion_r872265358 .
Might be of interest to @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20733/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20732
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20732/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20732/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20732/events
|
https://github.com/huggingface/transformers/pull/20732
| 1,492,098,602
|
PR_kwDOCUB6oc5FNDPS
| 20,732
|
Convert tokenizer outputs for Keras in doc example
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20732). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,670
| 1,670
|
MEMBER
| null |
The TF examples in `training.mdx` don't turn the tokenizer outputs into a `dict`, so Keras gets confused.
Fixes #20709
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20732/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20732",
"html_url": "https://github.com/huggingface/transformers/pull/20732",
"diff_url": "https://github.com/huggingface/transformers/pull/20732.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20732.patch",
"merged_at": 1670861644000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20731
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20731/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20731/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20731/events
|
https://github.com/huggingface/transformers/pull/20731
| 1,492,033,572
|
PR_kwDOCUB6oc5FM1Cg
| 20,731
|
Disambiguate test for required_input in tokenization base file.
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
This is a prime example of why we should never rely on Python bool conversion magic, as it often fails with array-like types.
Fixes #19136
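A minimal stdlib sketch of the failure mode, using a stand-in class (hypothetical names) that mimics the ambiguous truth value of a multi-element NumPy array:

```python
class ArrayLike:
    """Stand-in mimicking a multi-element NumPy array's truth value."""

    def __init__(self, data):
        self.data = data

    def __bool__(self):
        # NumPy raises exactly this kind of error for multi-element arrays.
        raise ValueError(
            "the truth value of an array with more than one element is ambiguous"
        )

    def __len__(self):
        return len(self.data)


def has_input_ambiguous(required_input):
    return bool(required_input)  # raises ValueError for ArrayLike


def has_input_explicit(required_input):
    # Disambiguated check: test identity and length, never bool().
    return required_input is not None and len(required_input) > 0


batch = ArrayLike([101, 2023, 102])
assert has_input_explicit(batch)
try:
    has_input_ambiguous(batch)
except ValueError:
    pass  # exactly what happens with `if required_input:` on an array
```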
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20731/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20731",
"html_url": "https://github.com/huggingface/transformers/pull/20731",
"diff_url": "https://github.com/huggingface/transformers/pull/20731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20731.patch",
"merged_at": 1670868790000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20730
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20730/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20730/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20730/events
|
https://github.com/huggingface/transformers/pull/20730
| 1,491,869,289
|
PR_kwDOCUB6oc5FMQw0
| 20,730
|
Fix AutoModelTest.test_model_from_pretrained
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The other way to fix would be to modify the original files to remove the duplicated layer. I think that was suggested by @sgugger .\r\n\r\nI personally am ok with the proposed fix, it does seem the easiest route.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
From @Narsil
> Hmmmm it's because safetensors is installed in that environment and it's loading the safetensors weights, which indeed have 1 fewer unexpected key (the duplicated key was removed)
I tried `gpt2` and `roberta-base`; they have similar issues with some keys among `missing_keys`, `unexpected_keys`, etc.
I decided to stop trying and just add a condition.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20730/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20730",
"html_url": "https://github.com/huggingface/transformers/pull/20730",
"diff_url": "https://github.com/huggingface/transformers/pull/20730.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20730.patch",
"merged_at": 1670855864000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20729
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20729/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20729/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20729/events
|
https://github.com/huggingface/transformers/pull/20729
| 1,491,849,281
|
PR_kwDOCUB6oc5FMMMb
| 20,729
|
Adding ValueError when incompatible parameters are used.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20729). All of your documentation changes will be reflected on that endpoint.",
"@Narsil from what I read in #20662, it seems the parameters that should actually be incompatible may be `return_text` and `return_tensors`, rather than `return_text` and `return_full_text`?",
"Oh, them too, will create a new PR. But `return_text` and `return_full_text` are also exclusive I think."
] | 1,670
| 1,671
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding a ValueError when incompatible parameters are used.
https://github.com/huggingface/transformers/pull/20662#pullrequestreview-1210120116
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20729/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20729",
"html_url": "https://github.com/huggingface/transformers/pull/20729",
"diff_url": "https://github.com/huggingface/transformers/pull/20729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20729.patch",
"merged_at": 1670855953000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20728
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20728/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20728/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20728/events
|
https://github.com/huggingface/transformers/pull/20728
| 1,491,642,487
|
PR_kwDOCUB6oc5FLdh5
| 20,728
|
Redundant to_channel_dimension_format() call makes preprocessing fail in case the image has height of 1 pixel
|
{
"login": "dhansmair",
"id": 21751746,
"node_id": "MDQ6VXNlcjIxNzUxNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21751746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhansmair",
"html_url": "https://github.com/dhansmair",
"followers_url": "https://api.github.com/users/dhansmair/followers",
"following_url": "https://api.github.com/users/dhansmair/following{/other_user}",
"gists_url": "https://api.github.com/users/dhansmair/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhansmair/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhansmair/subscriptions",
"organizations_url": "https://api.github.com/users/dhansmair/orgs",
"repos_url": "https://api.github.com/users/dhansmair/repos",
"events_url": "https://api.github.com/users/dhansmair/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhansmair/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @amyeroberts ",
"sure thing!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
In the `resize()` function in image_transforms.py (line 267), I think the call `image = to_channel_dimension_format(image, ChannelDimension.LAST)` is redundant, as this conversion is also applied in the subsequent `to_pil_image()` call.
This redundant call actually makes the clip preprocessing fail in special cases. The problem can be reproduced with the following code snippet:
```python
import torch
from transformers.models.clip import CLIPFeatureExtractor
vision_processor = CLIPFeatureExtractor.from_pretrained('openai/clip-vit-large-patch14')
images = [
torch.rand(size=(3, 2, 10), dtype=torch.float),
torch.rand(size=(3, 10, 1), dtype=torch.float),
torch.rand(size=(3, 1, 10), dtype=torch.float)
]
for image in images:
processed_image = vision_processor(images=image, return_tensors="pt")['pixel_values']
print(processed_image.shape)
assert processed_image.shape == torch.Size([1, 3, 224, 224])
```
The last image has a height of 1 pixel.
The second call to `to_channel_dimension_format()` will transpose the image, and the height dimension is wrongly treated as the channels dimension afterwards. Because of this, the subsequent `normalize()` step raises an exception.
An image of height 1 pixel honestly doesn't make much sense, but it happened in my training on visual genome region descriptions and took me a while to track down the problem.
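An illustrative, hedged sketch (not the library's actual implementation) of why a height-1 image defeats channel-dimension inference after the extra transpose — a heuristic that treats a size-1 or size-3 axis as channels cannot distinguish H=1 from C=1:

```python
def infer_channel_dim(shape):
    # Illustrative heuristic: treat a size-1 or size-3 dimension as
    # channels, checking the leading axis first.
    if shape[0] in (1, 3):
        return 0
    if shape[-1] in (1, 3):
        return len(shape) - 1
    raise ValueError(f"cannot infer channel dimension from shape {shape}")


# Normal channels-first image: correctly identified.
assert infer_channel_dim((3, 224, 224)) == 0
# After the redundant conversion, an image of (H=1, W=10, C=3) presents
# a leading 1, which the heuristic wrongly takes to be the channel axis.
assert infer_channel_dim((1, 10, 3)) == 0  # should have been 2
```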
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20728/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20728",
"html_url": "https://github.com/huggingface/transformers/pull/20728",
"diff_url": "https://github.com/huggingface/transformers/pull/20728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20728.patch",
"merged_at": 1670939708000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20727
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20727/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20727/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20727/events
|
https://github.com/huggingface/transformers/pull/20727
| 1,490,998,503
|
PR_kwDOCUB6oc5FJOYP
| 20,727
|
Add custom stop token ids for generation
|
{
"login": "tokestermw",
"id": 4722119,
"node_id": "MDQ6VXNlcjQ3MjIxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4722119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tokestermw",
"html_url": "https://github.com/tokestermw",
"followers_url": "https://api.github.com/users/tokestermw/followers",
"following_url": "https://api.github.com/users/tokestermw/following{/other_user}",
"gists_url": "https://api.github.com/users/tokestermw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tokestermw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tokestermw/subscriptions",
"organizations_url": "https://api.github.com/users/tokestermw/orgs",
"repos_url": "https://api.github.com/users/tokestermw/repos",
"events_url": "https://api.github.com/users/tokestermw/events{/privacy}",
"received_events_url": "https://api.github.com/users/tokestermw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante ",
"Think we could actually allow `eos_token_id` to be both an integer and a list of integers no ? Both in the config and in the input. \r\n",
"Hi @tokestermw 👋 \r\n\r\nLike my colleagues, I also think this would be a helpful feature! I also agree with @patrickvonplaten, allowing the existing argument (`eos_token_id`) to also accept list of integers would result in a cleaner interface and fewer lines of code to maintain :) It is also to port to TF/FLAX, which do not use `StoppingCriterion`.\r\n\r\nIn a nutshell, if `eos_token_id` can be a list of integers, we can replace [the existing check](https://github.com/huggingface/transformers/blob/26dd041c6e45379141302e2d293ab4cd9cf805d4/src/transformers/generation/utils.py#L2154) with \r\n```python\r\nunfinished_sequences = unfinished_sequences.mul((sum(next_tokens == i for i in eos_token_id)).long())\r\n```\r\nas long as we always cast `eos_token_id` to a list before the generation loop. In other words, 2 lines of change (per generation method) would probably do the trick!\r\n\r\n@tokestermw WDYT?",
"Got it thanks for the suggestion! I can certainly make it so we use\neos_token_id.\n\nIt is also to port to TF/FLAX, which do not use StoppingCriterion.\n\nah good to know :)\n\nI can look at this again this weekend\n\n\n\nOn Fri, Dec 16, 2022 at 8:59 AM, Joao Gante ***@***.***>\nwrote:\n\n> Hi @tokestermw <https://github.com/tokestermw> [image: 👋]\n>\n> Like my colleagues, I also think this would be a helpful feature! I also\n> agree with @patrickvonplaten <https://github.com/patrickvonplaten>,\n> allowing the existing argument (eos_token_id) to also accept list of\n> integers would result in a cleaner interface and fewer lines of code to\n> maintain :) It is also to port to TF/FLAX, which do not use\n> StoppingCriterion.\n>\n> In a nutshell, if eos_token_id can be a list of integers, we can replace the\n> existing check\n> <https://github.com/huggingface/transformers/blob/26dd041c6e45379141302e2d293ab4cd9cf805d4/src/transformers/generation/utils.py#L2154>\n> with\n>\n> unfinished_sequences = unfinished_sequences.mul((sum(next_tokens == i for i in eos_token_id)).long())\n>\n> as long as we always cast eos_token_id to a list before the generation\n> loop. In other words, 2 lines of change (per generation method) would\n> probably do the trick!\n>\n> @tokestermw <https://github.com/tokestermw> WDYT?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/20727#issuecomment-1355219288>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABEA3R6MBS7EMZQVFCS4POTWNSNXPANCNFSM6AAAAAAS3PYPPE>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"Hi @gante,\r\n\r\n* Made `eos_token_id` into `Union[int, List[int]]` type. I convert into a list at the beginning of the respective functions. Also, looks like `eos_token_id` is used in a few more places, e.g. `beam_search.py`.\r\n* Some parts where we insert the `eos_token_id`, I only insert the first token id, [here](https://github.com/tokestermw/transformers/pull/1/files#diff-f208583b02617e3684c26a002cea6640d42ddd04bbd49eb55fa8f8460e701586R857) and [here](https://github.com/tokestermw/transformers/pull/1/files#diff-26783ca033d92b4ce2e01eb691dbf5b05dd972b0cbfdc69fc726cf77a9dcb011R1343)\r\n\r\nYou can see the changes here: https://github.com/tokestermw/transformers/pull/1/files\r\n\r\nIf this change looks good, I can merge into this PR, and start polishing (fixing tests, docs, remove dead code, etc.).\r\n\r\nthanks!\r\n",
"@tokestermw that's a comprehensive set of changes, it looks great to me! ❤️ ",
"I was actually thinking more about directly adapting all those lines: https://github.com/huggingface/transformers/blob/17292440c069118fbdb992b9a17da2098fab5b87/src/transformers/generation/utils.py#L2154\r\n\r\nTo **also** accept a list of `eos_token_ids` so that we don't even need a new criterion class for this but can just make it work out of the box by passing `model.generate(..., eos_token_id=[0, 1])`",
"Awesome this looks nice to me, @gante @sgugger ok for you?",
"Is this feature already available on the transformers version available through pip (4.25.1)? I have tried enabling it and the generation continued on even though I set `eos_token_id`, in my case to\r\n\r\n tokenizer.encode('\\n', return_tensors='pt')[1]\r\n\r\n(I'm also not sure why 2 integers are returned by `encode` instead of just 1)\r\n\r\n**EDIT**\r\n\r\nNevermind, I got it working\r\n\r\n n = tokenizer.encode('\\n', return_tensors='pt')[0][1]\r\n output = model.generate(input_ids, eos_token_id=n).cuda()"
] | 1,670
| 1,673
| 1,672
|
CONTRIBUTOR
| null |
### Update (using eos_token_id instead): https://github.com/huggingface/transformers/pull/20727#issuecomment-1355219288
# What does this PR do?
Hi 🤗 team!
This adds support for stop token ids, e.g. `model.generate(..., stop_token_ids=[10, 25])`, and syntactic sugar for the generation pipelines, e.g. `pipeline(..., stop_tokens=['\n'])`. When the generation detects the specified token ids for all examples in the batch, it stops.
Rationale
* It's common to set a stop id/token for text generation tasks. For example, in dialogue we may want to stop generation when the speaker changes.
* It's convenient to have arguments for stop tokens similar to `max_new_tokens` without digging into `StoppingCriteria`.
* Some servers like [DeepSpeed MII](https://github.com/microsoft/DeepSpeed-MII/issues/109) use gRPC, which makes it difficult to pass `StoppingCriteria` objects.
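As a pure-Python illustration (not the library implementation), the batch stopping behavior described above can be sketched as: each sequence is marked finished once its newly generated token matches any configured stop id, and generation halts only when every sequence in the batch is finished.

```python
# Illustrative sketch only: track which sequences in a batch are still unfinished.
# A sequence is marked finished (0) once its next token is one of the stop ids.
def update_unfinished(unfinished, next_tokens, stop_token_ids):
    return [u * (0 if tok in stop_token_ids else 1)
            for u, tok in zip(unfinished, next_tokens)]

unfinished = update_unfinished([1, 1, 1], [5, 10, 7], [10, 25])
print(unfinished)  # [1, 0, 1]
# Generation would stop only once all entries are 0:
print(all(u == 0 for u in unfinished))  # False
```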
## Usage Example
```python
# in pipeline
prompt = """Hello I believe in"""
text_generator = pipeline("text-generation", model="hf-internal-testing/tiny-random-gpt2", stop_tokens=[' fe'])
text_generator(prompt)
# from generate
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2")
gpt2_model = GPT2LMHeadModel.from_pretrained("hf-internal-testing/tiny-random-gpt2").to(torch_device)
input_ids = gpt2_tokenizer(prompt, return_tensors="pt").input_ids.to(torch_device)
stop_token_ids = gpt2_tokenizer.encode(" fe")
gpt2_model.generate(input_ids=input_ids, stop_token_ids=stop_token_ids)
```
## How to Test
```shell
pytest tests/generation/test_stopping_criteria.py::StoppingCriteriaTestCase::test_stop_token_id_criteria
pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_stop_token_ids_stopping_criteria
pytest tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_stop_token_ids_stopping_criteria
pytest tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_stop_tokens_stopping_criteria
```
## Related PR(s)
There is a `stop_sequence` argument for the `TextGeneration` pipeline: https://github.com/huggingface/transformers/pull/18444
However, it's limited to a single token, available only in the text generation pipeline, and overwrites `eos_token_id`. Instead, this PR uses `StoppingCriteria` directly.
This PR overlaps a bit with the above, so please let me know if this approach is not optimal.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). **(☢️ noting i've tried to update the docs from the instructions, but they don't seem correct)**
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20727/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20727/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20727",
"html_url": "https://github.com/huggingface/transformers/pull/20727",
"diff_url": "https://github.com/huggingface/transformers/pull/20727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20727.patch",
"merged_at": 1672777105000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20726
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20726/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20726/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20726/events
|
https://github.com/huggingface/transformers/pull/20726
| 1,490,183,074
|
PR_kwDOCUB6oc5FGTmc
| 20,726
|
Enable `decoder_attention_mask` in `generate` function
|
{
"login": "samuelpullely",
"id": 51292066,
"node_id": "MDQ6VXNlcjUxMjkyMDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/51292066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelpullely",
"html_url": "https://github.com/samuelpullely",
"followers_url": "https://api.github.com/users/samuelpullely/followers",
"following_url": "https://api.github.com/users/samuelpullely/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelpullely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelpullely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelpullely/subscriptions",
"organizations_url": "https://api.github.com/users/samuelpullely/orgs",
"repos_url": "https://api.github.com/users/samuelpullely/repos",
"events_url": "https://api.github.com/users/samuelpullely/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelpullely/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey! Make sure to run `make fixup` to pass all the quality tests 😉 ",
"Thank you both for the quick replies. I’m having difficulties setting up the development environment on my M1 Mac, similar to what is mentioned in #18355. I managed to run `make fixup` but there was still an error regarding TensorFlow. However, one style correction has been made. Please let me know if you have any other suggestions.",
"Style is okay now, but it seems like the `repo-consistency` is not. Running `make repo-consistency` should solve this ",
"Thanks for the guidance. A small change was required for `repo-consistency`. I think it should be fine now.",
"To add some context, I needed this feature when implementing the _self-debiasing_ method from [this paper](https://arxiv.org/abs/2103.00453) with `BartForConditionalGeneration`. \r\n\r\nThis method uses a prefix in order to change the probability distribution of the generated tokens towards a specific bias. Given this biased probability distribution, it is possible to adjust the probability distribution of the text without the prefix using `LogitsProcessor` such that the generated text is less biased. The following images are from the original paper:\r\n\r\n<p float=\"left\">\r\n <img src=\"https://user-images.githubusercontent.com/51292066/207309163-73dd40a1-99ad-449d-8220-1c2fcb5863ec.png\" width=\"400\" />\r\n <img src=\"https://user-images.githubusercontent.com/51292066/207308497-3f93fe02-a162-4f27-b7a1-6fee0a5861db.png\" width=\"400\" /> \r\n</p>\r\n\r\nWhen using this method with `BartForConditionalGeneration` we need to make sure that the generated tokens start at the same position for the text with and without the prefix because they are both processed in the same batch. To achieve this, it is necessary to manually specify `decoder_input_ids` where padding is applied to the left and padding tokens are ignored with `decoder_attention_mask`. \r\n\r\nTo illustrate this with an example, let's take the text `This guy is a jerk because he never listens` and mask the rude word jerk. As encoder input for `BartForConditionalGeneration`, we use \r\n\r\n```\r\n['This guy is a<mask> because he never listens', 'The following text contains rude, disrespectful, or unreasonable language:\\nThis guy is a<mask> because he never listens']\r\n```\r\n\r\nAs decoder input, we use\r\n\r\n```\r\n['', ''The following text contains rude, disrespectful, or unreasonable language:\\n']\r\n```\r\n\r\nPadding is applied to the decoder input such that the decoder starts generating new tokens at the same position for both inputs. 
For `decoder_input_ids`, we get\r\n\r\n```\r\n[[2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0], [2, 0, 133, 511, 2788, 6308, 21820, 6, 26401, 6, 50, 24941, 2777, 35, 50118]]\r\n```\r\n\r\nand ignore padding by using the following `decoder_attention_mask`\r\n\r\n```\r\n[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]\r\n```\r\n",
"Hi @samuelpullely 👋 \r\n\r\nThank you for the PR and the use-case example with self-debiasing! The PR looks good to me, but it's missing one small detail: tests 🤗 If possible, I'd like to ask you to add an integration test, perhaps in the [Bart integration tests section](https://github.com/huggingface/transformers/blob/76d02feadbc99bbccd86e67b02728338a2469f22/tests/models/bart/test_modeling_bart.py#L841). It can be a copy of your example above -- it would go a long way ensuring we don't regress. Let me know if you need a hand.",
"Hi @gante, thanks for your feedback! I’ve added an integration test. Please let me know if I should adjust anything.",
"@sgugger can we merge this PR? :)"
] | 1,670
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The documentation for `model_kwargs` in the generate function states that model specific keyword arguments will be forwarded to the `forward` function of the model.
https://github.com/huggingface/transformers/blob/4cf38148dc98b3df1df6eb2f06e4f02448026b19/src/transformers/generation/utils.py#L1216-L1219
However, this is currently not the case for the model specific keyword argument `decoder_attention_mask` when it is passed to the `generate` function.
This PR makes the necessary adjustments such that `decoder_attention_mask` can be passed to `generate` and will be used as an optional input in the `forward` function. More precisely, it is now possible to specify `decoder_input_ids` and `decoder_attention_mask` in the `generate` function for `BartForConditionalGeneration` such that some tokens in `decoder_input_ids` are masked.
To illustrate the change, we can use the (slightly modified) mask filling example from https://huggingface.co/docs/transformers/model_doc/bart#mask-filling
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
sentence = "UN Chief Says There Is No <mask> in Syria"
batch = tokenizer(sentence, return_tensors="pt")
padding_size = 3
decoder_input_ids = torch.tensor(
[[model.config.decoder_start_token_id] + padding_size * [model.config.pad_token_id] + [model.config.bos_token_id]],
dtype=torch.long
)
decoder_attention_mask = torch.where(decoder_input_ids==model.config.pad_token_id, 0, 1)
generated_ids = model.generate(input_ids=batch["input_ids"], use_cache=False, max_new_tokens=20,
decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask)
decoded = tokenizer.batch_decode(generated_ids)
for seq in decoded:
print(seq)
```
Note that `use_cache=False` is required when `decoder_input_ids` is manually specified.
Output without PR:
```
</s><pad><pad><pad><s><s>,</s>
```
Clearly, padding tokens are not ignored because `decoder_attention_mask` is not forwarded to the `forward` function. Hence, the strange output.
Output with PR:
```
</s><pad><pad><pad><s>UN Chief Says There Is No Plan B for Peace in Syria</s>
```
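As an aside, the left-padded `decoder_input_ids` / `decoder_attention_mask` pair used in the example can be sketched without torch. The special-token values below (decoder start = 2, pad = 1, bos = 0) are taken from the BART example above; other models may differ.

```python
# Sketch only: build left-padded decoder inputs plus a mask that ignores padding,
# assuming BART-style special tokens (decoder_start_token_id=2, pad_token_id=1,
# bos_token_id=0) as in the generate() example above.
def left_padded_decoder_inputs(decoder_start_id, pad_id, bos_id, padding_size):
    ids = [decoder_start_id] + padding_size * [pad_id] + [bos_id]
    mask = [0 if t == pad_id else 1 for t in ids]
    return ids, mask

ids, mask = left_padded_decoder_inputs(2, 1, 0, 3)
print(ids)   # [2, 1, 1, 1, 0]
print(mask)  # [1, 0, 0, 0, 1]
```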
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://discuss.huggingface.co/t/is-t5-expected-to-ignore-padding-tokens-in-decoder-input-ids-when-decoder-attention-mask-is-not-provided/10271
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20726/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20726",
"html_url": "https://github.com/huggingface/transformers/pull/20726",
"diff_url": "https://github.com/huggingface/transformers/pull/20726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20726.patch",
"merged_at": 1672757948000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20725
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20725/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20725/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20725/events
|
https://github.com/huggingface/transformers/pull/20725
| 1,489,721,665
|
PR_kwDOCUB6oc5FEojQ
| 20,725
|
Add TVLT
|
{
"login": "zinengtang",
"id": 82334223,
"node_id": "MDQ6VXNlcjgyMzM0MjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/82334223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zinengtang",
"html_url": "https://github.com/zinengtang",
"followers_url": "https://api.github.com/users/zinengtang/followers",
"following_url": "https://api.github.com/users/zinengtang/following{/other_user}",
"gists_url": "https://api.github.com/users/zinengtang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zinengtang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zinengtang/subscriptions",
"organizations_url": "https://api.github.com/users/zinengtang/orgs",
"repos_url": "https://api.github.com/users/zinengtang/repos",
"events_url": "https://api.github.com/users/zinengtang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zinengtang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks a lot for contributing this model! One first remark is that we tried to avoid all-capitals now in model classes to make all model names consistent across the library, so could you rename your classes to `TvltModel`, `TvltConfig` etc... (like `BertModel`, `BertConfig`).\r\n\r\nLeaving you in the hands of @sanchit-gandhi and @ArthurZucker to finish cleaning the PR and all the tests.",
"@NielsRogge @ArthurZucker \r\nI have a question. \r\nI converted the pixel_feature_extractor to imageprocessor class. Should I write a tester for imageprocessor but I couldn't find any example of it from other models.\r\nFeature extractor will be deprecated in v5 so I don't have to create one with mother class imageprocessor?",
"Another question is that ProcessorMixin is multimodal processor but initialization requires tokenizer\r\nBut TVLT only takes in vision and audio. Is there anyway or a processor that can handle this?\r\nWould it be good to propose to support combinations of two feature extractor in initialization of ProcessorMixin?",
"Hey! \r\n1. With regards to testing, you should indeed add a tester. While the `FeatureExctractor` were replaced with `ImageProcessors` I think we are still using the `test_feature_extraction_...` \r\n2. I think it indeed makes sense to support multiple `FeatureExtractors`, and it might also be good to support multiple tokenizers. We could also create a `MultiModalProcessorMixin` class, but it might be a bit too big for this PR. \r\nI think the best solution for now is to just make your class inherit from `PushToHubMixin` and copy the relevant functions accordingly! ",
"> With regards to testing, you should indeed add a tester. While the FeatureExctractor were replaced with ImageProcessors I think we are still using the test_feature_extraction_...\r\n\r\nWe can call them `test_image_processing_xxx.py` now, see #20716 as an example\r\n\r\n> I think it indeed makes sense to support multiple FeatureExtractors, and it might also be good to support multiple tokenizers. We could also create a MultiModalProcessorMixin class, but it might be a bit too big for this PR.\r\nI think the best solution for now is to just make your class inherit from PushToHubMixin and copy the relevant functions accordingly!\r\n\r\nThere's no need for a MultiModalProcessorMixin class I think, as the Processor class is exactly meant for multi-modal models. I think we just need to update processing_utils.py to handle the audio modality as well, cc @amyeroberts ",
"@NielsRogge @zinengtang Yes, a single `TVLTProcessor` class is the way to go to handle processing of multiple modalities. These processor classes already handle audio e.g. [wav2vec2](https://github.com/huggingface/transformers/blob/d1d3ac94033b6ea1702b203dcd74beab68d42d83/src/transformers/models/wav2vec2/processing_wav2vec2.py#L62) so I don't think anything needs to be done to the `ProcessorMixin`. It should work provided `attributes` is modified e.g. [for CLIPProcessor](https://github.com/huggingface/transformers/blob/d1d3ac94033b6ea1702b203dcd74beab68d42d83/src/transformers/models/clip/processing_clip.py#L38). ",
"_The documentation is not available anymore as the PR was closed or merged._",
"A summary of what I have committed following the comments:\r\n\r\n1. I addressed the comments by all reviewers. (Let me know if I miss anything.)\r\n2. Checked all files and makefile and passed the auto checks.\r\n3. Currently, the model input names are _pixel_value_ and _audio_value_. But audio models seem to be all using _input_features_. Should we update to _input_features_? But this makes inputs names combo a little more confusing. But I am good with either options.\r\n4. I added TvltForAudioVisualClassification. AudioVisualClassification could be good for many tasks like video emotion analysis or video-audio retrieval. For this, I am thinking about adding it as a new task. Audiovisual models is a growing community. This could be good for new audiovisual or vision+audio models in the future.\r\n\r\nLet me know if you have any suggestions. @ydshieh @ArthurZucker @NielsRogge ",
"> This is astounding work, @zinengtang!\r\n> \r\n> You seem to be very familiar with the Transformers code base already :D amazing job.\r\n> \r\n> I'll let @amyeroberts review the image processor, and asking our audio experts @sanchit-gandhi @ArthurZucker regarding the naming of the audio modality (whether we can go for audio_values or whether we should use something else).\r\n> \r\n> After that should be good to merge!\r\n\r\nThanks for the review! :)\r\nYou mentioned ViTMAE is more similar but it does not implement attention masks arguments. So, I reverted back to ViLT.\r\nLet me know if you have any other suggestions",
"> @zinengtang Thanks for this PR and adding this model! I've just made a few comments regarding the image processor. Overall looks good, mainly just a few formatting points.\r\n\r\nThanks so much for your review. They look good to me! I resolved all changes and you may check if you want. :)",
"Hi @zinengtang, could you push a commit to trigger the CI? Seems like not all tests are run and many are failing.\r\n\r\nAfter that, I'll assign one team member for a final review.\r\n\r\nThanks!",
"Hi @zinengtang, thanks for the amazing job! Btw I guess that some files were accidentally added in the vilt model directory (https://github.com/huggingface/transformers/pull/20725/files#diff-5342d82acaa480e404377ccc91a49b5203a3119b02c0dac89bcf147cb32e950e). Could you please check?\r\n\r\n<img width=\"370\" alt=\"image\" src=\"https://user-images.githubusercontent.com/18069263/213285759-fd9aab68-1df9-4ad9-99c8-f7ac9ea4f5f9.png\">\r\n",
"> Please make your sure you address all comments (if there is a reason they don't work, please reply!) otherwise we won't be able to merge this.\r\n\r\nRegarding this issue, I replied earlier that vitmae does not have implementation of attention_mask and similar models like videomae/vit also do not have. Therefore, I used ViLT.",
"> Regarding this issue, I replied earlier that vitmae does not have implementation of attention_mask and similar models like videomae/vit also do not have. Therefore, I used ViLT.\r\n\r\n@amyeroberts here is my older comments. Let me know if you have any questions.",
"@zinengtang Great, thank you for clarifying :) ",
"@sanchit-gandhi I have a question. How is it possible to move all masking code to TvltForPreTraining? The masking is done in encoding phase and therefore can only be done in TvltModel.",
"Another question is that @amyeroberts suggests me to put the random part in feature extractor. But it seems [wav2vec](https://github.com/huggingface/transformers/blob/f0fc7912980234f3711b261f13b4e77fa7a43fb5/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L133), as @sanchit-gandhi suggests, uses random in model forward phase. ",
"@zinengtang My suggestion was to make sure any randomness in the image processor could be controlled by the user -- a seed could be passed in and each instance has its own state -- however the randomness was already there. I agree with @sanchit-gandhi that moving it out into the modeling stage would be better and should make things cleaner :) ",
"Full motivations for moving the stochastic MAE masking into the modelling code can be found here: https://github.com/huggingface/transformers/pull/20725#discussion_r1090692949\r\n\r\n@zinengtang I see your point! So the way we can arrange it is:\r\n* We compute all the masks/permutations inside `TvltForPreTraining` \r\n* We pass these masks/permutations to `TvltModel`\r\n\r\nIn general, we can try and keep **as much** of the MAE training code in `TvltForPreTraining`. Where we require it in `TvltModel`, we can add it accordingly!",
"Thanks @NielsRogge I have addressed the comments. For audio-based VQA, it kinda depends on the users on how to use them but usually audio input should be the query/question.\r\n@amyeroberts @sanchit-gandhi thanks for the explanation! I have moved the masking generation to inside modeling file. Feel free to check if they match your expectation.",
"@amyeroberts Hey thanks for your suggestions!\r\nBut we cannot move masks to TvltForPreTraining since it has to be done in TvltModel. If we move masks from embedding to TvltModel, then we will have to break down TvltEmbeddings and directly use TvltPixelEmbeddings and TvltAudioEmbeddings.\r\nLet me know if these make sense.",
"> If we move masks from embedding to TvltModel, then we will have to break down TvltEmbeddings and directly use TvltPixelEmbeddings and TvltAudioEmbeddings.\r\n\r\n@zinengtang Yes, I think it makes sense to directly use `TvltPixelEmbeddings` and `TvltAudioEmbeddings` instead of having the `TvltEmbeddings` class. ",
"OK I found that all my past reviews are 'pending' and maybe they were never sent out lol, which is my bad.\r\nAnyway, I addressed the comments and let me know if they make sense @amyeroberts.",
"@NielsRogge what do you think about current state. Is there anything else left to address? Thanks!",
"@NielsRogge now I addressed the remaining comments. It makes sense to me that TvltForQuestionAnswering is not needed since it is the same as TvltForAudioVisualClassification.",
"@zinengtang Thanks for adding these final changes! I've left a comment on `test_feature_extraction_tvlt.py` on how to resolve one of the current issues on the circleCI run. The failing wav2vec2 tests have been resolved on main - rebasing should resolve them on this branch. \r\n\r\nOnce all the tests are green I think we're good to merge! ",
"@amyeroberts Sounds great. Btw there seems to be a fail from other models\r\nFAILED tests/models/hubert/test_modeling_tf_hubert.py::TFHubertRobustModelTest::test_dataset_conversion\r\nDo you think it comes from this branch or main branch?",
"@zinengtang It's coming from main, not this branch :) I've just pushed a change on main - #21643 - which should resolve this temporarily for now. Could you rebase to add it to this branch? ",
"@amyeroberts Now it passed the tests! Thanks so much for the help/suggestions all the way. :)",
"@zinengtang Thanks for all your work on adding this model, it's great to have it added to the repo! Make sure to spread the word about it being available :) "
] | 1,670
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Add TVLT to transformers. This PR implements an original version of TVLT: the Textless Vision-Language Transformer, from the paper at https://arxiv.org/abs/2209.14156
I have provided the model weights here https://huggingface.co/TVLT/tvlt-base
# Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Yes.
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? It is discussed on slack.
- [x] Did you make sure to update the documentation with your changes? I have added the documentation. Please check if they make sense.
- [x] Did you write any new necessary tests? Yes. Please check if they make sense.
# Who can review?
Anyone.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20725/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20725",
"html_url": "https://github.com/huggingface/transformers/pull/20725",
"diff_url": "https://github.com/huggingface/transformers/pull/20725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20725.patch",
"merged_at": 1676484630000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20724
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20724/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20724/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20724/events
|
https://github.com/huggingface/transformers/issues/20724
| 1,489,404,743
|
I_kwDOCUB6oc5YxoNH
| 20,724
|
Tutorial on token classification throws casting error in Tensorflow 2.11
|
{
"login": "gcelano",
"id": 5678472,
"node_id": "MDQ6VXNlcjU2Nzg0NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5678472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gcelano",
"html_url": "https://github.com/gcelano",
"followers_url": "https://api.github.com/users/gcelano/followers",
"following_url": "https://api.github.com/users/gcelano/following{/other_user}",
"gists_url": "https://api.github.com/users/gcelano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gcelano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcelano/subscriptions",
"organizations_url": "https://api.github.com/users/gcelano/orgs",
"repos_url": "https://api.github.com/users/gcelano/repos",
"events_url": "https://api.github.com/users/gcelano/events{/privacy}",
"received_events_url": "https://api.github.com/users/gcelano/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Can you please specify where is the error happen? Which step?",
"The error is thrown after `model.fit`",
"Okay I'll have a look",
"Also cc @Rocketknight1 ",
"Yeah, I should probably take this one. Investigating!",
"Managed to reproduce it.",
"This is actually a problem with our `AdamWeightDecay`, likely caused by the change in Keras optimizers in 2.11. Figuring out a fix now."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.9.0
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada, the tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` throws the following error in Tensorflow 2.11 but not in Tensorflow 2.9:
`(0) UNIMPLEMENTED: Cast string to float is not supported
[[{{node Cast_1}}]]
(1) CANCELLED: Function was cancelled before it was started
0 successful operations.
`
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` for Tensorflow
### Expected behavior
Training should start, but it does not.
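
Since a maintainer later traced this to `AdamWeightDecay` and the Keras optimizer rework in TF 2.11, one hedged workaround (an assumption on my side, not an official patch) is to fall back to the `tf.keras.optimizers.legacy` namespace on TF >= 2.11, which preserves the pre-2.11 optimizer API. A minimal version-gating sketch:

```python
# Sketch: choose which Keras optimizer namespace to import Adam-style
# optimizers from, based on the installed TensorFlow version.
# Assumption: tf.keras.optimizers.legacy.* (added in TF 2.11) keeps the
# old optimizer API that transformers' AdamWeightDecay subclasses.
def pick_adam_namespace(tf_version: str) -> str:
    """Return the Keras optimizer namespace to use for this TF version."""
    major, minor = (int(p) for p in tf_version.split(".")[:2])
    if (major, minor) >= (2, 11):
        return "tf.keras.optimizers.legacy"
    return "tf.keras.optimizers"

print(pick_adam_namespace("2.11.0"))  # tf.keras.optimizers.legacy
print(pick_adam_namespace("2.9.2"))   # tf.keras.optimizers
```

In practice this would mean importing `Adam` (or a weight-decay variant) from the returned namespace before passing it to `model.compile`.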
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20724/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20723
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20723/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20723/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20723/events
|
https://github.com/huggingface/transformers/pull/20723
| 1,488,665,119
|
PR_kwDOCUB6oc5FAxZ5
| 20,723
|
Add model resources for ViT
|
{
"login": "stanleycai95",
"id": 46463107,
"node_id": "MDQ6VXNlcjQ2NDYzMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/46463107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stanleycai95",
"html_url": "https://github.com/stanleycai95",
"followers_url": "https://api.github.com/users/stanleycai95/followers",
"following_url": "https://api.github.com/users/stanleycai95/following{/other_user}",
"gists_url": "https://api.github.com/users/stanleycai95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stanleycai95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stanleycai95/subscriptions",
"organizations_url": "https://api.github.com/users/stanleycai95/orgs",
"repos_url": "https://api.github.com/users/stanleycai95/repos",
"events_url": "https://api.github.com/users/stanleycai95/events{/privacy}",
"received_events_url": "https://api.github.com/users/stanleycai95/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds resources on ViT according to https://github.com/huggingface/transformers/issues/20055
Fixes https://github.com/huggingface/transformers/issues/20055 (partially)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20723/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20723",
"html_url": "https://github.com/huggingface/transformers/pull/20723",
"diff_url": "https://github.com/huggingface/transformers/pull/20723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20723.patch",
"merged_at": 1671476374000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20722
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20722/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20722/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20722/events
|
https://github.com/huggingface/transformers/pull/20722
| 1,488,626,454
|
PR_kwDOCUB6oc5FAoiB
| 20,722
|
Very small edit to change name to OpenAI GPT
|
{
"login": "stanleycai95",
"id": 46463107,
"node_id": "MDQ6VXNlcjQ2NDYzMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/46463107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stanleycai95",
"html_url": "https://github.com/stanleycai95",
"followers_url": "https://api.github.com/users/stanleycai95/followers",
"following_url": "https://api.github.com/users/stanleycai95/following{/other_user}",
"gists_url": "https://api.github.com/users/stanleycai95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stanleycai95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stanleycai95/subscriptions",
"organizations_url": "https://api.github.com/users/stanleycai95/orgs",
"repos_url": "https://api.github.com/users/stanleycai95/repos",
"events_url": "https://api.github.com/users/stanleycai95/events{/privacy}",
"received_events_url": "https://api.github.com/users/stanleycai95/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20722). All of your documentation changes will be reflected on that endpoint."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Adjusts a small typo in OpenAI GPT documentation
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20722/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20722",
"html_url": "https://github.com/huggingface/transformers/pull/20722",
"diff_url": "https://github.com/huggingface/transformers/pull/20722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20722.patch",
"merged_at": 1670856224000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20721
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20721/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20721/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20721/events
|
https://github.com/huggingface/transformers/issues/20721
| 1,488,451,964
|
I_kwDOCUB6oc5Yt_l8
| 20,721
|
Question-answering example datasets
|
{
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The examples are given as just that: examples. You will always need to slightly adapt them when changing the dataset as the data format might not be exactly the same. So there is no official list of possible datasets apart from the ones showcased in the README."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
In the [examples
](https://github.com/huggingface/transformers/tree/0bae286de94f7131b4a2db3f85754b0961c4aaf5/examples/pytorch/question-answering)
Could you please add the possible datasets for each script? I can find this information neither in the scripts nor in the README, especially for long inputs, since I saw from the scripts that you use a tokenizer for that. It would also be great if you could provide the possible models for each script, if possible.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20721/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20720
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20720/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20720/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20720/events
|
https://github.com/huggingface/transformers/pull/20720
| 1,488,373,662
|
PR_kwDOCUB6oc5E_vCO
| 20,720
|
Made LUKE Tokenizer independent from RoBERTa
|
{
"login": "salvo96",
"id": 19409488,
"node_id": "MDQ6VXNlcjE5NDA5NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19409488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/salvo96",
"html_url": "https://github.com/salvo96",
"followers_url": "https://api.github.com/users/salvo96/followers",
"following_url": "https://api.github.com/users/salvo96/following{/other_user}",
"gists_url": "https://api.github.com/users/salvo96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/salvo96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salvo96/subscriptions",
"organizations_url": "https://api.github.com/users/salvo96/orgs",
"repos_url": "https://api.github.com/users/salvo96/repos",
"events_url": "https://api.github.com/users/salvo96/events{/privacy}",
"received_events_url": "https://api.github.com/users/salvo96/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
Related to #19303
Removed dependency on RoBERTa tokenizer in LUKE tokenizer.
Hi @sgugger, could you check it?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20720/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20720",
"html_url": "https://github.com/huggingface/transformers/pull/20720",
"diff_url": "https://github.com/huggingface/transformers/pull/20720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20720.patch",
"merged_at": 1670854928000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20719
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20719/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20719/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20719/events
|
https://github.com/huggingface/transformers/pull/20719
| 1,488,085,043
|
PR_kwDOCUB6oc5E-tAy
| 20,719
|
fsdp fix
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the fix! Can you expand a bit the description so that users going back to your PR understand what's happening?\r\n\r\nDone, Thanks!\r\n "
] | 1,670
| 1,671
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
1. FSDP fix: Fixes https://github.com/huggingface/transformers/issues/18767. If this argument is not specified and ``module`` is on CPU, FSDP issues a warning mentioning that this argument can be specified for faster initialization.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20719/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20719",
"html_url": "https://github.com/huggingface/transformers/pull/20719",
"diff_url": "https://github.com/huggingface/transformers/pull/20719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20719.patch",
"merged_at": 1670857673000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20718
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20718/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20718/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20718/events
|
https://github.com/huggingface/transformers/issues/20718
| 1,488,006,923
|
I_kwDOCUB6oc5YsS8L
| 20,718
|
class BeamSearchScorer's finalize function
|
{
"login": "yupeijei1997",
"id": 39047479,
"node_id": "MDQ6VXNlcjM5MDQ3NDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/39047479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yupeijei1997",
"html_url": "https://github.com/yupeijei1997",
"followers_url": "https://api.github.com/users/yupeijei1997/followers",
"following_url": "https://api.github.com/users/yupeijei1997/following{/other_user}",
"gists_url": "https://api.github.com/users/yupeijei1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yupeijei1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yupeijei1997/subscriptions",
"organizations_url": "https://api.github.com/users/yupeijei1997/orgs",
"repos_url": "https://api.github.com/users/yupeijei1997/repos",
"events_url": "https://api.github.com/users/yupeijei1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/yupeijei1997/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,670
| 1,670
| 1,670
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20718/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20717
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20717/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20717/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20717/events
|
https://github.com/huggingface/transformers/pull/20717
| 1,487,697,588
|
PR_kwDOCUB6oc5E9SbR
| 20,717
|
Added resources for albert architecture
|
{
"login": "JuheonChu",
"id": 35699839,
"node_id": "MDQ6VXNlcjM1Njk5ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuheonChu",
"html_url": "https://github.com/JuheonChu",
"followers_url": "https://api.github.com/users/JuheonChu/followers",
"following_url": "https://api.github.com/users/JuheonChu/following{/other_user}",
"gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions",
"organizations_url": "https://api.github.com/users/JuheonChu/orgs",
"repos_url": "https://api.github.com/users/JuheonChu/repos",
"events_url": "https://api.github.com/users/JuheonChu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuheonChu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I tried \r\n`pip install \".[docs]\"`\r\n`doc-builder build transformers .\\docs\\source\\en\\model_doc\\albert.mdx --build_dir ~/tmp/test-build`\r\nto pass the currently failing _Build PR Documentation_ CI/CD test.\r\nHowever, I still fail. May I ask for any further suggestion?",
"@michaelbenayoun ,\r\nHello, this is Adia who is one of the co-authors of this PR. Do you mind if I ask for any suggestion to solve Build PR Documentation / build / build_pr_documentation (pull_request) CI/CD test? Thank you very much for your time. \r\n\r\nBest, \r\nAdia Wu",
"Hi, \r\nNot a specialist on the doc-builder but why did you inline all the `[[autodoc]]` lines? Also [this](https://github.com/huggingface/transformers/pull/20717/files#diff-338f8110a3c55b13adb4cdc9d5cf54fb5f5ee66f7242654a9c9fb53e49c99104L140) seems weird, here the plan is to document the `__call__` method, not to put `call` in bold.",
"Hello @michaelbenayoun, I think auto-formatting with Visual Studio Code resulted in inlining those [[autodoc]] code blocks. Do you think it is better to change those parts?\r\n",
"You should set it back to the original version.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,675
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Co-authored-by: Adia Wu <wua@dickinson.edu>
Co-authored-by: Mollerup23 <mollerup@dickinson.edu>
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20055
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20717/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20717",
"html_url": "https://github.com/huggingface/transformers/pull/20717",
"diff_url": "https://github.com/huggingface/transformers/pull/20717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20717.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20716
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20716/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20716/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20716/events
|
https://github.com/huggingface/transformers/pull/20716
| 1,487,607,767
|
PR_kwDOCUB6oc5E89t-
| 20,716
|
Add BLIP
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The PR is in a good shape! Would love to have a first round of review! 💪 ",
"Thanks so much @sgugger @NielsRogge for your review, I can confirm the model weights and model cards are all up!",
"Thanks very much for the review @NielsRogge !! ",
"The failing CI test seems to be related to https://github.com/huggingface/transformers/pull/20790 ",
"Awesome contribution! Thank you @younesbelkada.\r\n\r\nJust noticed Salesforce released BLIP2. Not sure how much work it would be to port to huggingface.\r\nhttps://github.com/salesforce/LAVIS/tree/main/projects/blip2\r\n\r\n"
] | 1,670
| 1,679
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
BLIP is a model from Salesforce capable of performing visual question answering, image captioning, and image-text retrieval. This model has also been used in several Stable Diffusion fine-tuned variants, such as Pokémon Stable Diffusion or Naruto Stable Diffusion, to generate text descriptions from images in order to create text-image paired datasets.
Original repo: https://github.com/salesforce/BLIP
- [x] add integration tests
- [x] Push weights
- [x] document everything
Users would be able to use Blip for three main usecases:
1- Conditional Generation (Image captioning):
```
from PIL import Image
import requests
from transformers import BlipForConditionalGeneration, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
text = "a picture of" # the prefix is optional
inputs = processor(image, text, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
>>> a picture of a woman and a dog sitting in a beach
```
1- bis Conditional Generation (Image captioning with no prefix!):
```
from PIL import Image
import requests
from transformers import BlipForConditionalGeneration, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
inputs = processor(image, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
>>> an image of a woman and a dog sitting in a beach
```
2- Visual question answering
```
from PIL import Image
import requests
from transformers import BlipForQuestionAnswering, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
question = ["How many dogs are in this image?"]
inputs = processor(image, question, return_tensors="pt")
output = model.generate(**inputs)
print(processor.decode(output[0], skip_special_tokens=True))
>>> 1
```
3- Image text retrieval (score matching)
```
import torch
from PIL import Image
import requests
from transformers import BlipForImageTextRetrieval, BlipProcessor
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
question = ["A picture of a woman with a dog sitting in a beach"]
inputs = processor(image, question, return_tensors="pt")
out_itm = model(**inputs, use_itm_head=True)
out = model(**inputs, use_itm_head=False)
print(out) # cosine similarity score
>>> 0.21
print(torch.nn.functional.softmax(out_itm[0], dim=1)[:, 1])
>>> 0.46
```
cc @NielsRogge
Fixes https://github.com/salesforce/LAVIS/issues/64
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20716/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20716",
"html_url": "https://github.com/huggingface/transformers/pull/20716",
"diff_url": "https://github.com/huggingface/transformers/pull/20716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20716.patch",
"merged_at": 1671611950000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20715
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20715/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20715/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20715/events
|
https://github.com/huggingface/transformers/pull/20715
| 1,487,393,782
|
PR_kwDOCUB6oc5E8NPa
| 20,715
|
Replaces xxx_required with requires_backends
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Why do we remove instead of keeping them for backward compatibility?\r\nThey can simply be implemented still as a decorator using `requires_backends`, without breaking existing code.",
"@xkszltl \r\n\r\nCould you open an issue with more details. Thank you in advance.",
"Issue filed:\r\n- https://github.com/huggingface/transformers/issues/25948"
] | 1,670
| 1,693
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Removes `torch_required` and `tf_required` decorators and replaces with more generic `requires_backends`.
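As a rough illustration of the pattern (this is a simplified sketch, not the actual `transformers` implementation — the availability check and decorator names here are assumptions), a generic backend-requirement helper might look like:

```python
import functools
import importlib.util


def _backend_available(name: str) -> bool:
    # A backend counts as available if its top-level module can be found.
    return importlib.util.find_spec(name) is not None


def requires_backends(obj, backends):
    """Raise ImportError if any required backend is missing (simplified sketch)."""
    name = getattr(obj, "__name__", str(obj))
    missing = [b for b in backends if not _backend_available(b)]
    if missing:
        raise ImportError(f"{name} requires the following backends: {', '.join(missing)}")


def requires(backends):
    # Decorator form, replacing per-backend decorators like `torch_required`.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            requires_backends(func, backends)
            return func(*args, **kwargs)

        return wrapper

    return decorator


@requires(["json"])  # stdlib module, always available, so calls succeed
def always_ok():
    return "ok"
```

The advantage over one decorator per backend is that a single helper handles any combination of backends and produces one consistent error message.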
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20715/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20715",
"html_url": "https://github.com/huggingface/transformers/pull/20715",
"diff_url": "https://github.com/huggingface/transformers/pull/20715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20715.patch",
"merged_at": 1671028724000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20714
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20714/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20714/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20714/events
|
https://github.com/huggingface/transformers/issues/20714
| 1,487,336,415
|
I_kwDOCUB6oc5YpvPf
| 20,714
|
AutoTokenizer.from_pretrained is confused by custom model configs
|
{
"login": "Craigacp",
"id": 729696,
"node_id": "MDQ6VXNlcjcyOTY5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/729696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Craigacp",
"html_url": "https://github.com/Craigacp",
"followers_url": "https://api.github.com/users/Craigacp/followers",
"following_url": "https://api.github.com/users/Craigacp/following{/other_user}",
"gists_url": "https://api.github.com/users/Craigacp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Craigacp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Craigacp/subscriptions",
"organizations_url": "https://api.github.com/users/Craigacp/orgs",
"repos_url": "https://api.github.com/users/Craigacp/repos",
"events_url": "https://api.github.com/users/Craigacp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Craigacp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I’ll have a look thanks for posting ",
"Bump for the stale bot.",
"Any updates on this?",
"I did not have time to dive on this, cc @younesbelkada do you think you can have a look,?",
"If there's some guidance or documentation on how unified the configuration stuff should be for custom models then I can look at fixing places where it's inconsistent, but I'm not sure how consistent your team want it to be, because I don't understand the full set of constraints.",
"I would mostly focus on \r\n```python \r\nNote that the first argument used when registering your custom config to [AutoConfig](https://huggingface.co/docs/transformers/v4.26.0/en/model_doc/auto#transformers.AutoConfig) needs to match the model_type of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the config_class of those models.\r\n```\r\nfrom [this](https://huggingface.co/docs/transformers/custom_models), but you probably already looked there",
"Yeah I think we're hitting the right points in the register calls, but then the loading side isn't fully respecting the custom dicts (or the register call is missing adding to a custom dict, not sure which). I can try to take a look next week.",
"Thanks for the issue @Craigacp \r\nA workaround to this is to hack the function `config_class_to_model_type` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/configuration_auto.py#L566-L571):\r\n```python\r\ndef config_class_to_model_type(config):\r\n \"\"\"Converts a config class name to the corresponding model type\"\"\"\r\n for key, cls in CONFIG_MAPPING_NAMES.items():\r\n if cls == config:\r\n return key\r\n return None\r\n```\r\nTo use `CONFIG_MAPPING` instead of `CONFIG_MAPPING_NAMES` as `CONFIG_MAPPING` gets updated when calling `.register`. \r\nYou can apply this quick hack with the snippet below without having to change the source code:\r\n```python\r\nfrom transformers import AutoTokenizer, PretrainedConfig, PreTrainedModel, CONFIG_MAPPING, TOKENIZER_MAPPING, BertTokenizer, BertTokenizerFast, MODEL_MAPPING\r\nfrom transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES\r\n\r\nclass MyModel(PreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n self.config = config\r\n self.my_custom_variable = config.my_custom_variable\r\n\r\n\r\nclass MyConfig(PretrainedConfig):\r\n model_type = \"my_model\"\r\n\r\n def __init__(self, **kwargs):\r\n super().__init__(**kwargs)\r\n\r\n self.vocab_size = 1000\r\n self.my_custom_variable = 42\r\n\r\nconfig = MyConfig()\r\nmodel = MyModel(config)\r\n\r\nCONFIG_MAPPING.register(\"my_model\", MyConfig.__name__)\r\nCONFIG_MAPPING_NAMES[\"my_model\"] = MyConfig.__name__\r\nMyConfig.register_for_auto_class()\r\nTOKENIZER_MAPPING.register(MyConfig, (BertTokenizer, BertTokenizerFast))\r\nMODEL_MAPPING.register(MyConfig, MyModel)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\", config=config)\r\n```\r\nBut I am clearly unsure about this approach here, also I don't really understand what do you want to achieve exactly, could you maybe share with us more details on what you are trying yo achieve?\r\nThanks \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
transformers: 4.23.1, OS: macOS x86, Python: 3.9
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When loading a tokenizer with `AutoTokenizer.from_pretrained("bert-base-uncased", config=my_model_config_instance)` it fails in tokenization_auto.py line 640 with:
```
File "venv/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 640, in <genexpr>
f"Model type should be one of {', '.join(c.__name__ for c in TOKENIZER_MAPPING.keys())}."
AttributeError: 'NoneType' object has no attribute '__name__'
```
I registered the custom model & config using:
```python
CONFIG_MAPPING.register("my_model", MyModelConfig)
TOKENIZER_MAPPING.register(MyModelConfig, (BertTokenizer, BertTokenizerFast))
MODEL_MAPPING.register(MyModelConfig, MyModelModel)
SPECIAL_MODEL_TYPE_TO_MODULE_NAME["my_model"] = "my_model.modeling_my_model"
```
### Expected behavior
I'd expect it to load `bert-base-uncased`. We always supply the config as we use this code for multiple models in `transformers` as well as our custom models and the loading code can't differentiate the two at the point it happens.
I think it's occurring because `configuration_auto.config_class_to_model_type()` doesn't respect `SPECIAL_MODEL_TYPE_TO_MODULE_NAME`, so it looks for a class called `MyModelConfig` in `CONFIG_MAPPING_NAMES` when it should be looking for `my_model.modeling_my_model.MyModelConfig`, and also `CONFIG_MAPPING_NAMES` and/or `config_class_to_model_type` doesn't seem to respect additional configs loaded in via `CONFIG_MAPPING.register()`.
The error still occurs if I download the files for the `bert-base-uncased` tokenizer and use the path to it on disk rather than the name `bert-base-uncased`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20714/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20713
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20713/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20713/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20713/events
|
https://github.com/huggingface/transformers/pull/20713
| 1,487,085,344
|
PR_kwDOCUB6oc5E7IKr
| 20,713
|
Add a progress bar for large model loading
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
As requested in #20669, this PR adds a progress bar when loading large models. The progress bar can be removed with `transformers.utils.disable_progress_bar()`.
Fixes #20669
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20713/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20713/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20713",
"html_url": "https://github.com/huggingface/transformers/pull/20713",
"diff_url": "https://github.com/huggingface/transformers/pull/20713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20713.patch",
"merged_at": 1670868776000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20712
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20712/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20712/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20712/events
|
https://github.com/huggingface/transformers/pull/20712
| 1,487,045,385
|
PR_kwDOCUB6oc5E6_Nk
| 20,712
|
Add vision requirement to image transforms
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"A quick question: If we add `requires_backends(center_crop, [\"vision\"])` in some methods in `src/transformers/image_transforms.py`, shouldn't we do the same in `src/transformers/image_utils.py`, say, `load_image`?\r\n",
"> A quick question: If we add `requires_backends(center_crop, [\"vision\"])` in some methods in `src/transformers/image_transforms.py`, shouldn't we do the same in `src/transformers/image_utils.py`, say, `load_image`?\r\n\r\nYep. I'll add those in too. "
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Addresses issue when importing functions in the `image_transforms` module if Pillow isn't installed. Namely:
* functions which don't require Pillow raised errors on import. This was because other objects, e.g. `ChannelDimension`, were only imported if Pillow was available. Resolved by rearranging imports.
* Unhelpful error message hiding the fact that the issue was Pillow missing from the environment. Resolved by adding a `vision_required` decorator so that errors are raised only for select functions.
Fixes #20627
This needs to be merged in before the transforms can be removed from the init in #20704
Snippet below shows new behaviour when running in an environment without Pillow installed:
```
>>> import numpy as np
>>>
>>> # Show we can import both methods without issue
>>> from transformers.image_transforms import rescale
>>> from transformers.image_transforms import center_crop
>>>
>>> img = np.random.randint(0, 256, (3, 244, 360))
>>>
>>> # We can call rescale successfully without having Pillow
>>> rescale_img = rescale(img, 10)
>>> print(rescale_img.shape)
(3, 244, 360)
>>> # center_crop call raises error
>>> cropped_img = center_crop(img, (100, 100))
Traceback (most recent call last):
File "/Users/amyroberts/code/transformers/../scripts/test_image_transforms_imports.py", line 10, in <module>
cropped_img = center_crop(img, (100, 100))
File "/Users/amyroberts/code/transformers/src/transformers/utils/import_utils.py", line 1073, in wrapper
raise ImportError(f"Method `{func.__name__}` requires Pillow.")
ImportError: Method `center_crop` requires Pillow.
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20712/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20712",
"html_url": "https://github.com/huggingface/transformers/pull/20712",
"diff_url": "https://github.com/huggingface/transformers/pull/20712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20712.patch",
"merged_at": 1670867026000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20711
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20711/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20711/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20711/events
|
https://github.com/huggingface/transformers/pull/20711
| 1,487,017,734
|
PR_kwDOCUB6oc5E65Dd
| 20,711
|
add model resources for CPMAnt
|
{
"login": "pioliverse",
"id": 119836898,
"node_id": "U_kgDOBySQ4g",
"avatar_url": "https://avatars.githubusercontent.com/u/119836898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pioliverse",
"html_url": "https://github.com/pioliverse",
"followers_url": "https://api.github.com/users/pioliverse/followers",
"following_url": "https://api.github.com/users/pioliverse/following{/other_user}",
"gists_url": "https://api.github.com/users/pioliverse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pioliverse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pioliverse/subscriptions",
"organizations_url": "https://api.github.com/users/pioliverse/orgs",
"repos_url": "https://api.github.com/users/pioliverse/repos",
"events_url": "https://api.github.com/users/pioliverse/events{/privacy}",
"received_events_url": "https://api.github.com/users/pioliverse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20711). All of your documentation changes will be reflected on that endpoint.",
"> Thanks for addressing some of the comments! Some tests are still failing for few reasons 1- You are assigning an attribute that uses `torch` on the config file, I suspect this is not needed 2- You need to make sure that integration tests are not failing, I suggest you to run `pytest tests/models/cpmant/test_modeling_cpmant.py` and understand why these tests are failing. Thanks again for your efforts!\r\n\r\nSome new tests have been added in ```test_modeling_cpmant.py``` ."
] | 1,670
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- **introduction**: [CPM-Ant](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) is an open-source Chinese pre-trained language model (PLM) with 10B parameters.
- **task**: We add code, tests, and docs for the CPM-Ant model. The model can be used for text generation with the "text-generation" pipeline.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20711/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20711/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20711",
"html_url": "https://github.com/huggingface/transformers/pull/20711",
"diff_url": "https://github.com/huggingface/transformers/pull/20711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20711.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20710
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20710/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20710/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20710/events
|
https://github.com/huggingface/transformers/pull/20710
| 1,486,992,886
|
PR_kwDOCUB6oc5E6ziO
| 20,710
|
Change a logic in pipeline test regarding TF
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante to this one - he's been working on a way to check for invalid indices even for embedding layers on GPU"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
- **Currently**, the tiny models in pipeline tests are created using `model_class(config)`, and for TF models, the weights are not created at this point. Then we set the device (**which turns out to be CPU instead of GPU!**), and the weights are created in CPU context --> we get the expected exceptions.
- In the future, we will use the tiny model from Hub repos
- The models are loaded using `from_pretrained`
- If GPU available, those weights are initialized in GPU context automatically for TF models, including embedding layers
- From what @Rocketknight1 says at the end, we won't get the expected exceptions in this situation (TF, layer weights loaded in GPU context)
**In order to use the tiny models from the Hub without any pipeline test failure, we will have to skip this check under the above described situation.**
From @Rocketknight1
> Embedding layers in Keras have different behaviours on CPU and GPU when you pass invalid indices. On CPU, the layer checks inputs and throws an error if you're out of range, but on GPU you just get a zeros tensor as output with no error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20710/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20710",
"html_url": "https://github.com/huggingface/transformers/pull/20710",
"diff_url": "https://github.com/huggingface/transformers/pull/20710.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20710.patch",
"merged_at": 1670935357000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20709
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20709/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20709/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20709/events
|
https://github.com/huggingface/transformers/issues/20709
| 1,486,963,868
|
I_kwDOCUB6oc5YoUSc
| 20,709
|
Keras finetune examples cannot generate hashable key
|
{
"login": "ZJaume",
"id": 11339330,
"node_id": "MDQ6VXNlcjExMzM5MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11339330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZJaume",
"html_url": "https://github.com/ZJaume",
"followers_url": "https://api.github.com/users/ZJaume/followers",
"following_url": "https://api.github.com/users/ZJaume/following{/other_user}",
"gists_url": "https://api.github.com/users/ZJaume/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZJaume/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZJaume/subscriptions",
"organizations_url": "https://api.github.com/users/ZJaume/orgs",
"repos_url": "https://api.github.com/users/ZJaume/repos",
"events_url": "https://api.github.com/users/ZJaume/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZJaume/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
"Derp, that's a problem with the example, I'll push a fix! The problem is that Keras recognizes `dict` objects but not our `BatchEncoding` returned by the `tokenizer`, even though `BatchEncoding` is a subclass of `dict`.\r\n\r\nIf you replace the last line with `model.fit(dict(tokenized_data), labels)` it should work.",
"Should be fixed by #20732",
"@Rocketknight1 I'm getting \" 'dict' object is not callable\" typeError when I used this solution, any idea why?",
"@baburz Sorry for the delay! Can you paste the exact code you ran?"
] | 1,670
| 1,679
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Use the Keras [example](https://huggingface.co/docs/transformers/training#train-a-tensorflow-model-with-keras)
```python
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
from tensorflow.keras.optimizers import Adam
from datasets import load_dataset
import tensorflow as tf
import numpy as np
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # Just take the training split for now
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
labels = np.array(dataset["label"]) # Label is already an array of 0 and 1
# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))
model.fit(tokenized_data, labels)
```
```python
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.
Traceback (most recent call last):
File "../test_mirrored.py", line 22, in <module>
model.fit(tokenized_data, labels)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_cache.py", line 117, in lookup
dispatch_key = self._dispatch_table.dispatch(key)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/type_dispatch.py", line 78, in dispatch
if request in self._dispatch_table:
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_cache.py", line 77, in __hash__
return hash((self.call_context, self.function_type))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_type.py", line 246, in __hash__
return hash(
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/polymorphism/function_type.py", line 106, in __hash__
return hash((self.name, self.kind, self.optional, self.type_constraint))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/trace_type/default_types.py", line 207, in __hash__
return hash(self.components)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/trace_type/default_types.py", line 207, in __hash__
return hash(self.components)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/core/function/trace_type/default_types.py", line 584, in __hash__
return hash((self.identifier, self.base))
ValueError: Cannot generate a hashable key for IteratorSpec(({'input_ids': TensorSpec(shape=(None, 47), dtype=tf.int64, name=None), 'token_type_ids': TensorSpec(shape=(None, 47), dtype=tf.int64, name=None), 'attention_mask': TensorSpec(shape=(None, 47), dtype=tf.int64, name=None)}, TensorSpec(shape=(None,), dtype=tf.int64, name=None)),) because the _serialize() method returned an unsupproted value of type <class 'transformers.tokenization_utils_base.BatchEncoding'>
```
I also had to change `dataset['text']` to `dataset['example']`
### Expected behavior
Train successfully.
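Per the maintainer comments, the root cause is that Keras's tracing machinery recognises plain `dict` objects but not `BatchEncoding`, even though `BatchEncoding` subclasses `dict`. A minimal stand-in (not the real `transformers` class) shows why strict type checks miss a subclass, and why the `model.fit(dict(tokenized_data), labels)` workaround helps:

```python
class BatchEncoding(dict):
    """Stand-in for transformers.BatchEncoding (a dict subclass)."""

data = BatchEncoding({"input_ids": [101, 2023, 102]})

print(isinstance(data, dict))    # True: it is a dict subclass
print(type(data) is dict)        # False: strict type checks miss it
print(type(dict(data)) is dict)  # True: converting yields a plain dict
```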
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20709/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20708
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20708/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20708/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20708/events
|
https://github.com/huggingface/transformers/pull/20708
| 1,486,941,608
|
PR_kwDOCUB6oc5E6oKi
| 20,708
|
Fix rendering issue in quicktour
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok so the extra line is what causes the issue, but then `black` is unhappy with the formatting.",
"Ok, so I just split the content in two blocks at the end, since `doc-builder` wants no more than one empty lines and `black` wants two :man_shrugging: "
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #20700. I think the two extra blank lines are what caused the problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20708/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20708",
"html_url": "https://github.com/huggingface/transformers/pull/20708",
"diff_url": "https://github.com/huggingface/transformers/pull/20708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20708.patch",
"merged_at": 1670611895000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20707
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20707/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20707/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20707/events
|
https://github.com/huggingface/transformers/pull/20707
| 1,486,897,370
|
PR_kwDOCUB6oc5E6eVy
| 20,707
|
[Whisper] Fix multilingual tokeniser
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20707). All of your documentation changes will be reflected on that endpoint.",
"This looks good, could you also push the normalizer file? 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, when we can expect this to be merged? The wrong normalisation basically makes everything that Whisper models gives from languages other than English useless. Thanks",
"Hey, no real timeline. Was dropped as this was not requested. If you really need this however, feel free to take the PR over ! 🤗 ",
"Hey @thomas-ferraz,\r\n\r\nWhat you can do as a temporary workaround is set `_normalize=False` when you decode with the tokenizer (the default behaviour:\r\n```python\r\ntranscription = processor.batch_decode(predicted_ids, skip_special_tokens=True)\r\n```\r\nIf you don't need normalisation, you can stop here - your transcriptions won't be normalised.\r\n\r\nIf you need normalisation, you can manually normalise using the `BasicTextNormalizer` (the **multilingual** normaliser) as follows:\r\n\r\n```python\r\nfrom transformers.models.whisper.english_normalizer import BasicTextNormalizer\r\n\r\nnormalizer = BasicTextNormalizer()\r\nnorm_transcriptions = [normalizer(input_str) for input_str in transcription]\r\n```\r\n\r\nHope that helps!",
"This should definitely be re-opened and merged. ",
"Will take a look when I get the chance!",
"Closing in favour of https://github.com/huggingface/transformers/pull/26149"
] | 1,670
| 1,694
| 1,694
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20703
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20707/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20707",
"html_url": "https://github.com/huggingface/transformers/pull/20707",
"diff_url": "https://github.com/huggingface/transformers/pull/20707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20707.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20706
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20706/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20706/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20706/events
|
https://github.com/huggingface/transformers/issues/20706
| 1,486,876,427
|
I_kwDOCUB6oc5Yn-8L
| 20,706
|
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"multi-gpu pre-training in one machine for Large GPT from scratch without horovod (**Model Parallelism**)",
"Closing as this survey has wrapped up"
] | 1,670
| 1,680
| 1,680
|
MEMBER
| null |
Thanks to all of you, Transformers just passed 75k :star2: last week!
Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face ecosystem. There were 25 new releases of `transformers`, 21 new releases of `datasets`, 13 new releases of `accelerate`.
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://docs.google.com/forms/d/e/1FAIpQLSf4xFQKtpjr6I_l7OfNofqiR8s-WG6tcNbkchDJJf5gYD72zQ/viewform?usp=sf_link)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20706/reactions",
"total_count": 15,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 15,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20706/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20705
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20705/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20705/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20705/events
|
https://github.com/huggingface/transformers/pull/20705
| 1,486,863,231
|
PR_kwDOCUB6oc5E6WxG
| 20,705
|
[`ViTHybrid`] fix last `accelerate` slow test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the last test that I forgot to run on a multi-GPU setup! Sorry for the multiple iterations.
Fixes:
1- https://github.com/huggingface/transformers/actions/runs/3650247864
2- in `test_model_parallelism` the test splits the model into different sub-modules, with respect to the class attribute `_no_split_modules`. Sometimes a manual device assignment is needed to avoid errors such as `tensors not on the same device`
3- Similar to https://github.com/huggingface/transformers/pull/19912 (check the last modification)
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20705/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20705",
"html_url": "https://github.com/huggingface/transformers/pull/20705",
"diff_url": "https://github.com/huggingface/transformers/pull/20705.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20705.patch",
"merged_at": 1670600792000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20704
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20704/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20704/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20704/events
|
https://github.com/huggingface/transformers/pull/20704
| 1,486,851,128
|
PR_kwDOCUB6oc5E6UGQ
| 20,704
|
Remove image_transforms functions from init
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Removes functions in the `image_transforms` module from `src/transformers/__init__.py`.
The functionality this module provides is utils for processing images, which are not the primary goal of the library.
Addresses an issue raised in #20627, which highlighted that some functions were in the init and others weren't.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20704/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20704",
"html_url": "https://github.com/huggingface/transformers/pull/20704",
"diff_url": "https://github.com/huggingface/transformers/pull/20704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20704.patch",
"merged_at": 1671013031000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20703
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20703/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20703/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20703/events
|
https://github.com/huggingface/transformers/issues/20703
| 1,486,836,597
|
I_kwDOCUB6oc5Yn1N1
| 20,703
|
[Whisper] Multilingual Tokeniser uses wrong normaliser
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I see, that’s indeed correct. Adding it to my whisper todo, this will probably make the tests fail again ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.1 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
### Who can help?
@sanchit-gandhi @ArthurZucker (cc @Vaibhavs10 for info)
### Reproduction
All of the English-only and multilingual Whisper models have `normalizer.json` files in their model repository's on the HF Hub, e.g. for Whisper tiny: https://huggingface.co/openai/whisper-tiny/blob/main/normalizer.json
This means that when we load the tokenisers for any of these models from pre-trained, we default to using the "English Text Normaliser" specified by this `normalizer.json` file:
https://github.com/huggingface/transformers/blob/7319850902ba9b2a44c36ccddd044f98abd9b597/src/transformers/models/whisper/tokenization_whisper.py#L300-L302
and then:
https://github.com/huggingface/transformers/blob/7319850902ba9b2a44c36ccddd044f98abd9b597/src/transformers/models/whisper/tokenization_whisper.py#L488
However, this English text normaliser should **only** be used for English-only models. A separate, "Basic Text Normaliser" should be used in the multilingual setting. This is in accordance with the official Whisper implementation and paper (see Appendix C on page 21 of the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf)).
In short, the English normaliser is **too stringent** for multilingual languages, removing diacritics and other linguistic features that change the meaning of the words:
```python
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
input_str = "Ça va?"
norm_str = tokenizer._normalize(input_str)
print("Un-normalised:", input_str)
print("Normalised: ", norm_str)
```
**Print Output**:
```
Un-normalised: Ça va?
Normalised: ca va
```
We should never apply normalisation that changes the meaning of the words to ASR output. This is not allowed when evaluating systems and is undesirable behaviour.
In the Whisper fine-tuning event, we've seen participants use the multilingual models, fine-tune on non-English languages, and normalise with the default normaliser, thus producing erroneous and invalid evaluation results that had to be repeated.
### Expected behavior
The basic text normaliser does not remove such diacritics, but does lower case and strip punctuation (as intended):
```python
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
normalizer = BasicTextNormalizer()
basic_norm_str = normalizer(input_str)
print("Un-normalised: ", input_str)
print("Basic normalised:", basic_norm_str)
```
**Print Output:**
```
Un-normalised: Ça va?
Basic normalised: ça va
```
We should revert to using the `BasicTextNormalizer` for the multilingual models. We can do this by:
1. Removing the English `normalizer.json` files from the multilingual Whisper models on the HF Hub
2. Defaulting to using the `BasicTextNormalizer` when the normaliser file is `None`
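The fallback in step 2 could be sketched roughly as follows (a hedged, self-contained illustration — the function names and the `english_spelling_mapping` argument are ours, not the actual `WhisperTokenizer` API; `basic_normalize` is only a rough stand-in for `BasicTextNormalizer`):

```python
# Sketch of the proposed behaviour: use the stringent English normaliser only
# when the checkpoint ships an English normalizer.json; otherwise fall back to
# a basic, diacritic-preserving normalisation.
import re
import unicodedata


def basic_normalize(text: str) -> str:
    """Rough stand-in for BasicTextNormalizer: lower-case and strip
    punctuation while keeping diacritics (meaning-preserving)."""
    text = unicodedata.normalize("NFC", text.lower())
    return re.sub(r"[^\w\s]", "", text).strip()


def normalize(text: str, english_spelling_mapping=None) -> str:
    if english_spelling_mapping is None:
        # Multilingual checkpoint: no normalizer.json -> basic normalisation
        return basic_normalize(text)
    # English-only checkpoint: the full EnglishTextNormalizer would go here
    raise NotImplementedError


print(normalize("Ça va?"))  # -> "ça va"
```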
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20703/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20702
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20702/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20702/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20702/events
|
https://github.com/huggingface/transformers/pull/20702
| 1,486,649,803
|
PR_kwDOCUB6oc5E5nR2
| 20,702
|
Fix bug: replace left over FE references
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Fixes failing Flava processor tests introduced by #20590 - replaces any existing feature extractor references with image processors.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20702/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20702",
"html_url": "https://github.com/huggingface/transformers/pull/20702",
"diff_url": "https://github.com/huggingface/transformers/pull/20702.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20702.patch",
"merged_at": 1670588640000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20701
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20701/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20701/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20701/events
|
https://github.com/huggingface/transformers/pull/20701
| 1,486,514,576
|
PR_kwDOCUB6oc5E5JFr
| 20,701
|
[Tests] Improve test_attention_outputs
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
I noticed several vision models override `test_attention_outputs`, but we have a `has_attentions` flag exactly for this purpose.
Hence, I've adapted the test in `test_modeling_common.py` and `test_modeling_tf_common.py` to improve this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20701/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20701",
"html_url": "https://github.com/huggingface/transformers/pull/20701",
"diff_url": "https://github.com/huggingface/transformers/pull/20701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20701.patch",
"merged_at": 1671025301000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20699
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20699/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20699/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20699/events
|
https://github.com/huggingface/transformers/pull/20699
| 1,486,433,900
|
PR_kwDOCUB6oc5E43OK
| 20,699
|
Use `config.layer_norm_eps` in some `nn.LayerNorm`.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just to confirm:\r\n\r\n> - these changes don't have an impact on the integration tests, right? This only results in different outputs when one would train these models from scratch?\r\n\r\nIt has impact even for integration tests: the change is the **constant** `eps` being changed, which affects the forward call both in inference as well as in training.\r\n\r\n\r\n- technically we should check the `layer_norm_eps` for each paper to match the original code\r\n\r\nI agree - but I am not sure, for recent models, if all these attributes are set according to the papers, or people just used add new model like templates ...",
"This is too breaking I think. We need to be more careful on new models added that this attribute is consistently used but I don't think we should touch old models like this as it will change the results of the forward.",
"OK! I will keep this list of models to skip in the WIP PR where we add a test for checking unused config attributes.",
"Close as it is too breaking!"
] | 1,670
| 1,675
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
⚠️⚠️⚠️ This changes the `eps` of those LayerNorm layers from (the default) `1e-5` to `1e-12`, and the outputs will differ slightly before/after this PR. ⚠️⚠️⚠️
----
Similar to #20554, but this time instead of removing the attribute from config, we use `config.layer_norm_eps` in some `nn.LayerNorm`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20699/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20699",
"html_url": "https://github.com/huggingface/transformers/pull/20699",
"diff_url": "https://github.com/huggingface/transformers/pull/20699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20699.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20698
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20698/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20698/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20698/events
|
https://github.com/huggingface/transformers/issues/20698
| 1,486,116,667
|
I_kwDOCUB6oc5YlFc7
| 20,698
|
How to use pipeline for Custom token-classification model?
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should use subclasses of `PreTrainedModel` (and `PretrainedConfig` if necessary). Here is the [documentation](https://huggingface.co/docs/transformers/create_a_model) on that and [here](https://huggingface.co/docs/transformers/add_new_pipeline) is an example of custom pipeline if you need it.",
"@sgugger , I am kind of in a similar situation, I looked up the custom `Pipeline` class, I wanted to understand if there is additional documentation of the methods in the class, I do see `_sanitary_parameter`, is it mostly to add preprocessing inputs ?, for context I am trying to train a custom NER model as well.",
"@sgugger , never mind I was able to write my own postprocess and preprocess function and also inherit a of the implementation from task specific classes, thank you.",
"> @sgugger , never mind I was able to write my own postprocess and preprocess function and also inherit a of the implementation from task specific classes, thank you.\r\n\r\nHi @rajathpatel23 could you please upload your code/notebook for reference?",
"Hi @pratikchhapolika , I will try to upload the the notebook soon over the weekend",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
**Model description**
I added a simple custom `pytorch-crf` layer on top of a `TokenClassification` model to make it more robust.
I trained the model successfully, but when I test it, the folder doesn't have a `config.json` file inside it, so the `pipeline` function raises:
`AttributeError: 'BERT_CRF' object has no attribute 'config'`
**CODE**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchcrf import CRF  # pytorch-crf


class BERT_CRF(nn.Module):
    def __init__(self, bert_model, num_labels):
        super(BERT_CRF, self).__init__()
        self.bert = bert_model
        self.dropout = nn.Dropout(0.25)
        self.classifier = nn.Linear(768, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        # Average the last four hidden states
        sequence_output = torch.stack(
            (outputs[1][-1], outputs[1][-2], outputs[1][-3], outputs[1][-4])
        ).mean(dim=0)
        sequence_output = self.dropout(sequence_output)
        emission = self.classifier(sequence_output)  # [32, 256, 17]
        if labels is not None:
            labels = labels.reshape(attention_mask.size()[0], attention_mask.size()[1])
            loss = -self.crf(
                F.log_softmax(emission, 2),
                labels,
                mask=attention_mask.type(torch.uint8),
                reduction="mean",
            )
            prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
            return [loss, prediction]
        prediction = self.crf.decode(emission, mask=attention_mask.type(torch.uint8))
        return prediction
```
```python
tokenizer = AutoTokenizer.from_pretrained("fine-tuned_model",model_max_length=256)
bert_model = BertForTokenClassification.from_pretrained('spanbert_base',id2label=id2label,label2id=label2id)
bert_model.config.output_hidden_states=True
model = BERT_CRF(bert_model, num_labels=21)
model.load_state_dict(torch.load("fine-tuned_model/pytorch_model.bin"))
model.eval()
```
`token_classifier = pipeline("token-classification", model=model, aggregation_strategy="max",tokenizer=tokenizer,grouped_entities=True)`
`AttributeError: 'BERT_CRF' object has no attribute 'config'`
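For reference, the error occurs because the pipeline reads `model.config`, which plain `nn.Module` subclasses do not carry. A minimal self-contained sketch of one workaround (dummy classes, not the real transformers API — subclassing `PreTrainedModel`, as the docs suggest, remains the more robust route):

```python
# Dummy stand-ins to illustrate exposing the inner model's config on the
# wrapper, so that pipeline-style code doing `model.config` does not raise.
class DummyConfig:
    id2label = {0: "O", 1: "B-PER"}


class InnerModel:  # stands in for BertForTokenClassification
    def __init__(self):
        self.config = DummyConfig()


class BertCrfWrapper:  # stands in for the custom BERT_CRF module
    def __init__(self, inner):
        self.bert = inner
        # Re-expose the inner model's config on the wrapper
        self.config = inner.config


model = BertCrfWrapper(InnerModel())
print(hasattr(model, "config"))      # -> True
print(model.config.id2label[0])      # -> "O"
```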
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20698/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20697
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20697/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20697/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20697/events
|
https://github.com/huggingface/transformers/pull/20697
| 1,486,019,473
|
PR_kwDOCUB6oc5E3Zhw
| 20,697
|
Added resources on albert model
|
{
"login": "JuheonChu",
"id": 35699839,
"node_id": "MDQ6VXNlcjM1Njk5ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuheonChu",
"html_url": "https://github.com/JuheonChu",
"followers_url": "https://api.github.com/users/JuheonChu/followers",
"following_url": "https://api.github.com/users/JuheonChu/following{/other_user}",
"gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions",
"organizations_url": "https://api.github.com/users/JuheonChu/orgs",
"repos_url": "https://api.github.com/users/JuheonChu/repos",
"events_url": "https://api.github.com/users/JuheonChu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuheonChu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thank you @younesbelkada ! Would you mind if I ask you how I can pass the \r\nci/circleci: tests_pipelines_tf tests?\r\n\r\nI tried\r\n`pip3 install --upgrade pip`\r\n`pip3 install --upgrade tensorflow`",
"Thanks for your PR! Could you focus it solely on the new resources added? There are multiple changes that are not desired.",
"Thank you! Will try! \r\nSo, I deleted those undesired behaviors, and now I will look for more resources to add!\r\n\r\n",
"Do you mind if I open a new Pull Request in order to contain only meaningful commits?",
"Yes please, that'd be great!"
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Co-author: @Adia Wu <wua@dickinson.edu>
Fixes #20055
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@stevhliu @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20697/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20697",
"html_url": "https://github.com/huggingface/transformers/pull/20697",
"diff_url": "https://github.com/huggingface/transformers/pull/20697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20697.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20696
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20696/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20696/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20696/events
|
https://github.com/huggingface/transformers/pull/20696
| 1,485,881,248
|
PR_kwDOCUB6oc5E259k
| 20,696
|
added model resources for CPMAnt
|
{
"login": "pioliverse",
"id": 119836898,
"node_id": "U_kgDOBySQ4g",
"avatar_url": "https://avatars.githubusercontent.com/u/119836898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pioliverse",
"html_url": "https://github.com/pioliverse",
"followers_url": "https://api.github.com/users/pioliverse/followers",
"following_url": "https://api.github.com/users/pioliverse/following{/other_user}",
"gists_url": "https://api.github.com/users/pioliverse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pioliverse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pioliverse/subscriptions",
"organizations_url": "https://api.github.com/users/pioliverse/orgs",
"repos_url": "https://api.github.com/users/pioliverse/repos",
"events_url": "https://api.github.com/users/pioliverse/events{/privacy}",
"received_events_url": "https://api.github.com/users/pioliverse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Add some description to the model."
] | 1,670
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- **introduction**: [CPM-Ant](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live) is an open-source Chinese pre-trained language model (PLM) with 10B parameters.
- **task**: We add code, tests, and docs for the CPM-Ant model. The model can be used for text generation with the `"text-generation"` pipeline.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20696/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20696",
"html_url": "https://github.com/huggingface/transformers/pull/20696",
"diff_url": "https://github.com/huggingface/transformers/pull/20696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20696.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20695
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20695/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20695/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20695/events
|
https://github.com/huggingface/transformers/issues/20695
| 1,485,875,447
|
I_kwDOCUB6oc5YkKj3
| 20,695
|
NER example
|
{
"login": "jiaqianjing",
"id": 16071449,
"node_id": "MDQ6VXNlcjE2MDcxNDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/16071449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqianjing",
"html_url": "https://github.com/jiaqianjing",
"followers_url": "https://api.github.com/users/jiaqianjing/followers",
"following_url": "https://api.github.com/users/jiaqianjing/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqianjing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqianjing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqianjing/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqianjing/orgs",
"repos_url": "https://api.github.com/users/jiaqianjing/repos",
"events_url": "https://api.github.com/users/jiaqianjing/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqianjing/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,670
| 1,670
| 1,670
|
NONE
| null |
https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/examples/pytorch/token-classification/run_ner_no_trainer.py#L601
I think modifying it this way would be better.
```python
true_predictions = [
[label_list[p] for (p, l) in zip(prediction, label) if p != -100]
for prediction, label in zip(predictions, labels)
]
```
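For context, a tiny self-contained run of the pattern in question (dummy data; note that in the example script the `-100` padding value lives in the *labels*, which is where the mask is applied):

```python
# Filter out positions whose label is the -100 padding value, then map the
# remaining prediction ids to their string labels.
label_list = ["O", "B-PER", "I-PER"]
predictions = [[0, 1, 2, 0]]
labels = [[0, 1, -100, 0]]

true_predictions = [
    [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
    for prediction, label in zip(predictions, labels)
]
print(true_predictions)  # -> [['O', 'B-PER', 'O']]
```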
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20695/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20694
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20694/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20694/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20694/events
|
https://github.com/huggingface/transformers/issues/20694
| 1,485,841,299
|
I_kwDOCUB6oc5YkCOT
| 20,694
|
A bug with subscript while maintaining beam_indices.
|
{
"login": "jicoder-nwpu",
"id": 46379875,
"node_id": "MDQ6VXNlcjQ2Mzc5ODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/46379875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jicoder-nwpu",
"html_url": "https://github.com/jicoder-nwpu",
"followers_url": "https://api.github.com/users/jicoder-nwpu/followers",
"following_url": "https://api.github.com/users/jicoder-nwpu/following{/other_user}",
"gists_url": "https://api.github.com/users/jicoder-nwpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jicoder-nwpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jicoder-nwpu/subscriptions",
"organizations_url": "https://api.github.com/users/jicoder-nwpu/orgs",
"repos_url": "https://api.github.com/users/jicoder-nwpu/repos",
"events_url": "https://api.github.com/users/jicoder-nwpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jicoder-nwpu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante, but note that there is little we can do to help without a reproducer of the bug.",
"Hey @aj666aj 👋 \r\n\r\nCan I have a script to reproduce your problem? I'll struggle to pinpoint the issue without it, which will make me deprioritize this issue :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
#### This is the location of the bug: https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/generation/utils.py#L2875
#### The problem caused by this bug:
I want to get all `decoder_hidden_states` (with dimensions [max_seq_len * (1 + decoder_layers) * (num_beam * batch_size) * embedding_size]) and `beam_indices` to keep track of the hidden states of each generated word.
After some examples stop early (an \<eos\> is generated), some `beam_indices` incorrectly become zero. This bug is caused by the wrong subscript (`beam_indices[beam_idx[i]]`) when the model updates the `beam_indices`; it should be `beam_indices[i]`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20694/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20693
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20693/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20693/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20693/events
|
https://github.com/huggingface/transformers/issues/20693
| 1,485,835,477
|
I_kwDOCUB6oc5YkAzV
| 20,693
|
torch.jit.script for BertEmbeddings works on MacOS but fails on Linux
|
{
"login": "priyamtejaswin",
"id": 8805316,
"node_id": "MDQ6VXNlcjg4MDUzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8805316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/priyamtejaswin",
"html_url": "https://github.com/priyamtejaswin",
"followers_url": "https://api.github.com/users/priyamtejaswin/followers",
"following_url": "https://api.github.com/users/priyamtejaswin/following{/other_user}",
"gists_url": "https://api.github.com/users/priyamtejaswin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/priyamtejaswin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/priyamtejaswin/subscriptions",
"organizations_url": "https://api.github.com/users/priyamtejaswin/orgs",
"repos_url": "https://api.github.com/users/priyamtejaswin/repos",
"events_url": "https://api.github.com/users/priyamtejaswin/events{/privacy}",
"received_events_url": "https://api.github.com/users/priyamtejaswin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Looks like the inconsistency comes from PyTorch, if all versions are the same.",
"Thanks @sgugger . I have a temporary workaround on Linux. Will raise it on the PyTorch forums.",
"> Thanks @sgugger . I have a temporary workaround on Linux. Will raise it on the PyTorch forums.\r\n\r\nHow did you solve this problem? I also encountered the same problem. Can you ask me for experience?\r\n\r\n"
] | 1,670
| 1,678
| 1,670
|
NONE
| null |
### System Info
transformers : 4.23.1
torch : 1.13.0
python : 3.9.13
**Versions are the same on MacOS and Linux.**
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
###### On MacOS ######
import torch
from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings
config = BertConfig()
embed = BertEmbeddings(config)
scripted = torch.jit.script(embed)
print(scripted)
OUTPUT = """
/opt/anaconda3/envs/mobi/lib/python3.9/site-packages/torch/jit/annotations.py:299: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either.
warnings.warn("TorchScript will treat type annotations of Tensor "
RecursiveScriptModule(
original_name=BertEmbeddings
(word_embeddings): RecursiveScriptModule(original_name=Embedding)
(position_embeddings): RecursiveScriptModule(original_name=Embedding)
(token_type_embeddings): RecursiveScriptModule(original_name=Embedding)
(LayerNorm): RecursiveScriptModule(original_name=LayerNorm)
(dropout): RecursiveScriptModule(original_name=Dropout)
)
"""
```
```python
###### On Linux ######
import torch
from transformers.models.bert.modeling_bert import BertConfig, BertEmbeddings
config = BertConfig()
embed = BertEmbeddings(config)
scripted = torch.jit.script(embed)
print(scripted)
OUTPUT = """
/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/annotations.py:299: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either.
warnings.warn("TorchScript will treat type annotations of Tensor "
Traceback (most recent call last):
File "/ocean/projects/tra220029p/tejaswin/ViLT/script_bertembeddings.py", line 7, in <module>
scripted = torch.jit.script(embed)
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_script.py", line 1286, in script
return torch.jit._recursive.create_script_module(
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_recursive.py", line 476, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_recursive.py", line 542, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/torch/jit/_recursive.py", line 393, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
'Optional[Tensor]' object has no attribute or method 'size'.:
File "/ocean/projects/tra220029p/tejaswin/miniconda3/envs/mobile/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 212
input_shape = input_ids.size()
else:
input_shape = inputs_embeds.size()[:-1]
~~~~~~~~~~~~~~~~~~ <--- HERE
seq_length = input_shape[1]
"""
```
### Expected behavior
The code should work on both platforms.
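As a hedged sketch of the usual TorchScript workaround for this class of error (not a patch to the transformers source, and the function name is illustrative): narrow the `Optional[Tensor]` to `Tensor` with an explicit `None` check before calling `.size()`, which the TorchScript compiler recognizes as a type refinement:

```python
from typing import List, Optional

import torch


@torch.jit.script
def input_shape(
    input_ids: Optional[torch.Tensor], inputs_embeds: Optional[torch.Tensor]
) -> List[int]:
    if input_ids is not None:
        return input_ids.size()
    # the assert refines Optional[Tensor] -> Tensor for the TorchScript compiler
    assert inputs_embeds is not None
    return inputs_embeds.size()[:-1]


print(input_shape(torch.zeros(2, 3), None))       # [2, 3]
print(input_shape(None, torch.zeros(2, 3, 768)))  # [2, 3]
```

Why the same source compiles on one platform and not the other is unclear from the versions alone, but adding the refinement makes the code unambiguous for the scripting compiler either way.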
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20693/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20700
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20700/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20700/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20700/events
|
https://github.com/huggingface/transformers/issues/20700
| 1,486,469,574
|
I_kwDOCUB6oc5YmbnG
| 20,700
|
[docs] transformers/quicktour rendering issue
|
{
"login": "scott-vsi",
"id": 5100631,
"node_id": "MDQ6VXNlcjUxMDA2MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5100631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scott-vsi",
"html_url": "https://github.com/scott-vsi",
"followers_url": "https://api.github.com/users/scott-vsi/followers",
"following_url": "https://api.github.com/users/scott-vsi/following{/other_user}",
"gists_url": "https://api.github.com/users/scott-vsi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scott-vsi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scott-vsi/subscriptions",
"organizations_url": "https://api.github.com/users/scott-vsi/orgs",
"repos_url": "https://api.github.com/users/scott-vsi/repos",
"events_url": "https://api.github.com/users/scott-vsi/events{/privacy}",
"received_events_url": "https://api.github.com/users/scott-vsi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Transfering to the transformers repo as it's more relevant there cc @sgugger ",
"Should be fixed by the PR mentioned above.",
"Probably not worth looking at more, but just by way of comparison, there is multiple returns on this page that seem to work (again referencing `dataset.map`):\r\n\r\nhttps://huggingface.co/docs/transformers/training#prepare-a-dataset\r\nhttps://github.com/huggingface/transformers/blob/74330083b5eea68eadf25bcbad3bcbb094f60c57/docs/source/en/training.mdx?plain=1#L50-L54\r\n\r\nI have no idea if it matters, but when viewed as markdown, there are some sections of the quickstart that don't render right. E.g., see the\r\n\r\n```\r\n```py >>> pt_batch = tokenizer( ... [\"We are very happy to show you the...\r\n```\r\n\r\nin this [section](https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/docs/source/en/quicktour.mdx#autotokenizer)\r\n\r\n",
"Thanks for raising this issue! \r\n\r\nIt isn't as important when viewed as Markdown because we use a special syntax (`<frameworkcontent>`) to generate the PyTorch and TensorFlow code blocks in our docs."
] | 1,670
| 1,670
| 1,670
|
NONE
| null |
There is a rendering issue in https://huggingface.co/docs/transformers/quicktour#trainer-a-pytorch-optimized-training-loop
step 5.
The markdown looks like [this](https://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/docs/source/en/quicktour.mdx?plain=1#L445-L451)
```python
>>> def tokenize_dataset(dataset):
... return tokenizer(dataset["text"])
>>> dataset = dataset.map(tokenize_dataset, batched=True)
```
but `dataset = ...` renders as a separate triple-indented quote
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20700/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20692
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20692/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20692/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20692/events
|
https://github.com/huggingface/transformers/issues/20692
| 1,485,750,658
|
I_kwDOCUB6oc5YjsGC
| 20,692
|
A (possible) bug of sentence permutation in BART pretraining script
|
{
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @StevenTang1998!\r\n\r\nGreat question! In our Flax pre-training script, use the PAD token as the end of sentence indicator:\r\nhttps://github.com/huggingface/transformers/blob/76924384af6081e58460231c3c637f9c83cabf97/examples/flax/language-modeling/run_bart_dlm_flax.py#L626-L627\r\n\r\nThis way, the PAD token is appended after the end of a sentence (full stop, exclamation marks, question marks etc) and used to indicate sentence boundaries. So in splitting a document by the PAD token, we are in effect splitting sentences on end of sentence punctuation (equivalent to the original paper!).\r\n\r\nHope this addresses your concerns! Let me know if you have any questions and I'd be happy to answer 🤗",
"Thanks for kind reply! I understand it now!\r\n\r\nBut I have a concern that the PAD token may affect the results (through position embedding?) although it will not be attend to.",
"Hey @StevenTang1998! Glad to hear that clarified things! It should be fine since we build the attention mask based on the position of the PAD token ids :)",
"OK, thank you for your reply! Wish you a good day!"
] | 1,670
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
### System Info
None
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
As described in the [original BART paper](https://arxiv.org/pdf/1910.13461.pdf), the sentence permutation should be: A document is divided into sentences based on **full stops**, and these sentences are shuffled in a random order.
However, in the [examples/flax/language-modeling](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_bart_dlm_flax.py#L318), it uses pad token as the `end_sentence_mask`.
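A small sketch of why the two formulations can coincide (the `<pad>` marker and regex below are illustrative, not the script's exact preprocessing): if a pad marker is appended after every sentence-ending punctuation mark during preprocessing, then splitting the document on the pad token is equivalent to splitting on full stops:

```python
import re

PAD = "<pad>"
doc = "First sentence. Second one! Third?"

# append the pad marker after end-of-sentence punctuation
marked = re.sub(r"([.!?])", r"\1" + PAD, doc)

# splitting on the pad marker then recovers the original sentences
sentences = [s.strip() for s in marked.split(PAD) if s.strip()]
print(sentences)  # ['First sentence.', 'Second one!', 'Third?']
```

Under this convention, an `end_sentence_mask` built from the pad token marks the same boundaries that punctuation-based splitting would.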
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20692/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20691
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20691/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20691/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20691/events
|
https://github.com/huggingface/transformers/issues/20691
| 1,485,419,960
|
I_kwDOCUB6oc5YibW4
| 20,691
|
Doing data preprocessing in a separated run
|
{
"login": "fahad7033",
"id": 52211706,
"node_id": "MDQ6VXNlcjUyMjExNzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/52211706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fahad7033",
"html_url": "https://github.com/fahad7033",
"followers_url": "https://api.github.com/users/fahad7033/followers",
"following_url": "https://api.github.com/users/fahad7033/following{/other_user}",
"gists_url": "https://api.github.com/users/fahad7033/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fahad7033/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fahad7033/subscriptions",
"organizations_url": "https://api.github.com/users/fahad7033/orgs",
"repos_url": "https://api.github.com/users/fahad7033/repos",
"events_url": "https://api.github.com/users/fahad7033/events{/privacy}",
"received_events_url": "https://api.github.com/users/fahad7033/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @fahad7033! Cool to see that you're using the CTC example script for training 🤗 The argument `--preprocessing_only` will run the fine-tuning script up to the end of the dataset pre-processing: https://github.com/huggingface/transformers/blob/0ba94aceb6e1ab448e0acc896764a4496759cb14/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L656\r\n\r\nOnce this run has completed, disable the flag `--preprocessing_only` (remove it from your args or set `--preprocessing_only=\"False\"`) and re-run the training script. This time, the training script will use the cached dataset (i.e. it will re-use the pre-processed dataset files that you prepared in your pre-processing run) and then commence training.\r\n\r\nIt's worth noting that using the `--preprocessing_only` flag is only recommended in distributed training when there is risk of a timeout. If this happens, we switch to a non-distributed set-up and set the `--preprocessing_only` flag. We can then go back to the distributed training set-up and have our dataset ready in cache for training.\r\n\r\nIf you are not running distributed training or aren't at risk of a timeout (i.e. a very large dataset), it'll be faster and easier for you just to run the script once without the `--preprocessing_only` argument.\r\n\r\nLet me know if you have any other questions, happy to help!",
"Thank you so much sanchit-gandhi for your response.\r\nI do have a large dataset, so it is better for me to do it in 2 steps. \r\n\r\nAfter complete the running with --preprocessing_only=\"True\", there is no cached file in the output directory, so when I re-run the script again with disabling --preprocessing_only the original dataset is processed again.\r\n\r\nAlso, I tried to review the script it seems the code processes the original dataset regardless the flag --preprocessing_only is true or false.\r\n",
"When data preprocessing is completed I got this message: \r\nINFO:__main__:Data preprocessing finished. Files cached at {'train': [], 'eval': []}",
"Hey @fahad7033! I've tried to reproduce this behaviour with a minimum working example.\r\n\r\nSystem info:\r\n- `transformers` version: 4.26.0.dev0\r\n- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.11.1\r\n- PyTorch version (GPU?): 2.0.0.dev20221210+cu117 (True)\r\n\r\nScript uses a tiny subset of the LibriSpeech ASR dataset (~9MB) and fine-tunes on a tiny Wav2Vec2 CTC model:\r\n```\r\npython run_speech_recognition_ctc.py \\\r\n --dataset_name=\"hf-internal-testing/librispeech_asr_dummy\" \\\r\n --model_name_or_path=\"hf-internal-testing/tiny-random-wav2vec2\" \\\r\n --dataset_config_name=\"clean\" \\\r\n --train_split_name=\"validation\" \\\r\n --eval_split_name=\"validation\" \\\r\n --output_dir=\"./\" \\\r\n --max_steps=\"10\" \\\r\n --per_device_train_batch_size=\"16\" \\\r\n --per_device_eval_batch_size=\"16\" \\\r\n --learning_rate=\"3e-4\" \\\r\n --warmup_steps=\"5\" \\\r\n --evaluation_strategy=\"steps\" \\\r\n --length_column_name=\"input_length\" \\\r\n --save_strategy=\"no\" \\\r\n --eval_steps=\"5\" \\\r\n --preprocessing_only=\"True\" \\\r\n --preprocessing_num_workers=\"4\" \\\r\n --freeze_feature_encoder \\\r\n --fp16 \\\r\n --overwrite_output_dir\\\r\n --group_by_length \\\r\n --do_train \\\r\n --do_eval \\\r\n```\r\n\r\n<details>\r\n<summary> Output: </summary>\r\n\r\n```\r\n12/19/2022 15:29:32 - INFO - __main__ - Data preprocessing finished. 
Files cached at \r\n{'train': [{'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-dc486168c3937e95.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-53095567e8277865.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-e089d2a96576c6bb.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-6d3d1c061f60c29b.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00000_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00001_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00002_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00003_of_00004.arrow'}], \r\n'eval': [{'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-dc486168c3937e95.arrow'}, {'filename': 
'/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-53095567e8277865.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-e089d2a96576c6bb.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-6d3d1c061f60c29b.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00000_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00001_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00002_of_00004.arrow'}, {'filename': '/home/ubuntu/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b/cache-41f1795b92412228_00003_of_00004.arrow'}]}\r\n```\r\n\r\n</details>\r\n\r\nWe can see here that the dataset has been correctly prepared and cached, so the script is working for me with this toy example. Do you have a reproducible script that I could use to re-create your run? 
It's impossible for me to say what the issue is without being able to reproduce the error on my side!\r\n\r\nAlso re-iterating a point raised in my previous message: unless you're fine-tuning using a **large dataset** on **multiple GPUs**, there is no need to use the flag `--preprocessing_only`. For a large dataset on a single GPU, it's better not to use this flag and just run training directly.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,674
| 1,674
|
NONE
| null |
### System Info
I am trying to run the script run_speech_recognition_ctc.py on a custom dataset. I use the argument preprocessing_only to make data preprocessing a separate step. My question is how to start model training (as a second step), since there is no previous checkpoint.
Thanks in advance.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
none
### Expected behavior
none
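The two-step workflow asked about above can be sketched as follows (model name, dataset name, and paths are placeholders, not values from this issue):

```shell
# Step 1: run only the dataset preprocessing; the prepared dataset is cached.
python run_speech_recognition_ctc.py \
  --model_name_or_path="facebook/wav2vec2-base" \
  --dataset_name="my_custom_dataset" \
  --output_dir="./out" \
  --preprocessing_only \
  --do_train --do_eval

# Step 2: rerun without the flag. The cached files are reused and training
# starts from the base model -- no previous checkpoint is needed.
python run_speech_recognition_ctc.py \
  --model_name_or_path="facebook/wav2vec2-base" \
  --dataset_name="my_custom_dataset" \
  --output_dir="./out" \
  --do_train --do_eval
```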
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20691/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20690
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20690/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20690/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20690/events
|
https://github.com/huggingface/transformers/pull/20690
| 1,485,415,288
|
PR_kwDOCUB6oc5E1Rvt
| 20,690
|
Add missing image transforms to init
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Adds functions from the image transforms library to the init to allow direct imports, e.g.:
`from transformers import center_crop`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20690/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20690",
"html_url": "https://github.com/huggingface/transformers/pull/20690",
"diff_url": "https://github.com/huggingface/transformers/pull/20690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20690.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20689
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20689/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20689/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20689/events
|
https://github.com/huggingface/transformers/issues/20689
| 1,485,265,385
|
I_kwDOCUB6oc5Yh1np
| 20,689
|
Best model selection changed
|
{
"login": "creisle",
"id": 4342641,
"node_id": "MDQ6VXNlcjQzNDI2NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4342641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/creisle",
"html_url": "https://github.com/creisle",
"followers_url": "https://api.github.com/users/creisle/followers",
"following_url": "https://api.github.com/users/creisle/following{/other_user}",
"gists_url": "https://api.github.com/users/creisle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/creisle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/creisle/subscriptions",
"organizations_url": "https://api.github.com/users/creisle/orgs",
"repos_url": "https://api.github.com/users/creisle/repos",
"events_url": "https://api.github.com/users/creisle/events{/privacy}",
"received_events_url": "https://api.github.com/users/creisle/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It was never possible, it has always been on the evaluation loss (or any other metric).",
"@sgugger thanks for getting back to me so quickly! Did the default metric change at any point? I am just trying to determine why the \"best model\" being saved is different when I haven't changed anything in my data/training arguments. I thought maybe it was selecting on the training loss before because that is the metric that looked closest to the checkpoint it selected before",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,670
| 1,673
| 1,673
|
NONE
| null |
### System Info
```
OS: Ubuntu 18.04.2 LTS
python: 3.8.0
torch 1.10.1+cu113
torchaudio 0.10.1+cu113
torchvision 0.11.2+cu113
transformers 4.15.0
```
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am training an NLI model (but I am pretty sure this is task independent). Here are my training arguments
```
training_args = TrainingArguments(
output_dir=args.output, # output directory
num_train_epochs=1, # total # of training epochs
per_device_train_batch_size=args.batch_size, # batch size per device during training
per_device_eval_batch_size=args.batch_size, # batch size for evaluation
warmup_steps=0, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir=os.path.join(args.output, 'logs'), # directory for storing logs
logging_steps=args.eval_steps,
learning_rate=1e-05,
evaluation_strategy="steps",
save_strategy="steps",
eval_steps=args.eval_steps,
save_steps=args.eval_steps,
gradient_checkpointing=True,
load_best_model_at_end=True,
metric_for_best_model='loss', # default changed from this at some point?
disable_tqdm=True,
save_total_limit=args.save_total_limit,
)
```
With no argument for `metric_for_best_model` or with as above setting it explicitly to `loss` the results are the same and the trainer always chooses the best model based on eval loss not training loss
### Expected behavior
I expected it to save the model based on the best training loss. Is there any way to do this? In earlier versions of this library that used to be the default behaviour, but after looking at the details it doesn't seem possible any more. Is that intentional? Is there an option that would select on the training loss that I am just not using correctly?
https://github.com/huggingface/transformers/blame/e3cc4487fe66e03ec85970ea2db8e5fb34c455f4/src/transformers/trainer.py#L2228
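The selection behaviour described above can be illustrated with a minimal sketch (not the actual `Trainer` implementation): every evaluation logs the metric named by `metric_for_best_model`, and with `loss` (the default) the comparison uses the *eval* loss with `greater_is_better=False`. The log values below are made up for illustration.

```python
# Hypothetical evaluation logs, one entry per checkpoint saved at eval_steps.
eval_logs = [
    {"step": 500, "eval_loss": 0.42},
    {"step": 1000, "eval_loss": 0.35},
    {"step": 1500, "eval_loss": 0.39},
]

# With metric_for_best_model="loss", greater_is_better defaults to False,
# so the checkpoint with the lowest eval loss is loaded at the end.
greater_is_better = False
select = max if greater_is_better else min
best_checkpoint = select(eval_logs, key=lambda log: log["eval_loss"])
print(best_checkpoint["step"])  # 1000
```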
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20689/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20688
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20688/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20688/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20688/events
|
https://github.com/huggingface/transformers/pull/20688
| 1,485,196,683
|
PR_kwDOCUB6oc5E0g4m
| 20,688
|
skip `test_multi_gpu_data_parallel_forward` for `MaskFormerSwinModelTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
`MaskFormerSwinModel` outputs `hidden_states_spatial_dimensions`, which is ``tuple(tuple(int, int))``, and can't be collected by `nn.DataParallel` to form a final output value.
(When I remove this attribute, this test passes.)
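To illustrate why, here is a simplified pure-Python sketch of the recursion that `nn.DataParallel`'s gather step performs over per-replica outputs (the real `torch.nn.parallel.scatter_gather` also handles dicts and moves tensors across devices; lists stand in for tensors here). Plain-int leaves, like the spatial dimensions, have no merge rule:

```python
def gather_map(outputs):
    # Simplified stand-in for the gather recursion: tensor-like outputs
    # (lists here) are concatenated, tuples are gathered element-wise,
    # and anything else (e.g. plain ints) cannot be merged.
    first = outputs[0]
    if isinstance(first, list):  # stand-in for torch.Tensor
        return [x for out in outputs for x in out]
    if isinstance(first, tuple):
        return tuple(gather_map(group) for group in zip(*outputs))
    raise TypeError(f"cannot gather outputs of type {type(first).__name__}")

# Each replica returns (hidden_states, hidden_states_spatial_dimensions).
replica_outputs = [
    ([1.0, 2.0], ((7, 7), (14, 14))),
    ([3.0, 4.0], ((7, 7), (14, 14))),
]

try:
    gather_map(replica_outputs)
except TypeError as err:
    print(err)  # the int leaves in the nested tuples cannot be merged
```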
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20688/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20688",
"html_url": "https://github.com/huggingface/transformers/pull/20688",
"diff_url": "https://github.com/huggingface/transformers/pull/20688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20688.patch",
"merged_at": 1670580600000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20687
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20687/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20687/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20687/events
|
https://github.com/huggingface/transformers/pull/20687
| 1,485,086,230
|
PR_kwDOCUB6oc5E0INr
| 20,687
|
Update CI to PyTorch `1.13.0`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,670
| 1,670
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
The job runs with PT 1.13 show that everything is fine except
```python
test_model_parallelism
(line 980) RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
```
Once #20686 is merged, we can merge this PR to use PT 1.13 for CI
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20687/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20687",
"html_url": "https://github.com/huggingface/transformers/pull/20687",
"diff_url": "https://github.com/huggingface/transformers/pull/20687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20687.patch",
"merged_at": 1670871897000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20686
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20686/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20686/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20686/events
|
https://github.com/huggingface/transformers/pull/20686
| 1,485,060,935
|
PR_kwDOCUB6oc5E0Cb8
| 20,686
|
Fix CIs for PyTorch 1.13
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello and have a good time\r\nCan anyone here help me with the \"multi_label_classification\" and \"single_label_classification\" functions?\r\nI am writing code for dynamic quantization, and I have encountered problems with these functions and their writing.",
"Hi @fkoeini \r\n\r\nFirst of all, your question is unrelated to this PR, and here (this PR page) is not the correct place for it.\r\n\r\nAlso, from your description, the [Hugging Face Forums](https://discuss.huggingface.co/) is the place to post the question. On GitHub, it's for bugs or feature requests :-) Thank you!"
] | 1,670
| 1,672
| 1,670
|
COLLABORATOR
| null |
# What does this PR do?
Before we can update CI to use PyTorch 1.13, there are a few things to fix.
This PR prevents `test_model_parallelism` from failing with some models, as PyTorch 1.13 more strictly requires indices to be on the same device as the indexed tensor.
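A minimal sketch of the fix pattern (shown on CPU for illustration; under model parallelism the two tensors would actually start on different GPUs, which is what triggers the PyTorch 1.13 error "indices should be either on cpu or on the same device as the indexed tensor"):

```python
import torch

weights = torch.arange(10.0)        # imagine: lives on cuda:1
indices = torch.tensor([1, 3, 5])   # imagine: lives on cuda:0

# The fix: move the indices onto the indexed tensor's device before indexing.
indices = indices.to(weights.device)
selected = weights[indices]
print(selected.tolist())  # [1.0, 3.0, 5.0]
```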
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20686/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20686",
"html_url": "https://github.com/huggingface/transformers/pull/20686",
"diff_url": "https://github.com/huggingface/transformers/pull/20686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20686.patch",
"merged_at": 1670521915000
}
|