Dataset schema (one record per GitHub issue or pull request; value ranges as shown by the dataset viewer):

| column | dtype | values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/20184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20184/comments
https://api.github.com/repos/huggingface/transformers/issues/20184/events
https://github.com/huggingface/transformers/pull/20184
1,446,511,400
PR_kwDOCUB6oc5CwhAl
20,184
New logging support to "Trainer" Class (ClearML Logger)
{ "login": "skinan", "id": 46783803, "node_id": "MDQ6VXNlcjQ2NzgzODAz", "avatar_url": "https://avatars.githubusercontent.com/u/46783803?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skinan", "html_url": "https://github.com/skinan", "followers_url": "https://api.github.com/users/skinan/followers", "following_url": "https://api.github.com/users/skinan/following{/other_user}", "gists_url": "https://api.github.com/users/skinan/gists{/gist_id}", "starred_url": "https://api.github.com/users/skinan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skinan/subscriptions", "organizations_url": "https://api.github.com/users/skinan/orgs", "repos_url": "https://api.github.com/users/skinan/repos", "events_url": "https://api.github.com/users/skinan/events{/privacy}", "received_events_url": "https://api.github.com/users/skinan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.", "@sgugger , We have made the changes according to your suggestions. Please have a look. Thank you for your co-operation.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.", "I'm finding this integration very aggressive.\r\nBasically, it seems to require that if the clearml python package is installed, you must have clearml set up in order to use huggingface.\r\n\r\nWe have optional clearml setup in our environment (with a private clearml server), but not using it in most situations at the moment.\r\n\r\nThis now started crashing with\r\n```\r\nFile /usr/local/lib/python3.8/dist-packages/clearml/backend_api/session/session.py:180, in Session.__init__(self, worker, api_key, secret_key, host, logger, verbose, initialize_logging, config, http_retries_config, **kwargs)\r\n 177 raise ValueError(\"ClearML host was not set, check your configuration file or environment variable\")\r\n 179 if not self._offline_mode and (not self.secret_key and not self.access_key and not self.__auth_token):\r\n--> 180 raise MissingConfigError()\r\n 182 self._ssl_error_count_verbosity = self.config.get(\r\n 183 \"api.ssl_error_count_verbosity\", self._ssl_error_count_verbosity)\r\n 185 self.__host = host.strip(\"/\")\r\n\r\nMissingConfigError: It seems ClearML is not configured on this machine!\r\nTo get started with ClearML, setup your own 'clearml-server' or create a free account at https://app.clear.ml/\r\nSetup instructions can be found here: https://clear.ml/docs\r\n```\r\n\r\nEDIT: Now I realize that I missed that `report_to` in fact defaults to `all` and that's gist of the issue. The wording of the ClearML documentation implies quite a bit that it needs to be added to `report_to` manually, though, which confused me.", "You should have seen a log informing you that `report_to` was defaulting to all tools installed (issued [here](https://github.com/huggingface/transformers/blob/3830b3f74a57b3bcf6f14016c483fb0bb14b01ce/src/transformers/training_args.py#L1218)). We should probably change it to a warning so that it's more visible, but the default will change in v5 of Transformers (from all to None)." ]
1,668
1,672
1,668
CONTRIBUTOR
null
I have added a ClearML callback class to log experiments using `ClearML Task.` ClearML logger logs everything to ClearML WebUI. ClearML logs Hyperparameters, Scalars, Models, Checkpoints, and other necessary artifacts. I request @sgugger to review this pull request as it works with the `Trainer` class. The `integrations.py` contains the major contents of this pull request through the `ClearMLCallback` class. Some other small changes have been made to another set of files to maintain the integrity of the codebase, such as adding simple basic tests. > ClearML is a leading MLOps stack that can supercharge HuggingFace Transformers training and tracking with its state-of-the-art experiment tracking capability. ClearML: https://clear.ml/ **What ClearML Experiment Manager can log? Everything! You just name it. Example Screenshots:** ![Screenshot from 2022-11-12 21-28-41](https://user-images.githubusercontent.com/46783803/201482999-be3d93cc-f815-4252-b18e-547fea46a09a.png) ![Screenshot from 2022-11-12 21-29-10](https://user-images.githubusercontent.com/46783803/201483021-1aa4a584-210a-43d4-a295-4cb00364cbe9.png) ![Screenshot from 2022-11-12 21-27-27](https://user-images.githubusercontent.com/46783803/201483025-90df7687-4646-4fe0-b84c-626bb8c590e4.png) ![Screenshot from 2022-11-12 21-28-13](https://user-images.githubusercontent.com/46783803/201483034-ed4ac684-0762-4b15-93d0-e55fa6e959e5.png) ![Screenshot from 2022-11-12 22-07-02](https://user-images.githubusercontent.com/46783803/201483201-b845e7ce-eb13-4829-b4b3-50ecb7281941.png) ![Screenshot from 2022-11-12 21-26-58](https://user-images.githubusercontent.com/46783803/201483065-b7c1cf39-b69e-4462-a702-659b8e0f2b23.png) ![Screenshot from 2022-11-12 22-06-34](https://user-images.githubusercontent.com/46783803/201483190-29999111-ffe8-443b-8adb-aebcc2519610.png) ![Screenshot from 2022-11-12 21-28-27](https://user-images.githubusercontent.com/46783803/201483040-9541982e-fcdc-44fb-a327-c859bec61e3b.png) ### **Example script to utilize ClearML Callback with Trainer:** [trainer_with_clearml.zip](https://github.com/huggingface/transformers/files/9995390/trainer_with_clearml.zip)
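For context, a rough sketch of how the callback described in this PR would be enabled through the `Trainer`. This is not taken from the PR diff; the `CLEARML_PROJECT` / `CLEARML_TASK` environment-variable names are assumptions based on common ClearML conventions:

```python
# A rough sketch, not the PR's actual test code. The "clearml" key in
# `report_to` and the two environment variables below are assumptions about
# how the new ClearMLCallback is wired up.
import os
from transformers import TrainingArguments

os.environ["CLEARML_PROJECT"] = "HuggingFace Transformers"  # assumed env var
os.environ["CLEARML_TASK"] = "trainer-demo"                 # assumed env var

args = TrainingArguments(output_dir="out", report_to=["clearml"])
# Trainer(model=..., args=args, train_dataset=...).train() would then stream
# hyperparameters, scalars and checkpoints to the ClearML WebUI.
```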
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20184", "html_url": "https://github.com/huggingface/transformers/pull/20184", "diff_url": "https://github.com/huggingface/transformers/pull/20184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20184.patch", "merged_at": 1668524939000 }
https://api.github.com/repos/huggingface/transformers/issues/20183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20183/comments
https://api.github.com/repos/huggingface/transformers/issues/20183/events
https://github.com/huggingface/transformers/issues/20183
1,446,460,556
I_kwDOCUB6oc5WNzyM
20,183
[docs] translating.md needs an update
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Sorry for the drag in progress, but may you please put the `WIP` tag to this issue as well? Thank you @sgugger.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I am working on this issue.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,676
1,676
CONTRIBUTOR
null
Dear 🤗 HuggingFace Team, I would like to suggest updating the guide for issues in the following sections: - 🗞️ Open an issue - [x] As of now, there is no template called `Translation template` while opening an issue. Instead, I had to copy-paste issues made by the community. https://github.com/huggingface/transformers/pull/20199 - [ ] To view the translated documentation in the _first_ PR for a new language, one should first update line 18 of `build_documentation.yml` and line 17 of `build_pr_documentation.yml` from `.github/workflows` to include their language code. This should be explained in the guide, ideally in a separate section called `Open a pull request`. - 📋 Copy-paste the English version with a new language code - [ ] It should be made clear that this step is only needed for the first PR, and subsequent PRs can get the documents they want translated from `en` separately, with updates to `_toctree.yml` done in tandem. - 📚 etc. - [ ] For non-alphabet languages, translators must put in a custom link as shown [here](https://github.com/huggingface/doc-builder#writing-documentation-for-hugging-face-libraries:~:text=a%20way%20to%20customize%20the%20anchor%20link) (screenshot included below). <hr/> ![image](https://user-images.githubusercontent.com/29195190/201503131-4bf8eed6-e5df-475d-b545-a983b424174d.png) As a possible solution we could: - refactor the document into two sections: `New languages` and `In-progress languages` - add more content regarding how the first pull request should be done - and account for non-alphabet languages by providing a small sidenote Thank you so much for this wonderful library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20183/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20182/comments
https://api.github.com/repos/huggingface/transformers/issues/20182/events
https://github.com/huggingface/transformers/issues/20182
1,446,443,450
I_kwDOCUB6oc5WNvm6
20,182
Make tokenizer.pad() compatible with labels
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There is no canonical way to pad labels directly using the tokenizer, since what we want for padding depends on the task (we don't want any padding in sentence classification, but a different one in token classification, summarization or translation) and the tokenizer is not aware of the task.\r\n\r\nIn your code sample, the easiest way is just to replace the name `\"labels\"` by `\"input_ids\"` in your call to pad. As shown in all examples, the canonical way is to just do everything in `__call__`, or you can use the data collators to help as well, since they contain the code to pad adapted to the task at hand.", "That makes a lot of sense! I've been spending too much time on translation-related topics that I forgot that this is not as straightforward as it seems. Perhaps in a perfect world, the tokenizer (+ model) would be task-aware. Just like some tokenizers can switch between a source/target language, they could then switch between tasks and therefore appropriate padding techniques. Maybe that would make automatic integrations with pipelines easy, too! This is just me dreaming - I am aware that there are probably billions of reasons not to implement it like that. :-)" ]
1,668
1,668
1,668
COLLABORATOR
null
### Feature request In my project, I need to do some custom sorting and filtering in my sampler. Therefore, each item in my Dataset is already tokenized/truncated (and not padded). So the padding occurs in the `collate_fn` of the dataloader. Here I would need to pad both inputs and labels. While this is straightforward to do with `tokenizer.pad()` for the inputs (input_ids, attention_mask) I cannot get this to work for labels. This first example just tries to put two samples (dictionaries with keys input_ids, attention_mask, and labels) through tokenizer.pad. I would not expect this to work because in seq2seq, inputs and outputs can have different lengths so the total max. sequence length will differ. ``` import torch from transformers import MBartTokenizer samples = [{'input_ids': torch.LongTensor([8622, 621, 5941, 2750, 765, 10, 10422, 111, 104687, 27771, 4, 92136, 538, 100244, 3642, 8966, 85493, 4, 53927, 621, 24911, 5245, 552, 49725, 4, 398, 621, 1053, 26255, 13081, 34, 59207, 4, 87, 73275, 13, 110, 2661, 9, 36904, 297, 65842, 7, 5, 2, 250004]), 'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'labels': torch.LongTensor([250132, 250147, 250122, 250132, 250092, 5941, 250030, 250148, 250132, 10422, 250104, 250031, 250132, 104687, 27771, 250057, 250135, 250132, 250057, 250057, 250057, 250057, 2, 250004])}, {'input_ids': torch.LongTensor([20625, 32692, 34475, 1821, 5792, 5941, 182417, 32, 4, 136201, 4, 201, 1530, 4, 7911, 2, 250004]), 'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'labels': torch.LongTensor([250132, 250147, 250122, 250132, 8337, 250104, 250030, 250132, 32692, 250057, 250031, 250132, 199, 1681, 250031, 250148, 250132, 765, 250132, 250145, 250057, 250057, 250123, 250132, 136, 250072, 136201, 250073, 201, 1530, 250074, 7911, 250057, 250057, 2, 250004])}] tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25") padded = tokenizer.pad(samples, padding=True, return_tensors="pt") ``` Error: > Traceback (most recent call last): > File "transformers\tokenization_utils_base.py", line 715, in convert_to_tensors > tensor = as_tensor(value) > ValueError: expected sequence of length 24 at dim 1 (got 36) > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "scratch_2.py", line 43, in <module> > padded = tokenizer.pad(samples, > File "transformers\tokenization_utils_base.py", line 2985, in pad > return BatchEncoding(batch_outputs, tensor_type=return_tensors) > File "transformers\tokenization_utils_base.py", line 210, in __init__ > self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) > File "transformers\tokenization_utils_base.py", line 731, in convert_to_tensors > raise ValueError( > ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
So I figured I would just split up my data into the inputs and the labels, but that also does not work: ```python import torch from transformers import MBartTokenizer samples = [{'input_ids': torch.LongTensor([8622, 621, 5941, 2750, 765, 10, 10422, 111, 104687, 27771, 4, 92136, 538, 100244, 3642, 8966, 85493, 4, 53927, 621, 24911, 5245, 552, 49725, 4, 398, 621, 1053, 26255, 13081, 34, 59207, 4, 87, 73275, 13, 110, 2661, 9, 36904, 297, 65842, 7, 5, 2, 250004]), 'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'labels': torch.LongTensor([250132, 250147, 250122, 250132, 250092, 5941, 250030, 250148, 250132, 10422, 250104, 250031, 250132, 104687, 27771, 250057, 250135, 250132, 250057, 250057, 250057, 250057, 2, 250004])}, {'input_ids': torch.LongTensor([20625, 32692, 34475, 1821, 5792, 5941, 182417, 32, 4, 136201, 4, 201, 1530, 4, 7911, 2, 250004]), 'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'labels': torch.LongTensor([250132, 250147, 250122, 250132, 8337, 250104, 250030, 250132, 32692, 250057, 250031, 250132, 199, 1681, 250031, 250148, 250132, 765, 250132, 250145, 250057, 250057, 250123, 250132, 136, 250072, 136201, 250073, 201, 1530, 250074, 7911, 250057, 250057, 2, 250004])}] tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25") inputs = [{k: v for k, v in sample.items() if k in ["attention_mask", "input_ids"]} for sample in samples] padded_inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt") labels = [{"labels": sample["labels"]} for sample in samples] padded_labels = tokenizer.pad(labels, padding=True, return_tensors="pt") ``` Error: > Traceback (most recent call last): > File "scratch_2.py", line 49, in <module> > padded_labels = tokenizer.pad(labels, > File "transformers\tokenization_utils_base.py", line 2904, in pad > raise ValueError( > ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['labels'] My assumption is that the intention for `pad` has always been to only pad inputs and not labels but I might just be missing something. ### Motivation Unless I am missing something, it is currently not straightforward to use tokenizer.pad() on labels. ### Your contribution I'd need some guidance on what needs to change to make a tokenizer's `.pad` work with labels. If I know what to change, I can make a PR.
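For reference, a sketch of the two workarounds the maintainer suggests in the comment above (rename the key, or use a task-aware collator), assuming the `samples` list defined in the issue body:

```python
# A sketch of the two suggested workarounds; assumes `samples` as defined in
# the issue body above.
from transformers import DataCollatorForSeq2Seq, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")

# Workaround 1: `pad` only understands "input_ids", so present the label
# tensors under that key. Note this pads with tokenizer.pad_token_id; for
# loss masking you would typically replace the pads with -100 afterwards.
labels = [{"input_ids": s["labels"]} for s in samples]
padded_labels = tokenizer.pad(labels, padding=True, return_tensors="pt")["input_ids"]

# Workaround 2: a seq2seq collator pads inputs and labels together and
# already uses -100 for the label padding.
collator = DataCollatorForSeq2Seq(tokenizer, label_pad_token_id=-100)
batch = collator([{k: v.tolist() for k, v in s.items()} for s in samples])
```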
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20182/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20181/comments
https://api.github.com/repos/huggingface/transformers/issues/20181/events
https://github.com/huggingface/transformers/pull/20181
1,446,349,039
PR_kwDOCUB6oc5Cv-1n
20,181
translate zh quicktour(#20095)
{ "login": "bfss", "id": 31245245, "node_id": "MDQ6VXNlcjMxMjQ1MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/31245245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bfss", "html_url": "https://github.com/bfss", "followers_url": "https://api.github.com/users/bfss/followers", "following_url": "https://api.github.com/users/bfss/following{/other_user}", "gists_url": "https://api.github.com/users/bfss/gists{/gist_id}", "starred_url": "https://api.github.com/users/bfss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bfss/subscriptions", "organizations_url": "https://api.github.com/users/bfss/orgs", "repos_url": "https://api.github.com/users/bfss/repos", "events_url": "https://api.github.com/users/bfss/events{/privacy}", "received_events_url": "https://api.github.com/users/bfss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I have add them to doc build workflow. @sgugger " ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Translate the quicktour to zh #20095
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20181/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20181", "html_url": "https://github.com/huggingface/transformers/pull/20181", "diff_url": "https://github.com/huggingface/transformers/pull/20181.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20181.patch", "merged_at": 1669038258000 }
https://api.github.com/repos/huggingface/transformers/issues/20180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20180/comments
https://api.github.com/repos/huggingface/transformers/issues/20180/events
https://github.com/huggingface/transformers/pull/20180
1,446,340,825
PR_kwDOCUB6oc5Cv9Hh
20,180
[i18n-KO] Translated index page to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "Hey! 만나서 반가워요! Would love to hear more about the project you were working on and how transformers is used in Korea! 🤗 \r\n\r\nThanks a lot for your work! \r\n- The PR documentation endpoint was updating and should now be visible! The `KO` is not appearing because I think you did not add the file to the `toctree` indeed.\r\n- You should delete all the other files! We don't keep template files and if someone wants to translate for futur updates, they will just copy the single file 😄 \r\n- I think you did a great job in keeping `Transformer` as it is indeed more the name of the library! When talking about actual `transformers` architecture, its up to you, and depends on what is more used/understandable! \r\n\r\nI will be glad to review once you have cleaned up the modified files! 👍🏻 Also cc @eunseojo just FYI 😉 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "Hello @ArthurZucker, 저도 만나서 반가워요! Thank you so much for your warm welcome and guidance.\r\nThank you for your help with this PR, @eunseojo 🤗 \r\n\r\n- I made updates to `_toctree.yml` as you suggested and also added a language code to `.github/workflows/build_*_documentation`.\r\n- So much simpler now! I translated the big section titles as they were mentioned in the `index.mdx` page. This raised errors from the pr docs workflow saying sections were empty,\r\n![image](https://user-images.githubusercontent.com/29195190/201475900-79bb7272-1628-4591-9e37-3399c27607cf.png)\r\nso I added a placeholder called `in_translation.mdx`. As a result, the left sidebar is a zebra now 🦓 🤣 .\r\n![image](https://user-images.githubusercontent.com/29195190/201475993-23aa9a7d-4d63-43e3-a81f-2802e6cd71f2.png)\r\nHow would you solve this problem? \r\n- Yay, 🧨 thank you very much for your compliment hehe\r\n\r\nP.S. \r\nI'm just a beginner but Korea, at least a community I learn lots from- [PseudoLab](https://pseudo-lab.com/), is extensively using huggingface. The government funds a yearly hackathon called [OSSCA](https://www.contribution.ac/), kinda like hacktoberfest but with dedicated mentors. RustPython and i18n-Kubernetes team got the Gold and Silver for this year. Our team AzureSDK got Bronze with 5 others.\r\n\r\nMy side-project goal is to enhance \"google translator-ish\" results: **to a more friendly, neighbourhood tone** in Korean (_grammar checking included_!) I'm still tinkering, just learning mostly how to get this to work. I have another idea for generating children's books, but for a later time perhaps. Thank you for your interest!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). 
All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.", "Thanks for working on this! Excited to see a new doc in Korean!" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Translated the `Getting started - index.mdx` file of the documentation to Korean. I use transformers for my side-project and wanted to contribute. As a beginner, this seemed appropriate. Some questions: - I can't access the pr documentation endpoint for Korean. Is there a missing step? - I left all the other files alone after copying the English docs. Is this acceptable for future updates or should I add contents to the `_toctree.yml` only when translation is complete? Should I delete all other files for a cleaner review? - In Korea, people use both `Transformer` and `트랜스포머` to describe the model. However, as this is more of a product of HuggingFace I opted not to translate it. May you please let me know which you would prefer? Thank you in advance for your review. <!-- Remove if not applicable --> Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, may you please review this PR? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20180/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20180", "html_url": "https://github.com/huggingface/transformers/pull/20180", "diff_url": "https://github.com/huggingface/transformers/pull/20180.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20180.patch", "merged_at": 1668445761000 }
https://api.github.com/repos/huggingface/transformers/issues/20179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20179/comments
https://api.github.com/repos/huggingface/transformers/issues/20179/events
https://github.com/huggingface/transformers/issues/20179
1,446,296,960
I_kwDOCUB6oc5WNL2A
20,179
🌐 [i18n-KO] Translating docs to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello @sgugger, may you please add the `WIP` tag to this issue? Thank you so much.", "For contributors and PseudoLab team members, please see a PR template [gist](https://gist.github.com/wonhyeongseo/af2a8855264bb494212f81e8b8173b9a) ([raw](https://gist.githubusercontent.com/wonhyeongseo/af2a8855264bb494212f81e8b8173b9a/raw/4a50fca630c4f6188cea658bdba98704d9a3e979/pr-template.md)) that could ease your first PR experience.\r\n@0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo", "Dear @sgugger, would you add `document` label to this issue?\r\nI think other issues for the translation have a `document` label.\r\nThank you in advance\r\n\r\n@wonhyeongseo \r\nI changed my PR with a new PR template. would you change \r\n`Load pretrained instances with an AutoClass` to [[WIP]🌐[i18n-KO] Translate autoclass_tutorial to Korean and Fix the typo of quicktour #22533](https://github.com/huggingface/transformers/pull/22533)\r\n", "@sgugger wow! Thank you a million! :-)", "@sgugger\r\nDear HuggingFace Team,\r\n\r\nI hope you are doing well. My name is Wonhyeong Seo from the [Pseudo Lab](https://linkedin.com/company/pseudolab) team. As you may know, we are actively working on localizing the `huggingface/transformers` repository documentation into Korean. Our goal is to make this valuable resource more accessible to Korean-speaking users, thereby promoting the development of NLP and machine learning in Korea and beyond.\r\n\r\nWe are currently in the process of applying for [government sponsorship](https://www.oss.kr/notice/show/eac6d5c8-01c1-4cc0-b2d1-1a88e96942e2?page=1) to support our localization efforts. To strengthen our application, we kindly request your permission to use the documentation's Google Analytics data to include in our reports. This data will help us demonstrate the impact of our work and the potential benefits of localizing the documentation.\r\n\r\nAdditionally, we would be grateful for any feedback or suggestions from the HuggingFace team regarding our localization project. Your insights will be invaluable in ensuring our efforts align with your vision and standards, and in fostering a successful collaboration.\r\n\r\nThank you for considering our request. We look forward to your response and the opportunity to work together to expand the reach of the `huggingface/transformers` repository.\r\n\r\nBest regards,\r\nHyunseo Yun, Kihoon Son, Gabriel Yang, Sohyun Sim,\r\nNayeon Han, Woojun Jung, Wonhyeong Seo\r\nThe Localization Initiative members of Pseudo Lab", "Hey @wonhyeongseo, thanks for all you work on translating the documentation to Korean!\r\n\r\nDo you mind contacting me at lysandre at hf.co so we may see how best to help you?", "Welcome to a simple guide on how to use ChatGPT to speed up the translation process. By following these guidelines, you can create a first draft in less than an hour. Please note that it is essential to proofread your work thoroughly before sharing it with your colleagues.\r\n\r\n(Optional) If you want to extract only the content without code blocks, tables, and redundant new lines, you can use the command `sed '/```/,/```/d' file.md | sed '/^|.*|$/d' | sed '/^$/N;/^\\n$/D'`. 
In case you are using a mobile device, you can check the link https://sed.js.org/ for using sed online.\r\n\r\nTo initiate the translation process, you need to provide your sentences as input to ChatGPT. Your first prompt should look like this:\r\n\r\n```mdx\r\nWhat do these sentences about Hugging Face Transformers (a machine learning library) mean in Korean? Please do not translate the word after a 🤗 emoji as it is a product name.\r\n```md\r\n<your sentences>\r\n```\r\n\r\nAfter submitting the first prompt, you can use the following prefix for the next ten prompts:\r\n\r\n```mdx\r\n```next-part\r\n<your sentences>\r\n```\r\n\r\n*Note that after ten prompts, you must remind ChatGPT of the task if you are not using LangChain.*\r\n\r\nBy following these guidelines, you can create a first draft of your translation in a shorter time frame. However, it is crucial to emphasize that the quality of the final output depends on the accuracy of the input and the proofreading process.\r\n\r\nPS: Please note that we do not have a Korean LLM that can automate the proofreading process at the moment. However, in July, Naver plans to launch their HyperCLOVA Korean LLM model, which might automate the entire process. We are optimistic that our government proposal will be accepted, allowing us to increase our talent pool and work towards achieving a more automated translation process with them.", "Dear @LysandreJik ,\r\n\r\nI hope you are doing well. I wanted to inform you that I have sent an email with the subject line \"[i18n-KO] Request for Collaboration: Hugging Face Mentorship Program.\" Whenever you have a moment, please take a look and provide a response. Thank you so much for your interest to this collaboration. If you have any questions, please don't hesitate to contact me.\r\n\r\nBest regards,\r\nWonhyeong Seo", "@gabrielwithappy @sim-so @jungnerd @HanNayeoniee @0525hhgus @KIHOON71 \nFrom this merge of `model_sharing.mdx` #22991 , I learned that we don't have to `git rebase -i` as other open source libraries mandate. Therefore, I propose we commit in 4 steps like this:\n\n1. `docs: ko: <file-name>` - As we always do for the first commit. Copy the initial English file under `ko` and edit TOC: both external and (soon-to-be-automated) internal.\n\n> From this point forward, you may need to squash commits in each step.\n\n2. `feat: [nmt|manual] draft` - Machine-translate the entire file with: dedicated translators, prompts, or any kind of automation. You may choose to translate manually, and that is ok as long as you specify it in the commit message.\n3. `fix: manual edits` - Proofread the draft thoroughly.\n4. `fix: resolve suggestions` - Get reviews and resolve suggestions.\n\nWith this, it will be easier for collaborators to see the original English and your changes side by side. Not to mention, we can use diffs as pre-training data for the in-house rlhf translation model.\n\n@ArthurZucker @sgugger , when merging a PR, how is the main commit message decided if there are multiple commits? Do you have to manually write it, or is the first commit message of the PR selected? Thank you for your insights and continued support. Much love from Korea 🇰🇷💖💕🙏", "The main commit message is the title of the PR.", "Hey all! As some people were interested in a place to discuss about translations, we opened a category in the [HF Discord server](http://hf.co/join/discord) with a category for internationalization and translation efforts, including a Korean channel!", "Hi Pseudo Lab friends! 
I just wanted to provide a quick update on where the translation progress currently stands:\r\n\r\n- 73% done ✅ \r\n- 6 PRs pending review; once merged, you'll be up to 81% 📈 \r\n- 15 files left to translate before ✨ **100%** ✨\r\n\r\nGreat work, and big thanks again for all your contributions to fully translate the 🤗 Transformers documentation. ", "안녕하세요 개인적으로 ```text generation``` part의 번역에 참여하고자 합니다.\r\ndraft가 완성되면 PR보내드리겠습니다!\r\n\r\nHi All! I would like to participate the translation job (especailly the part of ```text generation```).\r\nIf a first draft is done, I will send a PR request and then let you know.", "[huggingface_hub](https://github.com/huggingface/huggingface_hub/issues/1626)의 docs를 [transformer](https://github.com/huggingface/transformers/issues/20179)로 잘못 멘션했습니다. 현재 수정해 두었으며, 바로 위 멘션은 무시해주세요. 죄송합니다.\r\n\r\nI incorrectly mentioned [huggingface_hub](https://github.com/huggingface/huggingface_hub/issues/1626)'s docs as a [transformer](https://github.com/huggingface/transformers/issues/20179), I've fixed it now, please ignore the comment immediately above, sorry." ]
1,668
1,700
null
CONTRIBUTOR
null
Hi! Let's bring the documentation to all the Korean-speaking community 🌏 (currently 9 out of 77 complete) Would you want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `ko` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `ko/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger and @eunseojo for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). * With the [HuggingFace Documentation l10n](https://pseudo-lab.com/HuggingFace-0558662add4949558f6b4c4d526547da) initiative of [Pseudo Lab](https://pseudo-lab.com/), full translation will be done even faster. 🎉 Please give us your support! Cheers to our team 👍@0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo 안녕하세요! 한국어를 사용하는 모두가 기술 문서를 읽을 수 있게 해보아요 🌏 (현재 77개 문서 중 9개 완료) 번역에 참여하고 싶으신가요? 🤗 [번역 가이드](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md)를 먼저 읽어보시기 바랍니다. 끝 부분에 번역해야할 파일들이 나열되어 있습니다. 작업하고 계신 파일이 있다면 여기에 간단히 알려주세요. 중복되지 않도록 `작업중`으로 표시해둘게요. 참고 사항: * 기술 문서이지만 (친구에게 설명 듣듯이) 쉽게 읽히면 좋겠습니다. __존댓말__ 로 써주시면 감사하겠습니다. * 성별은 일부 언어(스페인어, 프랑스어 등)에만 적용되는 사항으로, 한국어의 경우 번역기를 사용하신 후 문장 기호와 조사 등이 알맞는지 확인해주시기 바랍니다. * [소스 폴더](https://github.com/huggingface/transformers/tree/main/docs/source) 아래 `ko` 폴더에 번역본을 넣어주세요. * 목차(`ko/_toctree.yml`)도 함께 업데이트해주세요. [영어 목차](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml)와 순서가 동일해야 합니다. * 모두 마치셨다면, 기록이 원활하도록 PR을 여실 때 현재 이슈(`#20179`)를 내용에 넣어주시기 바랍니다. 리뷰 요청은 @ArthurZucker님, @sgugger님, @eunseojo님께 요청해주세요. * 🙋 커뮤니티에 마음껏 홍보해주시기 바랍니다! 🤗 [포럼](https://discuss.huggingface.co/)에 올리셔도 좋아요. * [가짜연구소](https://pseudo-lab.com/)의 [이니셔티브](https://pseudo-lab.com/HuggingFace-0558662add4949558f6b4c4d526547da)로 번역이 더욱 빠르게 진행될 예정입니다. 🎉 많은 응원 부탁드려요! 
우리팀 화이팅 👍 @0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo ## GET STARTED - [x] 🤗 Transformers https://github.com/huggingface/transformers/pull/20180 - [x] Quick tour https://github.com/huggingface/transformers/pull/20946 - [x] Installation https://github.com/huggingface/transformers/pull/20948 ## TUTORIAL - [x] Pipelines for inference https://github.com/huggingface/transformers/pull/22508 - [x] Load pretrained instances with an AutoClass https://github.com/huggingface/transformers/pull/22533 - [x] Preprocess https://github.com/huggingface/transformers/pull/22578 - [x] Fine-tune a pretrained model https://github.com/huggingface/transformers/pull/22670 - [x] Train with a script https://github.com/huggingface/transformers/pull/22793 - [x] Distributed training with 🤗 Accelerate https://github.com/huggingface/transformers/pull/22830 - [x] Load and train adapters with 🤗 PEFT https://github.com/huggingface/transformers/pull/25706 - [x] Share a model - [x] Agents https://github.com/huggingface/transformers/pull/24881 - [x] Generation with LLMs https://github.com/huggingface/transformers/pull/25791 ## TASK GUIDES ### NATURAL LANGUAGE PROCESSING - [x] Text classification https://github.com/huggingface/transformers/pull/22655 - [x] Token classification https://github.com/huggingface/transformers/pull/22945 - [x] Question answering - [x] Causal language modeling - [x] Masked language modeling https://github.com/huggingface/transformers/pull/22838 - [x] Translation https://github.com/huggingface/transformers/pull/22805 - [x] Summarization https://github.com/huggingface/transformers/pull/22783 - [x] Multiple choice ### AUDIO - [x] Audio classification https://github.com/huggingface/transformers/pull/26200 - [x] Automatic speech recognition ### COMPUTER VISION - [x] Image classification - [x] Semantic segmentation https://github.com/huggingface/transformers/pull/26515 - [x] Video classification - [x] Object detection - [x] Zero-shot object detection - [x] Zero-shot image classification - [x] Depth estimation ### MULTIMODAL - [x] Image captioning - [x] Document Question Answering - [x] Visual Question Answering https://github.com/huggingface/transformers/pull/25679 - [ ] Text to speech ### GENERATION - [ ] Customize the generation strategy ## DEVELOPER GUIDES - [x] Use tokenizers from 🤗 Tokenizers https://github.com/huggingface/transformers/pull/22956 - [x] Inference for multilingual models - [x] Create a custom architecture https://github.com/huggingface/transformers/pull/22754 - [x] Sharing custom models https://github.com/huggingface/transformers/pull/22534 - [x] Run training on Amazon SageMaker https://github.com/huggingface/transformers/pull/22509 - [x] Export to ONNX https://github.com/huggingface/transformers/pull/22806 - [x] Export to TFLite - [x] Export to TorchScript - [ ] Benchmarks - [ ] Notebooks with examples - [x] Community resources https://github.com/huggingface/transformers/pull/25674 - [x] Custom Tools and Prompts - [x] Troubleshoot ## PERFORMANCE AND SCALABILITY - [x] Overview ### EFFICIENT TRAINING TECHNIQUES - [ ] Training on one GPU https://github.com/huggingface/transformers/pull/25250 - [x] Training on many GPUs https://github.com/huggingface/transformers/pull/26244 - [x] Training on CPU https://github.com/huggingface/transformers/pull/24911 - [x] Training on many CPUs https://github.com/huggingface/transformers/pull/24923 - [ ] Training on TPUs - [x] Training on TPU with TensorFlow - [ ] Training on Specialized Hardware - [x] Custom 
hardware for training https://github.com/huggingface/transformers/pull/24966 - [x] Hyperparameter Search using Trainer API ### OPTIMIZING INFERENCE - [x] Inference on CPU https://github.com/huggingface/transformers/pull/24920 - [x] Inference on one GPU https://github.com/huggingface/transformers/pull/24978 - [x] Inference on many GPUs https://github.com/huggingface/transformers/pull/24943 - [ ] Inference on Specialized Hardware - [x] Instantiating a big model https://github.com/huggingface/transformers/pull/26245 - [x] Debugging https://github.com/huggingface/transformers/pull/26246 - [x] XLA Integration for TensorFlow Models https://github.com/huggingface/transformers/pull/24904 - [ ] Optimize inference using `torch.compile` ### CONTRIBUTE - [x] How to contribute to transformers? https://github.com/huggingface/transformers/pull/25877 - [x] How to add a model to 🤗 Transformers? https://github.com/huggingface/transformers/pull/24957 - [x] How to convert a 🤗 Transformers model to TensorFlow? https://github.com/huggingface/transformers/pull/25017 - [x] How to add a pipeline to 🤗 Transformers? https://github.com/huggingface/transformers/pull/25498 - [x] Testing https://github.com/huggingface/transformers/pull/24900 - [ ] Checks on a Pull Request ### CONCEPTUAL GUIDES - [x] Philosophy https://github.com/huggingface/transformers/pull/25010 - [ ] Glossary - [x] What 🤗 Transformers can do - [x] How 🤗 Transformers solve tasks https://github.com/huggingface/transformers/pull/23844 - [x] The Transformer model family https://github.com/huggingface/transformers/pull/24625 - [x] Summary of the tokenizers https://github.com/huggingface/transformers/pull/26243 - [x] Attention mechanisms - [x] Padding and truncation https://github.com/huggingface/transformers/pull/23823 - [x] BERTology https://github.com/huggingface/transformers/pull/23968 - [x] Perplexity of fixed-length models https://github.com/huggingface/transformers/pull/23850 - [x] Pipelines for webserver inference https://github.com/huggingface/transformers/pull/24828 - [x] Model training anatomy https://github.com/huggingface/transformers/pull/25755 <details> <summary> ## Other relevant PRs along the way </summary> - Enable easy Table of Contents editing https://github.com/huggingface/transformers/pull/22581 - Added forgotten internal English anchors for `sagemaker.mdx` https://github.com/huggingface/transformers/pull/22549 - Fixed anchor links for `auto_class`, `training` https://github.com/huggingface/transformers/pull/22796 - Update ToC from upstream https://github.com/huggingface/transformers/pull/23112 </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20179/reactions", "total_count": 5, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20179/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/20178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20178/comments
https://api.github.com/repos/huggingface/transformers/issues/20178/events
https://github.com/huggingface/transformers/pull/20178
1,445,920,446
PR_kwDOCUB6oc5Cug7W
20,178
Support Bloom models in distillation example
{ "login": "mapmeld", "id": 643918, "node_id": "MDQ6VXNlcjY0MzkxOA==", "avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mapmeld", "html_url": "https://github.com/mapmeld", "followers_url": "https://api.github.com/users/mapmeld/followers", "following_url": "https://api.github.com/users/mapmeld/following{/other_user}", "gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}", "starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions", "organizations_url": "https://api.github.com/users/mapmeld/orgs", "repos_url": "https://api.github.com/users/mapmeld/repos", "events_url": "https://api.github.com/users/mapmeld/events{/privacy}", "received_events_url": "https://api.github.com/users/mapmeld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20178). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20178). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20178). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This updates the model distillation example to support more models on the Hub, including Bloom models. I use this code to create smaller, monolingual generative models based on mGPT or Bloom. Demo notebook: https://colab.research.google.com/drive/1DJLNA4TcW45HYzuSDAVb8t6n7OTU1NyI?usp=sharing - Support Bloom model and tokenizer in distillation example scripts - Handles Hub model and tokenizer names with a slash, such as 'sberbank-ai/mGPT' - Allow models without `max_position_embeddings` and tokenizers without set `max_model_input_sizes` - Add `max_model_input_size` as a CLI param to make it easier to train on one GPU and support tokenizers without `max_model_input_sizes` - remove `git log` call and `git-python` dependency ---- - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests?
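To illustrate the fallback behaviour the PR description mentions, here is a hedged sketch; the function name and argument names are illustrative, not taken from the PR diff, with `cli_max_length` standing in for the new `max_model_input_size` CLI param:

```python
# Illustrative only: sketches the fallback order the PR description implies
# for models without `max_position_embeddings` and tokenizers without
# `max_model_input_sizes`.
from typing import Optional

from transformers import AutoConfig, AutoTokenizer

def resolve_max_length(model_name: str, cli_max_length: Optional[int]) -> int:
    config = AutoConfig.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if cli_max_length is not None:  # explicit CLI value wins
        return cli_max_length
    if getattr(config, "max_position_embeddings", None) is not None:
        return config.max_position_embeddings
    return tokenizer.model_max_length  # may be a huge sentinel if unset

# e.g. resolve_max_length("bigscience/bloom-560m", cli_max_length=1024)
```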
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20178/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20178", "html_url": "https://github.com/huggingface/transformers/pull/20178", "diff_url": "https://github.com/huggingface/transformers/pull/20178.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20178.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20177/comments
https://api.github.com/repos/huggingface/transformers/issues/20177/events
https://github.com/huggingface/transformers/pull/20177
1,445,651,293
PR_kwDOCUB6oc5CtmsZ
20,177
Add missing ESM autoclass
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
MEMBER
null
The autoclass for `EsmForMaskedLM` was missing; this adds it back!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20177/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20177", "html_url": "https://github.com/huggingface/transformers/pull/20177", "diff_url": "https://github.com/huggingface/transformers/pull/20177.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20177.patch", "merged_at": 1668522022000 }
https://api.github.com/repos/huggingface/transformers/issues/20176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20176/comments
https://api.github.com/repos/huggingface/transformers/issues/20176/events
https://github.com/huggingface/transformers/issues/20176
1,445,554,333
I_kwDOCUB6oc5WKWid
20,176
Add GPT-SW3 models to huggingface
{ "login": "ekgren", "id": 1921821, "node_id": "MDQ6VXNlcjE5MjE4MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1921821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekgren", "html_url": "https://github.com/ekgren", "followers_url": "https://api.github.com/users/ekgren/followers", "following_url": "https://api.github.com/users/ekgren/following{/other_user}", "gists_url": "https://api.github.com/users/ekgren/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekgren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekgren/subscriptions", "organizations_url": "https://api.github.com/users/ekgren/orgs", "repos_url": "https://api.github.com/users/ekgren/repos", "events_url": "https://api.github.com/users/ekgren/events{/privacy}", "received_events_url": "https://api.github.com/users/ekgren/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "PR merged! :partying_face: " ]
1,668
1,670
1,670
CONTRIBUTOR
null
### Model description At AI Sweden we are developing GPT models for the Nordic region. Languages include English, Swedish, Danish, Norwegian and Icelandic. The models are of the GPT family and range in size from 126M to 20B parameters. They are trained from scratch on a large corpus of 320B tokens, using the NeMo Megatron framework, and use a SentencePiece tokenizer. The weights are not shared yet; we intend to share them through Hugging Face, as well as publishing our training process and results. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Training framework: https://developer.nvidia.com/nemo/megatron
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20176/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20175/comments
https://api.github.com/repos/huggingface/transformers/issues/20175/events
https://github.com/huggingface/transformers/pull/20175
1,445,431,701
PR_kwDOCUB6oc5Cs5Fj
20,175
[ROC_BERT] Make CI happy
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20175). All of your documentation changes will be reflected on that endpoint.", "Hi @younesbelkada Thank you for trying make CI happy.\r\n\r\nBut I am not sure why we need `torch.allclose` for 2 integer tensors. This should be used for float tensors. For the integer outputs (here token ids), they should be equal, so `self.assertEqual` would be the one to use I think.", "Thanks @ydshieh for the heads up, indeed I believe we can use that. Let me update it", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20175). All of your documentation changes will be reflected on that endpoint.", "ahah merci beaucoup @ydshieh !! " ]
1,668
1,669
1,668
CONTRIBUTOR
null
# What does this PR do? Fixes a small slow test that was failing when trying to play with 8-bit conversion for BERT models. cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20175/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20175", "html_url": "https://github.com/huggingface/transformers/pull/20175", "diff_url": "https://github.com/huggingface/transformers/pull/20175.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20175.patch", "merged_at": 1668445465000 }
https://api.github.com/repos/huggingface/transformers/issues/20174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20174/comments
https://api.github.com/repos/huggingface/transformers/issues/20174/events
https://github.com/huggingface/transformers/pull/20174
1,445,424,186
PR_kwDOCUB6oc5Cs3eD
20,174
Add `accelerate` support for `ViT` family
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do?

This PR adds `accelerate` support for ViT models, therefore these models can be loaded in 8-bit as follows:

```
# pip install accelerate bitsandbytes
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384')
inputs = feature_extractor(images=image, return_tensors="pt")

model_8bit = ViTForImageClassification.from_pretrained('google/vit-large-patch32-384', device_map="auto", load_in_8bit=True)
outputs_8bit = model_8bit(**inputs)

logits = outputs_8bit.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model_8bit.config.id2label[predicted_class_idx])
```

This PR introduces the first 8-bit compatible vision model. The same script works for `deit` too. Putting the PR as a draft as I have a few questions! cc @NielsRogge @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20174/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20174", "html_url": "https://github.com/huggingface/transformers/pull/20174", "diff_url": "https://github.com/huggingface/transformers/pull/20174.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20174.patch", "merged_at": 1668506762000 }
https://api.github.com/repos/huggingface/transformers/issues/20173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20173/comments
https://api.github.com/repos/huggingface/transformers/issues/20173/events
https://github.com/huggingface/transformers/issues/20173
1,445,320,861
I_kwDOCUB6oc5WJdid
20,173
Misleading tensor type in code and documentation of Wav2Vec2ForPreTraining
{ "login": "PierreOrhan", "id": 38761938, "node_id": "MDQ6VXNlcjM4NzYxOTM4", "avatar_url": "https://avatars.githubusercontent.com/u/38761938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PierreOrhan", "html_url": "https://github.com/PierreOrhan", "followers_url": "https://api.github.com/users/PierreOrhan/followers", "following_url": "https://api.github.com/users/PierreOrhan/following{/other_user}", "gists_url": "https://api.github.com/users/PierreOrhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/PierreOrhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreOrhan/subscriptions", "organizations_url": "https://api.github.com/users/PierreOrhan/orgs", "repos_url": "https://api.github.com/users/PierreOrhan/repos", "events_url": "https://api.github.com/users/PierreOrhan/events{/privacy}", "received_events_url": "https://api.github.com/users/PierreOrhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @PierreOrhan, \r\n\r\nYou're entirely correct, the variable `sampled_negative_indices` is a tensor of integers of dim `(batch_size, sequence_length, num_negatives)` that specifies the positions (indices) of quantised vectors to use as negatives during pre-training, not a boolean mask. Would you like to open a PR to fix this? 🤗 You can tag me for a review!\r\n\r\nChanging the name of the arg `mask_time_indices` would be a breaking change, but we can certainly update the docstring to clarify this!", "Sure!\r\nWhile we are at it, in the original paper and fairseq implementation, there is a scaling of the feature extractor gradient by 0.1 for the base architecture. I can add that gradient scaling if you wish too, it really helped my trainings. This would require either to use a constant value (0.1, but since its not used for the Large model this might not be the best idea) or to change the config class of Wav2vec2 to allow the setting of this multiplier.", "This is specifically for pre-training right? We could modify the pre-training scripts accordingly!\r\n\r\nhttps://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining", "Hey @PierreOrhan! Let me know if you're still interested in opening a PR to fix this! It would be awesome to update the Wav2Vec2 code to prevent further silent bugs like this occurring for other users in the future!\r\n\r\nWe can also look into the gradient scaling in a separate PR if you want! Happy to help with this integration too (think it could be beneficial for training)", "Sure, I have this on my todo list and should be able to submit in a month or so (finishing a project right now), it will take me time since I never oppened a PR on a large repo so I need to understand the whole process.\r\nBest,", "Hey @PierreOrhan! Sounds good! Best of luck with finishing off your current project! \r\n\r\nExciting that this will be your first PR on a large repo! I'll be on hand to help you with this process to make it as easy as possible! We have a pretty comprehensive guide for opening a PR on transformers: https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md Feel free to have a skim through this guide to get a feel for the steps involved, and don't hesitate to ask any questions here or on a draft PR! More than happy to help!\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,679
1,679
NONE
null
Wav2Vec2ForPreTraining declares, in its forward signature, a torch.BoolTensor for sampled_negative_indices. https://github.com/huggingface/transformers/blob/cbbeca3d1733aa7c9b443af5ff231a5affcd8a1e/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1404 But as the example and the code then indicate, this tensor should not be a boolean mask; it should instead store the indices of the negative samples, so its expected type should be long. This was misleading and led to a silent bug in my code for quite a long time, as I would send a boolean tensor for the negative indices instead of a long tensor. If my reading of the code is correct, could we change the expected type of sampled_negative_indices? Maybe @patrickvonplaten (as it seems he implemented the code) could confirm my reading? It could be worth clarifying the name of mask_time_indices as well, since that one does expect a mask and therefore a boolean, not a long, tensor. It is also converted to a boolean tensor at the beginning of the forward call, so this could mislead others as well. Thank you for the implementation and help!
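A minimal sketch of the dtypes the forward pass actually expects, on a toy random batch; the shapes follow the pre-training example script, which uses a `_sample_negative_indices` helper to pick real negatives (the random indices below are purely illustrative):

```
import torch
from transformers import Wav2Vec2ForPreTraining

model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

batch_size, raw_audio_len = 2, 16000
input_values = torch.randn(batch_size, raw_audio_len)

# Length of the feature sequence after the convolutional feature extractor.
seq_len = int(model._get_feat_extract_output_lengths(raw_audio_len))

# mask_time_indices really is a boolean mask over time steps.
mask_time_indices = torch.zeros(batch_size, seq_len, dtype=torch.bool)
mask_time_indices[:, : seq_len // 2] = True

# sampled_negative_indices stores positions of negatives, so it must be a
# long tensor of shape (batch_size, seq_len, num_negatives), not a bool mask.
num_negatives = model.config.num_negatives
sampled_negative_indices = torch.randint(
    0, seq_len, (batch_size, seq_len, num_negatives), dtype=torch.long
)

outputs = model(
    input_values,
    mask_time_indices=mask_time_indices,
    sampled_negative_indices=sampled_negative_indices,
)
print(outputs.loss)
```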
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20173/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20172/comments
https://api.github.com/repos/huggingface/transformers/issues/20172/events
https://github.com/huggingface/transformers/pull/20172
1,445,229,031
PR_kwDOCUB6oc5CsNMC
20,172
Remove Optional[PILImageResampling] typing
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20172). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Resolves a bug, resulting from type checks, when Pillow < 9.1.0 is used. With lower versions of Pillow, `PILImageResampling` is an alias for the `PIL.Image` module, c.f. [definition here](https://github.com/huggingface/transformers/blob/d3c05666798bd8fcc7a03564436e5100b080a5df/src/transformers/image_utils.py#L48). When defining `var: Optional[obj] = None`, the `obj` specified in `Optional` can't be a module, so it fails e.g. [here](https://github.com/huggingface/transformers/blob/d3c05666798bd8fcc7a03564436e5100b080a5df/src/transformers/models/segformer/image_processing_segformer.py#L231). The PR also replaces any remaining `PIL.Image.Resampling` with `PILImageResampling` in the codebase. Another option is to define a new type; I decided to go for the fastest resolution - LMK if you think the other is better. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20172/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20172", "html_url": "https://github.com/huggingface/transformers/pull/20172", "diff_url": "https://github.com/huggingface/transformers/pull/20172.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20172.patch", "merged_at": 1668185760000 }
https://api.github.com/repos/huggingface/transformers/issues/20171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20171/comments
https://api.github.com/repos/huggingface/transformers/issues/20171/events
https://github.com/huggingface/transformers/issues/20171
1,445,203,127
I_kwDOCUB6oc5WJAy3
20,171
How to train transformer using my own data?
{ "login": "Arsmart1", "id": 49458769, "node_id": "MDQ6VXNlcjQ5NDU4NzY5", "avatar_url": "https://avatars.githubusercontent.com/u/49458769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arsmart1", "html_url": "https://github.com/Arsmart1", "followers_url": "https://api.github.com/users/Arsmart1/followers", "following_url": "https://api.github.com/users/Arsmart1/following{/other_user}", "gists_url": "https://api.github.com/users/Arsmart1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arsmart1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arsmart1/subscriptions", "organizations_url": "https://api.github.com/users/Arsmart1/orgs", "repos_url": "https://api.github.com/users/Arsmart1/repos", "events_url": "https://api.github.com/users/Arsmart1/events{/privacy}", "received_events_url": "https://api.github.com/users/Arsmart1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Here is a tutorial in the transformers Doc on how to fine tune a pretrained model using the Transformers library (https://huggingface.co/docs/transformers/training), you can also check the Hugging face course https://huggingface.co/course/chapter1/1, for more help you will need to precise what do you want to do exactly\r\n\r\nBesides, I noticed that you opened another issue #20187 for the exact same question. You should always avoid doing that you're just making the maintainers job harder, you should close your new issue. Also, I personally think it will be better to ask these kinds of questions on the Hugging Face's forum (https://discuss.huggingface.co/) or discord server (https://discord.com/invite/JfAtkvEtRb).", "Check out the free HuggingFace course: https://hf.co/course." ]
1,668
1,668
1,668
NONE
null
### Feature request I have not seen a tutorial for training on my own data... Can I? I am a new AI learner. Thank you!!! ### Motivation I have not seen a tutorial for training on my own data... Can I? I am a new AI learner. Thank you!!! ### Your contribution I have not seen a tutorial for training on my own data... Can I? I am a new AI learner. Thank you!!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20171/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20170/comments
https://api.github.com/repos/huggingface/transformers/issues/20170/events
https://github.com/huggingface/transformers/issues/20170
1,445,201,736
I_kwDOCUB6oc5WJAdI
20,170
I am getting below error message when loading XLM 17 Language pretrained model.
{ "login": "Allricstan", "id": 79781259, "node_id": "MDQ6VXNlcjc5NzgxMjU5", "avatar_url": "https://avatars.githubusercontent.com/u/79781259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Allricstan", "html_url": "https://github.com/Allricstan", "followers_url": "https://api.github.com/users/Allricstan/followers", "following_url": "https://api.github.com/users/Allricstan/following{/other_user}", "gists_url": "https://api.github.com/users/Allricstan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Allricstan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Allricstan/subscriptions", "organizations_url": "https://api.github.com/users/Allricstan/orgs", "repos_url": "https://api.github.com/users/Allricstan/repos", "events_url": "https://api.github.com/users/Allricstan/events{/privacy}", "received_events_url": "https://api.github.com/users/Allricstan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can anyone help me how to use this model for language translation?\r\n", "You can disregard the warning, it's issued wrongly (and it's been fixed on main).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
Some weights of XLMWithLMHeadModel were not initialized from the model checkpoint at xlm-mlm-17-1280 and are newly initialized: ['transformer.position_ids'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20169/comments
https://api.github.com/repos/huggingface/transformers/issues/20169/events
https://github.com/huggingface/transformers/issues/20169
1,444,774,020
I_kwDOCUB6oc5WHYCE
20,169
trocr deepspeed
{ "login": "dwyane1023", "id": 32123383, "node_id": "MDQ6VXNlcjMyMTIzMzgz", "avatar_url": "https://avatars.githubusercontent.com/u/32123383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyane1023", "html_url": "https://github.com/dwyane1023", "followers_url": "https://api.github.com/users/dwyane1023/followers", "following_url": "https://api.github.com/users/dwyane1023/following{/other_user}", "gists_url": "https://api.github.com/users/dwyane1023/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyane1023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyane1023/subscriptions", "organizations_url": "https://api.github.com/users/dwyane1023/orgs", "repos_url": "https://api.github.com/users/dwyane1023/repos", "events_url": "https://api.github.com/users/dwyane1023/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyane1023/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "issue resolved by\r\n`hidden_size = 768`\r\n`setattr(model.config, \"hidden_size\", hidden_size)`\r\n\r\nrefer to this #15526" ]
1,668
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.10.14-1.el7.elrepo.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Hi @stas00, ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am using the Hugging Face integrated DeepSpeed to train TrOCR in JupyterLab. Both load methods hit the same issue: `model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained()` `model = VisionEncoderDecoderModel.from_pretrained()` ### Expected behavior However, I hit the error `AttributeError: 'VisionEncoderDecoderConfig' object has no attribute 'hidden_size'`. Do we have any configuration to fix this?
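A minimal sketch of the workaround from the resolution comment above, applied before handing the model to the Trainer with DeepSpeed enabled; the checkpoint name is an example, and the value 768 is an assumed decoder hidden size for the base TrOCR models (adjust to your architecture):

```
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# VisionEncoderDecoderConfig has no top-level hidden_size attribute, which the
# DeepSpeed integration reads when resolving "auto" values in its config,
# hence the AttributeError. Expose one manually as a workaround (see #15526).
hidden_size = 768  # assumption: decoder hidden size of the base model
setattr(model.config, "hidden_size", hidden_size)
```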
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20169/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20168/comments
https://api.github.com/repos/huggingface/transformers/issues/20168/events
https://github.com/huggingface/transformers/pull/20168
1,444,491,418
PR_kwDOCUB6oc5CpvdN
20,168
Enable PyTorch 1.13
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sgugger , there is a PR\r\n\r\nhttps://github.com/huggingface/transformers/pull/20149\r\n\r\nbut we forgot to ping you.\r\n\r\n(Oh, I just saw you are aware of that PR)", "Hi @sgugger :) - One question, you have removed `torch-scatter` dependency but at the same time you have added decorator `require_scatter` for `test_pt_tf_model_equivalence` test. Is that correct or am I just missing something?", "The scatter dependency is only removed from the CPU runners on circleCI, it's not removed from the library byt his PR, that's your job ;-) . When testing in our other setups that include the `scatter` dependency, the tests will be run (until your PR is merged and the dep is removed entirely)." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? This PR enables PyTorch 1.13 for Transformers so we can start adding functionality like safer loading with `torch.load`. Since there are no wheels for torch scatter, this comes at the price of uninstalling `torch-scatter`. However the PR to move away from this dep and use the PyTorch core ops seems well under way, so skipping the TAPAS tests for now until the PR is merged does not seem like a heavy price to pay (cc @NielsRogge for information). A couple of tests are still failing, which are all torch FX tests (cc @michaelbenayoun, see failing job [here](https://app.circleci.com/pipelines/github/huggingface/transformers/51241/workflows/4a2c365b-8721-4c2c-adac-54f0fd7cf9c8/jobs/613048)). I'm skipping them and we can fix them next week in a followup PR.
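For context on the `torch.load` remark, the 1.13 feature being referred to is presumably the new `weights_only` flag, which restricts unpickling to tensor data; a tiny sketch (the file name is illustrative):

```
import torch

torch.save({"weight": torch.ones(2, 2)}, "checkpoint.bin")

# New in PyTorch 1.13: refuse to unpickle arbitrary Python objects,
# mitigating the code-execution risk of loading untrusted checkpoints.
state_dict = torch.load("checkpoint.bin", weights_only=True)
print(state_dict["weight"])
```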
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20168/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20168/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20168", "html_url": "https://github.com/huggingface/transformers/pull/20168", "diff_url": "https://github.com/huggingface/transformers/pull/20168.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20168.patch", "merged_at": 1668529989000 }
https://api.github.com/repos/huggingface/transformers/issues/20167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20167/comments
https://api.github.com/repos/huggingface/transformers/issues/20167/events
https://github.com/huggingface/transformers/pull/20167
1,444,355,461
PR_kwDOCUB6oc5CpR5m
20,167
Fix object-detection bug (height, width inversion).
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20167). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Fixes a bug I didn't catch: height and width were inverted. https://huggingface.co/Narsil/layoutlm-funsd (Contains the fix) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
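For context on how this class of bug creeps in, a small illustrative snippet (not the pipeline code itself): PIL and torch disagree on dimension order, so one swapped tuple silently rescales boxes along the wrong axes.

```
import torch
from PIL import Image

image = Image.new("RGB", (640, 480))
width, height = image.size  # PIL reports (width, height)

tensor = torch.zeros(3, height, width)  # torch images are (channels, height, width)

# Post-processing target sizes in transformers are conventionally (height, width);
# passing (width, height) here is exactly the kind of inversion fixed by this PR.
target_size = (height, width)
print(tensor.shape, target_size)
```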
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20167/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20167", "html_url": "https://github.com/huggingface/transformers/pull/20167", "diff_url": "https://github.com/huggingface/transformers/pull/20167.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20167.patch", "merged_at": 1668158089000 }
https://api.github.com/repos/huggingface/transformers/issues/20166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20166/comments
https://api.github.com/repos/huggingface/transformers/issues/20166/events
https://github.com/huggingface/transformers/pull/20166
1,444,208,418
PR_kwDOCUB6oc5Coxs0
20,166
Fix arg names for our models
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
MEMBER
null
Some arg names on the `infer` method for `ESMFold` don't fit our port, and this PR updates/removes them. In the future these methods will probably be moved to the tokenizer/processor, but this quick fix will at least get them working for now! Fixes #20120
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20166/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20166", "html_url": "https://github.com/huggingface/transformers/pull/20166", "diff_url": "https://github.com/huggingface/transformers/pull/20166.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20166.patch", "merged_at": 1668098879000 }
https://api.github.com/repos/huggingface/transformers/issues/20165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20165/comments
https://api.github.com/repos/huggingface/transformers/issues/20165/events
https://github.com/huggingface/transformers/issues/20165
1,444,121,310
I_kwDOCUB6oc5WE4re
20,165
Adding a siamese text similarity inference pipeline
{ "login": "rohit1998", "id": 18055780, "node_id": "MDQ6VXNlcjE4MDU1Nzgw", "avatar_url": "https://avatars.githubusercontent.com/u/18055780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohit1998", "html_url": "https://github.com/rohit1998", "followers_url": "https://api.github.com/users/rohit1998/followers", "following_url": "https://api.github.com/users/rohit1998/following{/other_user}", "gists_url": "https://api.github.com/users/rohit1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohit1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohit1998/subscriptions", "organizations_url": "https://api.github.com/users/rohit1998/orgs", "repos_url": "https://api.github.com/users/rohit1998/repos", "events_url": "https://api.github.com/users/rohit1998/events{/privacy}", "received_events_url": "https://api.github.com/users/rohit1998/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@Narsil , please have a look 😇", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, sorry for failing to see those messages.\r\n\r\nIn principle there's no issue with adding such a pipeline, however, do `transformers` have such models ?\r\n\r\nIdeally pipelines do not reflect architecture (here siamese) only the input and outcome, so something like `text-similarity` comes to mind. (This is covered by `sentence-transformers` typically).", "I dont think transformers has any siamese models right now. Package currently lacks a way to train `text-similarity` directly too. \r\n\r\nSentence-transformers cover it but i has its own issues. You cannot use other pipeline abstractions or TrainerAPI that are part of transformers when using sentence-transformers. Package doesn't even support multi-gpu training out of the box. So I believe there is a value to getting them to transformers in future.\r\n\r\nWe need to send a pair of text instead of text, and we need to tokenise the both texts separately. Also support datacollator separately for text pairs. Since pipeline supports these operations i believe pipeline would need change. I haven't looked deep into how `sentence-transformers` does it.\r\n\r\nFor now, I have moved to using pytorch-lighting for my training and inference, so I dont have immediate need for this now. We can mark this issue closed if it doesnt align with package direction right now. Else i can make a design and contribute some basic pipeline and training framework to add support after design gets a go ahead.", "Tagging @sgugger for information.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Siamese training & inference would be very useful. It is used in SetFit for instance." ]
1,668
1,700
1,676
CONTRIBUTOR
null
### Feature request

Siamese networks have been really popular for a variety of tasks in NLP. I am wondering if we can build an inference pipeline class for siamese-like architectures. The pipeline can look something like:

```
import torch
from transformers import Pipeline


class SiamesePipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, texts):
        # Tokenize each side of the pair separately.
        return (
            self.tokenizer(texts['text'], return_tensors=self.framework),
            self.tokenizer(texts['text_pair'], return_tensors=self.framework),
        )

    def _forward(self, model_inputs):
        output_text = self.model(**model_inputs[0])
        output_text_pair = self.model(**model_inputs[1])
        # Use the first ([CLS]) token embedding of each side as the sentence embedding.
        sentence_embedding_text = output_text['last_hidden_state'][:, 0, :]
        sentence_embedding_text_pair = output_text_pair['last_hidden_state'][:, 0, :]
        cos = torch.nn.functional.cosine_similarity(sentence_embedding_text, sentence_embedding_text_pair)
        return cos

    def postprocess(self, model_outputs):
        return model_outputs.item()
```

Please note that `preprocess` takes in a dictionary of text pairs, e.g. `{'text': 'I like you.', 'text_pair': 'I love you.'}`. We would also need to change the data collator to handle the tuples for the [base](https://github.com/huggingface/transformers/blob/e0d7c831c7691a0069a57ba03993a8d531343de1/src/transformers/pipelines/base.py#L172) pipeline:

```
def inner_wrapper(items):
    # Text pairs arrive as tuples; collate each side of the pair separately.
    if isinstance(items[0], tuple):
        items_text, items_text_pair = zip(*items)
        return inner(items_text), inner(items_text_pair)
    else:
        return inner(items)

return inner_wrapper
```

Inference will look something like:

```
from datasets import Dataset
from tqdm import tqdm
from transformers import AutoModel, AutoTokenizer
from transformers.pipelines.pt_utils import KeyPairDataset

tokenizer = AutoTokenizer.from_pretrained(input_path_model)
model = AutoModel.from_pretrained(input_path_model)

dataset = Dataset.from_pandas(dataset_df[['text', 'text_pair']])

pipe = SiamesePipeline(
    model=model,
    tokenizer=tokenizer,
    device=model_params_parsed['device'],
    num_workers=4)

score = list(tqdm(pipe(
    KeyPairDataset(dataset, 'text', 'text_pair'),
    batch_size=model_params_parsed['batch_size']), total=len(dataset)))
```

or without a dataset: `pipe({'text': 'I like you.', 'text_pair': 'I love you.'})`

Would love to know if this approach is good and in line with pipeline conventions. If so, does it make sense to add this change permanently?

### Motivation

I am using this solution for my siamese experiments.

### Your contribution

I can raise a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20165/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/20165/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20164/comments
https://api.github.com/repos/huggingface/transformers/issues/20164/events
https://github.com/huggingface/transformers/pull/20164
1,444,034,353
PR_kwDOCUB6oc5CoLjh
20,164
doc comment fix: Args was in wrong place
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20164). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Small fix for `Args:` being in the wrong place in the doc comments. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20164/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20164/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20164", "html_url": "https://github.com/huggingface/transformers/pull/20164", "diff_url": "https://github.com/huggingface/transformers/pull/20164.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20164.patch", "merged_at": 1668092545000 }
https://api.github.com/repos/huggingface/transformers/issues/20163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20163/comments
https://api.github.com/repos/huggingface/transformers/issues/20163/events
https://github.com/huggingface/transformers/issues/20163
1,444,005,941
I_kwDOCUB6oc5WEcg1
20,163
OWLViT - Allow text model to compute text embeddings only once
{ "login": "ekazakos", "id": 20310086, "node_id": "MDQ6VXNlcjIwMzEwMDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20310086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekazakos", "html_url": "https://github.com/ekazakos", "followers_url": "https://api.github.com/users/ekazakos/followers", "following_url": "https://api.github.com/users/ekazakos/following{/other_user}", "gists_url": "https://api.github.com/users/ekazakos/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekazakos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekazakos/subscriptions", "organizations_url": "https://api.github.com/users/ekazakos/orgs", "repos_url": "https://api.github.com/users/ekazakos/repos", "events_url": "https://api.github.com/users/ekazakos/events{/privacy}", "received_events_url": "https://api.github.com/users/ekazakos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This is a great feature request and can help in improving the usability of OWL-ViT. Please do consider.\r\n\r\ncc @alaradirik ", "Hi @amitjena40, sorry for my late reply! \r\n\r\nWe looked into this issue and while we think it would make it easier to use OWL-ViT at a larger scale, it is a big breaking change as it requires introducing new arguments and rearranging multiple components. " ]
1,668
1,677
1,671
NONE
null
### Feature request Hi again, Currently, the OWL-ViT pipeline works by having a list of queries **per example** (e.g. for classification problems with 10 classes there would be 10 queries for each image). I understand that the rationale behind this is to allow training with multiple datasets simultaneously, where each dataset has a different set of classes and where images from different datasets may be in the batch at the same time. Nevertheless, in many settings (in most, actually), we train/test using a single training/test set, and calculating the text embeddings once per example causes redundant computation. So, could you provide functionality that allows the text embeddings to be calculated only once? By once I mean **one time for each of the queries that correspond to the dataset classes**, with the result used for all examples in the batch rather than being recalculated for each batch element. Thank you! cc @alaradirik
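A minimal sketch of the kind of caching this request is after, using the public `get_text_features` API; note that this only shows how the class-query embeddings could be computed a single time. As far as I can tell, `OwlViTForObjectDetection` currently exposes no argument for injecting such precomputed embeddings, which is precisely what the request asks for:

```
import torch
from transformers import OwlViTProcessor, OwlViTModel

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")

# One query per dataset class, encoded a single time for the whole run.
class_queries = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text_inputs = processor(text=class_queries, return_tensors="pt")

with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)  # (num_classes, embed_dim)

# text_embeds can now be reused for every image in the dataset instead of
# being recomputed per example, which is the redundancy described above.
print(text_embeds.shape)
```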
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20163/timeline
completed
null
null
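A minimal sketch of the kind of reuse this request describes, using the existing `OwlViTModel.get_text_features`/`get_image_features` API; the `image_batches` iterable is a hypothetical placeholder, and the detection head itself would still need changes to accept precomputed embeddings, which is the point of the feature request:

```python
import torch
from transformers import OwlViTProcessor, OwlViTModel

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")

# One set of class queries for the whole dataset, encoded a single time.
queries = ["a photo of a cat", "a photo of a dog"]
text_inputs = processor(text=queries, return_tensors="pt", padding=True)
with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)

# Reuse the cached text embeddings for every image batch afterwards.
for pixel_values in image_batches:  # hypothetical iterable of preprocessed pixel batches
    with torch.no_grad():
        image_embeds = model.get_image_features(pixel_values=pixel_values)
    # Similarity between each image and the precomputed queries.
    logits = image_embeds @ text_embeds.T
```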
https://api.github.com/repos/huggingface/transformers/issues/20162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20162/comments
https://api.github.com/repos/huggingface/transformers/issues/20162/events
https://github.com/huggingface/transformers/pull/20162
1,443,945,383
PR_kwDOCUB6oc5Cn4BE
20,162
[WHISPER] Update modeling tests
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.", "I agree, it is indeed better to just use `add_special_tokens=False`! Slipped my mind 😉 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.", "Ping me again once when you think it's ready :) ", "Looks like we still need to add `add_special_tokens=False` to the tf test @ArthurZucker!", "Yeah on it 🤗", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.", "Ah there's small bug with this, the kwargs is passed to `self.pad` in the feature extractor. Gonna fix that", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20162). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Fixes the tests after the update of the tokenizer, which added suffix and prefix tokens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20162/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20162", "html_url": "https://github.com/huggingface/transformers/pull/20162", "diff_url": "https://github.com/huggingface/transformers/pull/20162.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20162.patch", "merged_at": 1668506699000 }
https://api.github.com/repos/huggingface/transformers/issues/20161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20161/comments
https://api.github.com/repos/huggingface/transformers/issues/20161/events
https://github.com/huggingface/transformers/issues/20161
1,443,781,711
I_kwDOCUB6oc5WDlxP
20,161
how to fine tune custom dataset using coreference pretrained model
{ "login": "SavitaKumariPandit", "id": 46734351, "node_id": "MDQ6VXNlcjQ2NzM0MzUx", "avatar_url": "https://avatars.githubusercontent.com/u/46734351?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SavitaKumariPandit", "html_url": "https://github.com/SavitaKumariPandit", "followers_url": "https://api.github.com/users/SavitaKumariPandit/followers", "following_url": "https://api.github.com/users/SavitaKumariPandit/following{/other_user}", "gists_url": "https://api.github.com/users/SavitaKumariPandit/gists{/gist_id}", "starred_url": "https://api.github.com/users/SavitaKumariPandit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SavitaKumariPandit/subscriptions", "organizations_url": "https://api.github.com/users/SavitaKumariPandit/orgs", "repos_url": "https://api.github.com/users/SavitaKumariPandit/repos", "events_url": "https://api.github.com/users/SavitaKumariPandit/events{/privacy}", "received_events_url": "https://api.github.com/users/SavitaKumariPandit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nFor that I'll refer to the training guide of Sentence Transformers: https://www.sbert.net/docs/training/overview.html.", "> Hi,\r\n> \r\n> For that I'll refer to the training guide of Sentence Transformers: https://www.sbert.net/docs/training/overview.html.\r\n\r\nHi @NielsRogge , I have to use my own dataset for co-reference resolution task, so above mention suggestion will work on pretrained model of \"nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large\". \r\nafter finetune I got output folder which contains 1_Pooling ,config.json, config_sentence_transformers.json,eval,modules.json,pytorch_model.bin,README.md,sentence_bert_config.json\r\nsentencepiece.bpe.model,special_tokens_map.json, tokenizer.json, tokenizer_config.json and this folder I saved as zip file and load path of fine tune model for prediction but when use for prediction it is not working" ]
1,668
1,668
1,668
NONE
null
How to fine-tune the pre-trained model "nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large", available on the Hugging Face Hub, with my own custom dataset?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20161/timeline
completed
null
null
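A minimal fine-tuning sketch following the Sentence Transformers training guide linked in the thread above; the training pairs and similarity labels are placeholders, not part of the original discussion:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large")

# Placeholder training pairs with similarity scores in [0, 1].
train_examples = [
    InputExample(texts=["sentence A", "sentence B"], label=0.9),
    InputExample(texts=["sentence C", "sentence D"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)

# Save, then reload later with SentenceTransformer("finetuned-minilm").
model.save("finetuned-minilm")
```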
https://api.github.com/repos/huggingface/transformers/issues/20160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20160/comments
https://api.github.com/repos/huggingface/transformers/issues/20160/events
https://github.com/huggingface/transformers/pull/20160
1,443,706,666
PR_kwDOCUB6oc5CnDpL
20,160
Add segmentation + object detection image processors
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20160). All of your documentation changes will be reflected on that endpoint.", "@NielsRogge @sgugger @alaradirik Sorry for the previous issues with the docstrings. They should all be resolved now." ]
1,668
1,669
1,669
COLLABORATOR
null
# What does this PR do? Adds image processors for DETR, Deformable DETR, Conditional DETR, YOLOS and MaskFormer, as many of the image processor methods are copied from DETR. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20160/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/20160/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20160", "html_url": "https://github.com/huggingface/transformers/pull/20160", "diff_url": "https://github.com/huggingface/transformers/pull/20160.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20160.patch", "merged_at": 1669803843000 }
https://api.github.com/repos/huggingface/transformers/issues/20159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20159/comments
https://api.github.com/repos/huggingface/transformers/issues/20159/events
https://github.com/huggingface/transformers/pull/20159
1,443,698,309
PR_kwDOCUB6oc5CnB0K
20,159
Generate: fix TF doctests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20159). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
MEMBER
null
# What does this PR do? Fixes doctests in `src/transformers/generation/tf_utils.py` that were not passing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20159/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20159", "html_url": "https://github.com/huggingface/transformers/pull/20159", "diff_url": "https://github.com/huggingface/transformers/pull/20159.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20159.patch", "merged_at": 1668094239000 }
https://api.github.com/repos/huggingface/transformers/issues/20158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20158/comments
https://api.github.com/repos/huggingface/transformers/issues/20158/events
https://github.com/huggingface/transformers/pull/20158
1,443,590,391
PR_kwDOCUB6oc5CmqLA
20,158
[MaskFormer] Add doc tests
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20158). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR fixes all code snippets for MaskFormer, and makes sure they are tested. None of them actually ran without issues. It makes a distinction between semantic and panoptic segmentation. Fixes #20132
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20158/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20158", "html_url": "https://github.com/huggingface/transformers/pull/20158", "diff_url": "https://github.com/huggingface/transformers/pull/20158.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20158.patch", "merged_at": 1668090331000 }
https://api.github.com/repos/huggingface/transformers/issues/20157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20157/comments
https://api.github.com/repos/huggingface/transformers/issues/20157/events
https://github.com/huggingface/transformers/pull/20157
1,443,546,089
PR_kwDOCUB6oc5Cmgfp
20,157
Update `OnnxConfig.generate_dummy_inputs` to check `ImageProcessingMixin`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "We still have `OwlViTFeatureExtractor` and no `OwlViTImageProcessor`.\r\n\r\n@amyeroberts I guess it is a miss, right? If so, could you work on adding `OwlViTImageProcessor` in another PR, thanks.", "@ydshieh Yes - the `OwlViTImageProcessor` doesn't exist yet (along with other object detection / segmentation models). These will be added soon. There's two pending PRs:\r\n* Adding transforms: https://github.com/huggingface/transformers/pull/20003\r\n* Adding these models image processors: https://github.com/huggingface/transformers/pull/20160\r\n\r\nI believe this should be only affect OwlViT - as it has a processor class which contains both the feature extractor and the image processor. As you've done in the PR - I think maintaining a check for both `FeatureExtactionMixin` and `ImageProcessingMixin` should work. We can then remove the check for \r\n`elif isinstance(preprocessor, FeatureExtractionMixin) and preprocessor.model_input_names[0] == \"pixel_values\":` \r\n" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? As we have a new class `ImageProcessingMixin`, the method `OnnxConfig.generate_dummy_inputs` needs to check this case, otherwise it can't find an if/else branch to create dummy inputs. See https://github.com/huggingface/transformers/blob/7ec1dc8817a99d16e6f9e0ab94ce4027ef74b72d/src/transformers/onnx/config.py#L371 The current failing error is: ```bash AssertionError: beit, default -> Unable to generate dummy inputs for the model. Please provide a tokenizer or a preprocessor. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20157/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20157", "html_url": "https://github.com/huggingface/transformers/pull/20157", "diff_url": "https://github.com/huggingface/transformers/pull/20157.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20157.patch", "merged_at": 1668092692000 }
https://api.github.com/repos/huggingface/transformers/issues/20156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20156/comments
https://api.github.com/repos/huggingface/transformers/issues/20156/events
https://github.com/huggingface/transformers/issues/20156
1,443,517,388
I_kwDOCUB6oc5WClPM
20,156
Models can't be loaded after updating to Python 3.10
{ "login": "NeuroinformaticaFBF", "id": 84315341, "node_id": "MDQ6VXNlcjg0MzE1MzQx", "avatar_url": "https://avatars.githubusercontent.com/u/84315341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NeuroinformaticaFBF", "html_url": "https://github.com/NeuroinformaticaFBF", "followers_url": "https://api.github.com/users/NeuroinformaticaFBF/followers", "following_url": "https://api.github.com/users/NeuroinformaticaFBF/following{/other_user}", "gists_url": "https://api.github.com/users/NeuroinformaticaFBF/gists{/gist_id}", "starred_url": "https://api.github.com/users/NeuroinformaticaFBF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeuroinformaticaFBF/subscriptions", "organizations_url": "https://api.github.com/users/NeuroinformaticaFBF/orgs", "repos_url": "https://api.github.com/users/NeuroinformaticaFBF/repos", "events_url": "https://api.github.com/users/NeuroinformaticaFBF/events{/privacy}", "received_events_url": "https://api.github.com/users/NeuroinformaticaFBF/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @NeuroinformaticaFBF,\r\n\r\nThanks a lot for this issue! The issue seems to be coming from an external dependency [sentencepiece](https://github.com/google/sentencepiece) which is using protobuf . Can you share with us the versions of the following packages please :pray: ?\r\n\r\nFor example with:\r\n```\r\npip freeze | grep \"protobuf|sentencepiece|tokenizers\"\r\n```", "Yes of course.\r\n\r\nThese are the versions:\r\n\r\n- `protobuf` = `3.0.0`\r\n- `tokenizers` = `0.13.2`\r\n- `sentencepiece` = `0.1.97`", "I can't have an env with python 3.10.8 right now, but the first thing I would want to try is to upgrade protobuf to its latest version which is `4.21.9` :relaxed: ", "I installed version `4.21.9`. That changed the error, which was:\r\n\r\n```\r\nTypeError: Descriptors cannot not be created directly.\r\nIf this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.\r\nIf you cannot immediately regenerate your protos, some other possible workarounds are:\r\n 1. Downgrade the protobuf package to 3.20.x or lower.\r\n 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).\r\n\r\nMore information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates\r\n```\r\n\r\nDowngrading `protobuf` to version `3.20.0` fixed it!\r\nMany thanks for the quick help 👍🏻 ", "None of these things worked for me. In my case, I could downgrade Python to 3.9 version, without causing any other issue in my code. Note that this issue only happened to me with the Pytorch backend. Tensforflow models worked fine." ]
1,668
1,707
1,668
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.27 - Python version: 3.10.8 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @SaulLu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I updated Python to version `3.10.8`. Note that I use JupyterLab. I had to re-install a lot of packages (note: `transformers` is `4.24.0`). At first, I got an error message that the fast tokenizer couldn't be loaded (sorry to be this vague, I didn't track it), so I updated some packages. Now I get an error when I try to load the tokenizer of [this model](https://huggingface.co/deepset/xlm-roberta-large-squad2), and I am not able to overcome it. Steps to reproduce the behavior: 1. Update Python to 3.10.8 2. Update JupyterLab and related libraries 3. Run the following code: ``` # Import libraries from transformers import pipeline, AutoTokenizer # Define checkpoint model_checkpoint = 'deepset/xlm-roberta-large-squad2' # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) ``` I tried several solutions ([this](https://stackoverflow.com/questions/70943244/attributeerror-module-collections-has-no-attribute-mutablemapping) and [this](https://stackoverflow.com/questions/69512672/getting-attributeerror-module-collections-has-no-attribute-mutablemapping-w)) but none seem to work. [Here](https://github.com/googleapis/google-auth-library-python/pull/419) they suggest I should change `collections.Mapping` into `collections.abc.Mapping`, but I wouldn't know where to do it. Another possible solution is downgrading Python to 3.9, but I would like to keep that as a last resort. Many thanks for your help ### Expected behavior Tokenizer should be loaded. Instead, I get this error: ``` AttributeError Traceback (most recent call last) Cell In [3], line 2 1 # Tokenizer ----> 2 tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) File ~/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:637, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 635 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 636 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 637 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 638 else: 639 if tokenizer_class_py is not None: File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1777, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1774 else: 1775 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}") -> 1777 return cls._from_pretrained( 1778 resolved_vocab_files, 1779 pretrained_model_name_or_path, 1780 init_configuration, 1781 *init_inputs, 1782 use_auth_token=use_auth_token, 1783 cache_dir=cache_dir, 1784 local_files_only=local_files_only, 1785 _commit_hash=commit_hash, 1786 **kwargs, 1787 ) File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1932, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, local_files_only, _commit_hash, *init_inputs, **kwargs) 1930 # Instantiate tokenizer. 1931 try: -> 1932 tokenizer = cls(*init_inputs, **init_kwargs) 1933 except OSError: 1934 raise OSError( 1935 "Unable to load vocabulary from file. " 1936 "Please check that the provided vocabulary is accessible and not corrupted." 1937 ) File ~/.local/lib/python3.10/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py:155, in XLMRobertaTokenizerFast.__init__(self, vocab_file, tokenizer_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, **kwargs) 139 def __init__( 140 self, 141 vocab_file=None, (...) 151 ): 152 # Mask token behave like a normal word, i.e. include the space before it 153 mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token --> 155 super().__init__( 156 vocab_file, 157 tokenizer_file=tokenizer_file, 158 bos_token=bos_token, 159 eos_token=eos_token, 160 sep_token=sep_token, 161 cls_token=cls_token, 162 unk_token=unk_token, 163 pad_token=pad_token, 164 mask_token=mask_token, 165 **kwargs, 166 ) 168 self.vocab_file = vocab_file 169 self.can_save_slow_tokenizer = False if not self.vocab_file else True File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py:114, in PreTrainedTokenizerFast.__init__(self, *args, **kwargs) 111 fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) 112 elif slow_tokenizer is not None: 113 # We need to convert a slow tokenizer to build the backend --> 114 fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) 115 elif self.slow_tokenizer_class is not None: 116 # We need to create and convert a slow tokenizer to build the backend 117 slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py:1162, in convert_slow_tokenizer(transformer_tokenizer) 1154 raise ValueError( 1155 f"An instance of tokenizer class {tokenizer_class_name} cannot be converted in a Fast tokenizer instance." 1156 " No converter was found. Currently available slow->fast convertors:" 1157 f" {list(SLOW_TO_FAST_CONVERTERS.keys())}" 1158 ) 1160 converter_class = SLOW_TO_FAST_CONVERTERS[tokenizer_class_name] -> 1162 return converter_class(transformer_tokenizer).converted() File ~/.local/lib/python3.10/site-packages/transformers/convert_slow_tokenizer.py:438, in SpmConverter.__init__(self, *args) 434 requires_backends(self, "protobuf") 436 super().__init__(*args) --> 438 from .utils import sentencepiece_model_pb2 as model_pb2 440 m = model_pb2.ModelProto() 441 with open(self.original_tokenizer.vocab_file, "rb") as f: File ~/.local/lib/python3.10/site-packages/transformers/utils/sentencepiece_model_pb2.py:20 18 from google.protobuf import descriptor as _descriptor 19 from google.protobuf import message as _message ---> 20 from google.protobuf import reflection as _reflection 21 from google.protobuf import symbol_database as _symbol_database 24 # @@protoc_insertion_point(imports) File /usr/lib/python3/dist-packages/google/protobuf/reflection.py:58 56 from google.protobuf.pyext import cpp_message as message_impl 57 else: ---> 58 from google.protobuf.internal import python_message as message_impl 60 # The type of all Message classes. 61 # Part of the public interface, but normally only used by message factories. 62 GeneratedProtocolMessageType = message_impl.GeneratedProtocolMessageType File /usr/lib/python3/dist-packages/google/protobuf/internal/python_message.py:69 66 import copyreg as copyreg 68 # We use "as" to avoid name collisions with variables. ---> 69 from google.protobuf.internal import containers 70 from google.protobuf.internal import decoder 71 from google.protobuf.internal import encoder File /usr/lib/python3/dist-packages/google/protobuf/internal/containers.py:182 177 collections.MutableMapping.register(MutableMapping) 179 else: 180 # In Python 3 we can just use MutableMapping directly, because it defines 181 # __slots__. --> 182 MutableMapping = collections.MutableMapping 185 class BaseContainer(object): 187 """Base container class.""" AttributeError: module 'collections' has no attribute 'MutableMapping' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20156/timeline
completed
null
null
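Both workarounds that surfaced in the thread above, condensed into a hedged snippet; the environment variable comes straight from the protobuf error message quoted in the comments:

```python
# Option 1: pin protobuf before importing transformers, e.g.
#   pip install "protobuf==3.20.0"

# Option 2: fall back to the pure-Python protobuf parser (slower).
# This must be set before protobuf is imported for the first time.
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/xlm-roberta-large-squad2")
```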
https://api.github.com/repos/huggingface/transformers/issues/20155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20155/comments
https://api.github.com/repos/huggingface/transformers/issues/20155/events
https://github.com/huggingface/transformers/pull/20155
1,443,296,018
PR_kwDOCUB6oc5Clq7y
20,155
Add to DeBERTa resources
{ "login": "Saad135", "id": 22683922, "node_id": "MDQ6VXNlcjIyNjgzOTIy", "avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saad135", "html_url": "https://github.com/Saad135", "followers_url": "https://api.github.com/users/Saad135/followers", "following_url": "https://api.github.com/users/Saad135/following{/other_user}", "gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saad135/subscriptions", "organizations_url": "https://api.github.com/users/Saad135/orgs", "repos_url": "https://api.github.com/users/Saad135/repos", "events_url": "https://api.github.com/users/Saad135/events{/privacy}", "received_events_url": "https://api.github.com/users/Saad135/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20155). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20155). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20155). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Adds resources to Deberta Relates to https://github.com/huggingface/transformers/issues/20055 @stevhliu Can you please take a look :-). I could not really find anything on DeBERTa but since DeBERTa builds upon RoBERTa, should I add the materials for RoBERTa? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20155", "html_url": "https://github.com/huggingface/transformers/pull/20155", "diff_url": "https://github.com/huggingface/transformers/pull/20155.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20155.patch", "merged_at": 1668536768000 }
https://api.github.com/repos/huggingface/transformers/issues/20154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20154/comments
https://api.github.com/repos/huggingface/transformers/issues/20154/events
https://github.com/huggingface/transformers/issues/20154
1,443,226,872
I_kwDOCUB6oc5WBeT4
20,154
Less noisy console output
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can adjust the logging level to your preferred value with `transformers.utils.logging.set_verbosity(log_level)`.\r\n\r\nAlso cc @LysandreJik ", "Thanks @sgugger, I just tried that.\r\n\r\nThere's still plenty of noise from code in HF (datasets) like this:\r\n```py\r\nlogger.warning(f\"Loading cached processed dataset at {cache_file_name}\")\r\n```\r\n\r\nAlso, `Trainer` will call `args.get_process_log_level()` and overwrite whatever I've set with `logging.set_verbosity()`. I think there might be something wrong in `get_process_log_level` (after a quick glance). The [docstring says](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L205-L208) that the default log level is `passive` and that this won't change anything, but [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1610) in the code explicitly sets the log level to `INFO`. Should that not default to `logger.getEffectiveLevel()`?", "You'll need to do the same thing for `datasets` (same API I believe 🤞 ). As for the `Trainer` it's very possible that there is a bug. Do you want to suggest a fix in a PR?", "I've got a full schedule with study at the moment, sorry.\r\n\r\nSo in summary this looks like 2.5 issues:\r\n* messages like \"loaded from cache\" should be log level INFO, not WARNING\r\n* the default log level should read from the user's environment, or at least use the same default as Python, which is 30/WARNING, not 20/INFO\r\n* And a nice to have would be that all HF packages shared a log-level setting, although if the defaults are right this is not a big deal.", "I disagree with the first two, and even if I did, it's too late to change it without surprising the whole user community. The only issue I see is the bug in `Trainer` you reported :-)", "Ah, interesting, I thought the first one was quite clear cut. From the [Python logging how-to](https://docs.python.org/3/howto/logging.html):\r\n* INFO: Confirmation that things are working as expected.\r\n* WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.\r\n\r\nBut I do agree that too much change for minor things is not great, so fair enough if you want to stick with things the way they are. But for me, this makes Huggingface harder to work with than it needs to be, swamping out my own logging. I'm quite surprised it's intentional!", "Ah sorry I misread, I do agree with you on the first comment, and this one might be something we can change as a warning is indeed too strong for this message.", "Just dug in the code base and didn't find any obvious `logger.warning` that tells the user about something loaded from cache. Could you tell me which one you saw (or was it from the Datasets library?)", "Oh good! :) Yes I just checked and this is actually coming from the datasets package.\r\n\r\n", "Ok, so you should open an issue there. Agreed with you that those should be info (and it's the reason you will see most of our examples set the log level of datasets to Error, to avoid getting those warnings).", "@sgugger something else I've just noticed is that sometimes transformers will set the log level to info. I can't pin down exactly when, but I see that there's lots of code that calls `logging.set_verbosity_info()` at the top level of the module.\r\n\r\nIs that intentional? 
I don't understand the logic of a module globally changing the log level to INFO.", "Only scripts do this (mostly conversion scripts of models from their original repos to Transformers), not the module itself.", "Hmm, are these scripts ever called from the application code? There's definitely _something_ that sets the log level to INFO, and I think it's related to loading a model for the first time (which is why it's hard to replicate).", "Here's an example, I have this code that references a model not in my cache.\r\n```py\r\nprint(f\"Verbosity: {transformers.logging.get_verbosity()}\")\r\nconf = transformers.AutoModel.from_pretrained(\"distilgpt2\")\r\nprint(f\"Verbosity: {transformers.logging.get_verbosity()}\")\r\n```\r\n\r\nInterestingly, it ran and had the same log levels immediately before and after, but a second later, when I queried the log level in the console, it had changed.\r\n![image](https://user-images.githubusercontent.com/4443482/213947678-76399f2d-384b-47cf-bb52-fdee00959ca7.png)\r\n\r\nI had a breakpoint on `transformers.logging.set_verbosity` that wasn't triggered, not sure why.\r\n\r\nSo do you have some post-download steps running a script that changes the log level?\r\n\r\nPerhaps a good idea would be to move all those `logging.set_verbosity_info()` calls inside the `if __name__ == \"__main__\":` guards.", "Actually I'm going to re-open this, since there's still the bug of `TrainingArguments` defaulting to log level INFO so that just the act of creating a `Trainer` changes the log level.\r\n\r\nPlease do let me know if I'm wasting my time reporting these issues, maybe there's bigger fish to fry and no interest in fiddling with logging.", "I'm not sure I understand the bug here. Creating the `Trainer` with `TrainingArguments` at logel level INFO will change the log level yes. If you want another log level you should select it.", "No, if I have log level set to WARNING (the default) and create a `Trainer`, this _changes_ the log level to INFO.\r\n\r\nThis code:\r\n```py\r\nprint(f\"Verbosity: {transformers.logging.get_verbosity()}\")\r\ntrainer = transformers.Trainer(\r\n model=model,\r\n args=TrainingArguments(\r\n output_dir=dg.get_root_dir(\"logs/hf\"),\r\n evaluation_strategy=\"epoch\",\r\n report_to=None,\r\n fp16=True,\r\n ),\r\n train_dataset=dataset[\"train\"],\r\n eval_dataset=dataset[\"validation\"],\r\n data_collator=DataCollatorWithPadding(tokenizer=tokenizer),\r\n)\r\nprint(f\"Verbosity: {transformers.logging.get_verbosity()}\")\r\n```\r\n\r\nResults in this:\r\n![image](https://user-images.githubusercontent.com/4443482/214172791-f287a1d7-d14d-4740-9e2e-68b3159b1c21.png)\r\n", "Yes, because you have left the logging value of the `TrainingArguments` to its default value of info.\r\n\r\nJust so I understand better, you would like the `TrainingArguments` to defaults to `None` and only change the logging level if explicitly set to some value? I can get behind that if you want to make a PR.", "`TrainingArguments` defaults to `passive`, doesn't it? See [here](https://github.com/huggingface/transformers/issues/20154#issuecomment-1310772907)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,679
1,679
NONE
null
### Feature request I have just started using HF transformers and am struck by the amount of text it dumps into the console. Just following the steps in the course, a simple script that loads a model and trains it spews out this: ![image](https://user-images.githubusercontent.com/4443482/200996914-b944f497-459b-4fbd-b47f-0d6bf10651b3.png) ...my monitor isn't big enough to screenshot it all :) Note that I'm using PyCharm, and specifically the built-in "Python Console". ### Motivation This is a problem because now a user has two options: * every single time they run their code, scan the wall of text to see if there is any new information that they haven't seen 300 times before * not re-read the wall of text every time, and potentially miss out on _useful_ information that they need to attend to. Also, it's just not very pretty, and pretty is nice. ### Your contribution Only suggestions/questions: * Do any HF developers use PyCharm's Python Console? Maybe it's worth testing on this, it's flawed, but quite popular. * You can check whether the environment is a TTY with `sys.stdout.isatty()`. tqdm simply doesn't work well when not in a terminal (beyond a simple indicator, as long as you don't print anything while the indicator is active). So a good solution is to simply print the results at the end for these environments. * I don't think writing cache files is something to notify the user about. Perhaps it's worth thinking in terms of 'user personas': to the first-time user, this is useful information. For every other run, once the user knows that HF writes checkpoints in a certain place, it's no longer information that needs to be logged and so goes from helpful to detrimental, since it makes important info harder to spot. Maybe the problem is just that HF is setting the log level to "INFO" when the default Python level is "WARNING" and all you need to do is pick up the correct log level from the user's environment and most of the junk will disappear. * I don't know why it's all red, this also adds to the difficulty in seeing real errors (and also adds to the ugliness). I hope this is useful and not just me complaining...
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20154/timeline
completed
null
null
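The knobs discussed in the thread above, gathered into one hedged snippet; the `log_level` argument on `TrainingArguments` is what keeps the `Trainer` from resetting verbosity to INFO, and the output directory name is just a placeholder:

```python
import datasets
import transformers
from transformers import TrainingArguments

# Quiet the transformers logger itself.
transformers.utils.logging.set_verbosity_warning()

# The "Loading cached ..." messages come from datasets, which has its own logger.
datasets.utils.logging.set_verbosity_error()

# Without an explicit log_level, building a Trainer resets verbosity to INFO.
args = TrainingArguments(output_dir="out", log_level="warning")
```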
https://api.github.com/repos/huggingface/transformers/issues/20153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20153/comments
https://api.github.com/repos/huggingface/transformers/issues/20153/events
https://github.com/huggingface/transformers/issues/20153
1,443,182,653
I_kwDOCUB6oc5WBTg9
20,153
How to fine-tune a pre-trained protein language model on the protein folding task ?
{ "login": "pengshuang", "id": 11802795, "node_id": "MDQ6VXNlcjExODAyNzk1", "avatar_url": "https://avatars.githubusercontent.com/u/11802795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pengshuang", "html_url": "https://github.com/pengshuang", "followers_url": "https://api.github.com/users/pengshuang/followers", "following_url": "https://api.github.com/users/pengshuang/following{/other_user}", "gists_url": "https://api.github.com/users/pengshuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/pengshuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pengshuang/subscriptions", "organizations_url": "https://api.github.com/users/pengshuang/orgs", "repos_url": "https://api.github.com/users/pengshuang/repos", "events_url": "https://api.github.com/users/pengshuang/events{/privacy}", "received_events_url": "https://api.github.com/users/pengshuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @pengshuang - right now our port of ESMFold is only really usable for inference, and is lacking some of the training code. This happened because we were rushing to launch simultaneously with the release by FAIR, and so we had to launch with a couple of bits missing!\r\n\r\nWe're working with the team at FAIR to add this, though, and I'll let you know when we have anything to report.", "@Rocketknight1 Thanks for your quick reply. Looking forward to your future work.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@Rocketknight1 Following up on @pengshuang request. Was hoping to finetune ESM 2 for Antibodies. How can we plug in the finetuned ESM-2 into ESMFold structure prediction" ]
1,668
1,672
1,671
NONE
null
### Feature request @Rocketknight1, Thanks for releasing the notebook and example of [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) and [How to generate protein folds](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb). These examples helped me quickly apply the SOTA protein structure prediction model. However, I wonder what the best way is to train ESMFold from scratch. To be specific, I want to fine-tune the protein language model ESM-2 on a large-scale protein sequence database (e.g. UniRef90), and then get a new model for the downstream protein folding task. But I don't know whether I can use the Transformers [Trainer](https://huggingface.co/docs/transformers/training) to implement this. I hope for your suggestions. Thanks in advance! ### Motivation The [HuggingFace Trainer](https://huggingface.co/docs/transformers/training) provides very convenient functions (e.g. training on many GPUs) to fine-tune a pre-trained model. Providing a notebook or example to fine-tune the protein language model ESM-2 for the protein folding task may be very helpful for engineers who work on protein structure prediction. ### Your contribution I can work with you to contribute the notebook or example of fine-tuning the protein language model ESM-2 for the protein folding task if necessary.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20153/timeline
completed
null
null
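A minimal Trainer-based masked-LM sketch for continuing ESM-2 pretraining on a custom sequence corpus, assuming a small public checkpoint and placeholder sequences; the structure-module training pieces the thread asks about are not covered here:

```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "facebook/esm2_t12_35M_UR50D"  # small ESM-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]  # placeholder protein sequences
encodings = tokenizer(sequences, truncation=True, padding=True)
dataset = [
    {"input_ids": ids, "attention_mask": mask}
    for ids, mask in zip(encodings["input_ids"], encodings["attention_mask"])
]

# Random masking handles the MLM objective, as in the linked notebook.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="esm2-mlm", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```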
https://api.github.com/repos/huggingface/transformers/issues/20152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20152/comments
https://api.github.com/repos/huggingface/transformers/issues/20152/events
https://github.com/huggingface/transformers/pull/20152
1,443,061,129
PR_kwDOCUB6oc5Ck4PR
20,152
Fix typo (line 221) in portuguese translation. Documentation @sugger …
{ "login": "kant", "id": 32717, "node_id": "MDQ6VXNlcjMyNzE3", "avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kant", "html_url": "https://github.com/kant", "followers_url": "https://api.github.com/users/kant/followers", "following_url": "https://api.github.com/users/kant/following{/other_user}", "gists_url": "https://api.github.com/users/kant/gists{/gist_id}", "starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kant/subscriptions", "organizations_url": "https://api.github.com/users/kant/orgs", "repos_url": "https://api.github.com/users/kant/repos", "events_url": "https://api.github.com/users/kant/events{/privacy}", "received_events_url": "https://api.github.com/users/kant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20152). All of your documentation changes will be reflected on that endpoint.", "@kant Thank you for the PR.\r\n\r\nHowever, as @sgugger mentioned before:\r\n> there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n\r\nYou can also try to push an empty commit first to see if it can trigger the CI. You can do it by\r\n```bash\r\ngit commit --allow-empty -m \"push an empty commit to trigger CI\"\r\n```\r\n\r\nOtherwise, could you try refreshing your CircleCI permissions as mentioned above. Thanks!", "done the steps via this [resource](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)", "(juste waiting that the `check-quality` test pass not sure why it is not triggering) ", "@kant We still need you to push an empty commit on this branch so that the tests are re-triggered with the appropriate permissions.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? Fix typo (line 221) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) Related to issue #19443 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No ## Who can review? @sgugger @ydshieh Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20152", "html_url": "https://github.com/huggingface/transformers/pull/20152", "diff_url": "https://github.com/huggingface/transformers/pull/20152.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20152.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20151/comments
https://api.github.com/repos/huggingface/transformers/issues/20151/events
https://github.com/huggingface/transformers/pull/20151
1,443,041,242
PR_kwDOCUB6oc5Ckz04
20,151
Add video classification pipeline
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20151). All of your documentation changes will be reflected on that endpoint.", "Holding off on this PR as we discuss https://github.com/huggingface/datasets/issues/5225 - I think I will update the PR here to use `av` instead of `decord` because of it. Feel free to join the conversation there.\r\n\r\n---\r\n\r\nedit: wrong issue link", "> Holding off on this PR as we discuss #5225 - I think I will update the PR here to use `av` instead of `decord` because of it. Feel free to join the conversation there.\r\n\r\nYour link is wrong I think, you meant https://github.com/huggingface/datasets/issues/5225 (Tip: GH will shorten the URL on its own so you don't have to care, just copy&paste raw URLs :) )\r\n\r\nMaybe a core maintainer could jump in, but I feel like \"blocking\" PRs like this is not desirable, we should merge whatever is ready first, and hardmonize later. if this PRs code isolate the dependency enough, it should be a breeze to update.\r\nAnd if it's not it could be an argument in favor/defavor of some library. Real code always trumps whatever feelings about library X.", "I agree the PR should not be held off until a feature is merged in Datasets. We can adapt to it later on when Datasets has the features.", "Ok thanks for the advice @Narsil and @sgugger - in that case I'll just resolve all PR comments here and finish this out this week.", "@Narsil is it ok to leave decord for now? I think its fine for this use case, and is just constrained to this pipeline. 
Later, we'll probably want to add some `video_utils.py` file, just as we do with image utils, where we can keep some more permanent video utilities.\r\n\r\nBased on the convo in the datasets repo, I think we'll end up using PyAV.\r\n\r\nTo try this feature:\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline('video-classification')\r\npipe('https://huggingface.co/datasets/nateraw/video-demo/resolve/main/archery.mp4')\r\n\r\n# Result\r\n\"\"\"\r\n[{'score': 0.6418354511260986, 'label': 'archery'}, {'score': 0.0026529659517109394, 'label': 'riding unicycle'}, {'score': 0.00258301617577672, 'label': 'golf driving'}, {'score': 0.002545431721955538, 'label': 'throwing ball'}, {'score': 0.0023797585163265467, 'label': 'tobogganing'}]\r\n\"\"\"\r\n```" ]
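Since the thread above settles on PyAV (`av`) for video decoding, here is a minimal sketch of PyAV-based frame sampling under that assumption; the helper name `sample_frames` and the evenly-spaced sampling strategy are illustrative, not the pipeline's actual internals.

```python
import av
import numpy as np

def sample_frames(path: str, num_frames: int = 16) -> np.ndarray:
    """Decode `num_frames` evenly spaced RGB frames from a video file."""
    container = av.open(path)
    stream = container.streams.video[0]
    total = stream.frames  # can be 0 for some containers; assumed valid here
    indices = set(np.linspace(0, total - 1, num_frames).astype(int).tolist())
    frames = [
        frame.to_ndarray(format="rgb24")
        for i, frame in enumerate(container.decode(stream))
        if i in indices
    ]
    return np.stack(frames)  # shape: (num_frames, height, width, 3)
```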
1,668
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? Adds a video classification pipeline using VideoMAE. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20151/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20151", "html_url": "https://github.com/huggingface/transformers/pull/20151", "diff_url": "https://github.com/huggingface/transformers/pull/20151.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20151.patch", "merged_at": 1670534564000 }
https://api.github.com/repos/huggingface/transformers/issues/20150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20150/comments
https://api.github.com/repos/huggingface/transformers/issues/20150/events
https://github.com/huggingface/transformers/pull/20150
1,442,981,473
PR_kwDOCUB6oc5CkmPD
20,150
Typo fixed (line 219) in German translation. Documentation: @sgugger @ydshieh
{ "login": "kant", "id": 32717, "node_id": "MDQ6VXNlcjMyNzE3", "avatar_url": "https://avatars.githubusercontent.com/u/32717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kant", "html_url": "https://github.com/kant", "followers_url": "https://api.github.com/users/kant/followers", "following_url": "https://api.github.com/users/kant/following{/other_user}", "gists_url": "https://api.github.com/users/kant/gists{/gist_id}", "starred_url": "https://api.github.com/users/kant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kant/subscriptions", "organizations_url": "https://api.github.com/users/kant/orgs", "repos_url": "https://api.github.com/users/kant/repos", "events_url": "https://api.github.com/users/kant/events{/privacy}", "received_events_url": "https://api.github.com/users/kant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20150). All of your documentation changes will be reflected on that endpoint.", "Like in your other PRs, the tests are not run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?", "Done the steps. But failed from this side.", "You might need to push an empty commit to retrigger the tests.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? Typo fixed (line 219) in the German translation. Fixes # (issue) Not a fix for a previous issue, but related to #19443 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20150/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20150", "html_url": "https://github.com/huggingface/transformers/pull/20150", "diff_url": "https://github.com/huggingface/transformers/pull/20150.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20150.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20149/comments
https://api.github.com/repos/huggingface/transformers/issues/20149/events
https://github.com/huggingface/transformers/pull/20149
1,442,799,900
PR_kwDOCUB6oc5Cj8_Z
20,149
Fix tapas scatter
{ "login": "Bearnardd", "id": 43574448, "node_id": "MDQ6VXNlcjQzNTc0NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bearnardd", "html_url": "https://github.com/Bearnardd", "followers_url": "https://api.github.com/users/Bearnardd/followers", "following_url": "https://api.github.com/users/Bearnardd/following{/other_user}", "gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions", "organizations_url": "https://api.github.com/users/Bearnardd/orgs", "repos_url": "https://api.github.com/users/Bearnardd/repos", "events_url": "https://api.github.com/users/Bearnardd/events{/privacy}", "received_events_url": "https://api.github.com/users/Bearnardd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20149). All of your documentation changes will be reflected on that endpoint.", "Hi @sgugger - Sure, I will remove it as a part of removing all \"scatter\" mentions requested by @NielsRogge.", "Hi @NielsRogge - could you tell me what is the difference between `require_scatter` and `require_torch_scatter` in `transformers/src/transformers/testing_utils.py` since they are calling the same thing.", "Good question, you can remove both ;)", "@NielsRogge - I have removed \"scatter\" mentions from the code base. It will be good to double check the changes :). I have not changed `.circleci/create_circleci_config.py` since removing it is a part of the #20168 PR.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20149). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20149). All of your documentation changes will be reflected on that endpoint.", "Thanks for all the work! Will rebase my PR on yours to finish the job :-)" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Changes the usage of scatter from torch_scatter to PyTorch's native scatter, thus removing the dependency on a third-party library. * [x] remove torch_scatter dependency * [x] update `_segment_reduce` function in order to work with PyTorch's scatter * [x] update test case `test_reduce_sum_vectorized` Fixes # (issue) https://github.com/huggingface/transformers/issues/20101 ## Who can review? @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20149/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20149", "html_url": "https://github.com/huggingface/transformers/pull/20149", "diff_url": "https://github.com/huggingface/transformers/pull/20149.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20149.patch", "merged_at": 1668405867000 }
https://api.github.com/repos/huggingface/transformers/issues/20148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20148/comments
https://api.github.com/repos/huggingface/transformers/issues/20148/events
https://github.com/huggingface/transformers/issues/20148
1,442,595,038
I_kwDOCUB6oc5V_EDe
20,148
Add support for image embeddings as one of the `**inputs` parameters
{ "login": "ramanova", "id": 5911161, "node_id": "MDQ6VXNlcjU5MTExNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5911161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ramanova", "html_url": "https://github.com/ramanova", "followers_url": "https://api.github.com/users/ramanova/followers", "following_url": "https://api.github.com/users/ramanova/following{/other_user}", "gists_url": "https://api.github.com/users/ramanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/ramanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ramanova/subscriptions", "organizations_url": "https://api.github.com/users/ramanova/orgs", "repos_url": "https://api.github.com/users/ramanova/repos", "events_url": "https://api.github.com/users/ramanova/events{/privacy}", "received_events_url": "https://api.github.com/users/ramanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @alaradirik and @NielsRogge ", "Hi @ramanova, thanks for the suggestion! \r\n\r\nThis is doable but might be a bit tricky. `OwlViTProcessor` preprocesses images (resizing, cropping, etc.) and doesn't compute embeddings so you would need compute the embeddings using the base `OwlViTModel`. \r\n\r\n@NielsRogge do you know if there are any other models that provide this kind of functionality?", "Several NLP models, like BERT, provides the `inputs_embeds` argument as seen [here](https://github.com/huggingface/transformers/blob/d066c3731bed1755f93ea64f0f00981b805532de/src/transformers/models/bert/modeling_bert.py#L919), which allows you to provide embeddings yourself rather than `input_ids`. So the use case here is similar, I assume.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### Feature request [Owl-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) **Current flow** for inference with OWL-ViT is to accept an image and text, run them through a processor, and then give that as input to the model for inference. Result: detections with confidence scores. ```python model = OwlViTForObjectDetection.from_pretrained(...) processor = OwlViTProcessor.from_pretrained(...) inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) ``` **Additional flow** It would be useful to have the additional capability to support image embeddings as an input (either for the processor or the model itself), so the additional flow would look like this: take 100 (e.g.) images -> run the model to calculate image embeddings, save them (a dataframe, or files) -> use model(embeddings_dir, text_terms) to infer detections. ```python model = OwlViTForObjectDetection.from_pretrained(...) processor = OwlViTProcessor.from_pretrained(...) # using images_embeddings instead of images inputs = processor(text=texts, images_embeddings=image_embedding, return_tensors="pt") outputs = model(**inputs) ``` ### Motivation Creating a cache of precomputed image embeddings for faster inference / search by text. It is possible that this functionality already exists, but because of the way OWL-ViT is structured, it might be tricky to perform: currently `**inputs` contains embeddings for bboxes and text. ### Your contribution Would be difficult, as I'm not familiar with the infra.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20148/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20147
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20147/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20147/comments
https://api.github.com/repos/huggingface/transformers/issues/20147/events
https://github.com/huggingface/transformers/pull/20147
1,442,484,333
PR_kwDOCUB6oc5Ci4sY
20,147
Fix `ImageSegmentationPipelineTests`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Mark as draft in order to debug this super flaky test!", "@sgugger @Narsil @alaradirik and others: ready for your review", "@Narsil \r\n\r\n- no zip now\r\n- I decided to simply use (for now) `.../blob/...` instead of `.../resolve/...` to link to the pages (where we can visualize the images, although not the full size).\r\n ```\r\n https://huggingface.co/datasets/hf-internal-testing/mask-for-image-segmentation-tests/blob/main/mask_0.png\r\n ```\r\n I don't like much the usage of `datasets-server`:\r\n - too long\r\n - link strings not corresponding to file names\r\n\r\nI am going to merge if you are OK 🙏 \r\n\r\n" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? - If the concept is approved, I can apply the same changes to other places. - On `CircleCI`, we get `0.9469921875 not greater than or equal to 0.99`. Maybe I should lower the threshold? ------- Fix `ImageSegmentationPipelineTests.test_small_model_pt`. Comparing hashes of the output/expected masks is too flaky. This PR: - gets the current output masks and uploads them to the Hub - uses the uploaded masks as the new expected values - only checks that the output/expected masks match at 99% or above It should be safe to use the current output masks as the new expected masks, as the current output/expected masks already match closely: ```python [ { "label": "LABEL_88", "mask": {"hash": "4e2da4b9a4", "shape": (480, 640), "white_pixels": 11}, "score": None, }, { "label": "LABEL_101", "mask": {"hash": "9ec7310913", "shape": (480, 640), "white_pixels": 8946}, "score": None, }, { "label": "LABEL_215", "mask": {"hash": "21dcfdc10d", "shape": (480, 640), "white_pixels": 298243}, "score": None, }, ], ``` current expected values (before this PR): ```python [ { "label": "LABEL_88", "mask": {"hash": "7f0bf661a4", "shape": (480, 640), "white_pixels": 3}, "score": None, }, { "label": "LABEL_101", "mask": {"hash": "10ab738dc9", "shape": (480, 640), "white_pixels": 8948}, "score": None, }, { "label": "LABEL_215", "mask": {"hash": "b431e0946c", "shape": (480, 640), "white_pixels": 298249}, "score": None, }, ], ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20147/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20147", "html_url": "https://github.com/huggingface/transformers/pull/20147", "diff_url": "https://github.com/huggingface/transformers/pull/20147.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20147.patch", "merged_at": 1668500095000 }
https://api.github.com/repos/huggingface/transformers/issues/20146
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20146/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20146/comments
https://api.github.com/repos/huggingface/transformers/issues/20146/events
https://github.com/huggingface/transformers/pull/20146
1,442,472,918
PR_kwDOCUB6oc5Ci2Nq
20,146
Make DummyObject more robust
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20146). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Use `__getattribute__` instead of `__getattr__` in `DummyObject` to track the attribute access. Compared to `__getattr__` (invoked only if the attribute is missing), `__getattribute__` is invoked for every access, hence more robust. Fixes #20127 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
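A simplified illustration of the difference the PR relies on (the real `DummyObject` is a metaclass and skips private attributes, so this is only a sketch): `__getattr__` fires only when normal lookup fails, while `__getattribute__` intercepts every access.

```python
class LazyError:
    def __getattr__(self, name):
        # only invoked when normal attribute lookup fails
        raise ImportError(f"{name} requires a missing backend")

class StrictError:
    def __getattribute__(self, name):
        # invoked on every attribute access, even attributes that exist
        raise ImportError(f"{name} requires a missing backend")

LazyError().__class__          # resolves normally, no error raised
try:
    StrictError().__class__    # every access is intercepted
except ImportError as e:
    print(e)                   # "__class__ requires a missing backend"
```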
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20146/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20146/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20146", "html_url": "https://github.com/huggingface/transformers/pull/20146", "diff_url": "https://github.com/huggingface/transformers/pull/20146.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20146.patch", "merged_at": 1668016648000 }
https://api.github.com/repos/huggingface/transformers/issues/20145
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20145/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20145/comments
https://api.github.com/repos/huggingface/transformers/issues/20145/events
https://github.com/huggingface/transformers/issues/20145
1,442,426,094
I_kwDOCUB6oc5V-azu
20,145
Set task and language tokens for whisper model
{ "login": "bofenghuang", "id": 38185248, "node_id": "MDQ6VXNlcjM4MTg1MjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bofenghuang", "html_url": "https://github.com/bofenghuang", "followers_url": "https://api.github.com/users/bofenghuang/followers", "following_url": "https://api.github.com/users/bofenghuang/following{/other_user}", "gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions", "organizations_url": "https://api.github.com/users/bofenghuang/orgs", "repos_url": "https://api.github.com/users/bofenghuang/repos", "events_url": "https://api.github.com/users/bofenghuang/events{/privacy}", "received_events_url": "https://api.github.com/users/bofenghuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! That's a good idea and also on my TODO! \nIt is debatable whether we should have this in the `WhisperForConditionalGeneration` or add a new class for multilingual Decoding which would do this automatically. We could also add a `WhisperForSequenceClassification` class which would just detect the language. \n\nCC @sgugger and @patrickvonplaten as this is really a design question. ", "I think this is something we would want to solve with generate use-case specific configurations (cc @gante) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
CONTRIBUTOR
null
### Feature request Hi @ArthurZucker, thanks for the great work on the Whisper models. I would like to know if it's possible to also have a `set_prefix_tokens` function in `WhisperForConditionalGeneration`, which receives the language/task name and changes the language/task token in `model.config.forced_decoder_ids`, in order to run ASR inference on languages other than EN. As far as I know it defaults to `['<|en|>', '<|transcribe|>', '<|notimestamps|>']`, so I have to get the language token ID first and set it manually before running `generate`. ### Motivation An easy API to set language/task would be useful. ### Your contribution Willing to do it if this function doesn't exist
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20145/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20144
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20144/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20144/comments
https://api.github.com/repos/huggingface/transformers/issues/20144/events
https://github.com/huggingface/transformers/pull/20144
1,442,282,328
PR_kwDOCUB6oc5CiNB_
20,144
[OWL-ViT] Make model consistent with CLIP
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20144). All of your documentation changes will be reflected on that endpoint.", "The arguments `return_base_image_embeds`, `use_hidden_state` and `return_projected` are not useful for users, and the number of people that could have used them until now should be marginal (I don't think anyone is using them at all). I really hope we proceed with this PR, I would not add deprecation here as it just cleans up the code, and there are no use cases with those arguments. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20144). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR improves OWL-ViT by removing 3 arguments, to make the model more consistent with CLIP. All integration tests pass with these fixes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20144/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20144", "html_url": "https://github.com/huggingface/transformers/pull/20144", "diff_url": "https://github.com/huggingface/transformers/pull/20144.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20144.patch", "merged_at": 1668162978000 }
https://api.github.com/repos/huggingface/transformers/issues/20143
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20143/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20143/comments
https://api.github.com/repos/huggingface/transformers/issues/20143/events
https://github.com/huggingface/transformers/pull/20143
1,442,233,721
PR_kwDOCUB6oc5CiCh_
20,143
Adding support for LayoutLMvX variants for `object-detection`.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Adding support for `layoutlm` to `object-detection`. LayoutLMv{2,3} can be used for `object-detection`. However, the classes are `ForTokenClassification`, and not every such class can support a vision + OCR type of inference (the model needs a `bbox` input even if we split out the OCR). The current implementation changes `object-detection` to `multimodal`, since the models now require a `tokenizer` for the layoutlm variants. (This does not affect existing working pipelines.) Then it uses reflection at runtime to see if the model is using a `tokenizer`. This is not a great way to go about it, but it was the "simpler" change I could think of. As long as we don't have support for other model architectures, I'm hesitant to make "cleaner" modifications, since I don't know if other architectures will support the same invariants or not. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
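A hedged sketch of the runtime-reflection idea described above (not the actual pipeline code): inspect the model's forward signature to decide whether the pipeline must run a tokenizer to build `input_ids`/`bbox` inputs. The helper name is illustrative.

```python
import inspect

def model_needs_tokenizer(model) -> bool:
    # heuristic: LayoutLM-style models take token ids plus bounding boxes
    params = inspect.signature(model.forward).parameters
    return "input_ids" in params and "bbox" in params
```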
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20143/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20143", "html_url": "https://github.com/huggingface/transformers/pull/20143", "diff_url": "https://github.com/huggingface/transformers/pull/20143.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20143.patch", "merged_at": 1668076418000 }
https://api.github.com/repos/huggingface/transformers/issues/20142
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20142/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20142/comments
https://api.github.com/repos/huggingface/transformers/issues/20142/events
https://github.com/huggingface/transformers/pull/20142
1,442,194,085
PR_kwDOCUB6oc5Ch596
20,142
[DOCTEST] Fix the documentation of RoCBert
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20142). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20142). All of your documentation changes will be reflected on that endpoint.", "Sorry for the late fix, tests are passing locally. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20142). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Fixes the documentation test.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20142/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20142", "html_url": "https://github.com/huggingface/transformers/pull/20142", "diff_url": "https://github.com/huggingface/transformers/pull/20142.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20142.patch", "merged_at": 1668663647000 }
https://api.github.com/repos/huggingface/transformers/issues/20141
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20141/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20141/comments
https://api.github.com/repos/huggingface/transformers/issues/20141/events
https://github.com/huggingface/transformers/pull/20141
1,441,987,231
PR_kwDOCUB6oc5ChNFL
20,141
Add `RoCBertTokenizer` to `TOKENIZER_MAPPING_NAMES`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20141). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20141). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,668
1,668
COLLABORATOR
null
# What does this PR do? Add `RoCBertTokenizer` to `TOKENIZER_MAPPING_NAMES`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20141/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20141", "html_url": "https://github.com/huggingface/transformers/pull/20141", "diff_url": "https://github.com/huggingface/transformers/pull/20141.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20141.patch", "merged_at": 1668023937000 }
https://api.github.com/repos/huggingface/transformers/issues/20140
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20140/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20140/comments
https://api.github.com/repos/huggingface/transformers/issues/20140/events
https://github.com/huggingface/transformers/pull/20140
1,441,974,399
PR_kwDOCUB6oc5ChKSi
20,140
[WIP] add the tokenizer for SMALL100 model
{ "login": "alirezamshi-zz", "id": 43453239, "node_id": "MDQ6VXNlcjQzNDUzMjM5", "avatar_url": "https://avatars.githubusercontent.com/u/43453239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alirezamshi-zz", "html_url": "https://github.com/alirezamshi-zz", "followers_url": "https://api.github.com/users/alirezamshi-zz/followers", "following_url": "https://api.github.com/users/alirezamshi-zz/following{/other_user}", "gists_url": "https://api.github.com/users/alirezamshi-zz/gists{/gist_id}", "starred_url": "https://api.github.com/users/alirezamshi-zz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alirezamshi-zz/subscriptions", "organizations_url": "https://api.github.com/users/alirezamshi-zz/orgs", "repos_url": "https://api.github.com/users/alirezamshi-zz/repos", "events_url": "https://api.github.com/users/alirezamshi-zz/events{/privacy}", "received_events_url": "https://api.github.com/users/alirezamshi-zz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20140). All of your documentation changes will be reflected on that endpoint.", "Since the only difference is in the tokenization code, maybe it would be more beneficial to add the custom code directly in the model repo (see [documentation here](https://huggingface.co/docs/transformers/custom_models#sharing-custom-models) ) and not add a new model to the library?", "@sgugger Thanks for the comment. I currently put the model and tokenization code [here](https://huggingface.co/alirezamsh/small100) in model hub. Is it the standard way? as users have to download the code too. \r\nAnother alternative is to add new options to [m2m-100 tokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/m2m_100/tokenization_m2m_100.py)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
# What does this PR do?

We propose SMaLL-100, a compact and fast massively multilingual machine translation model covering more than 10K language pairs, which achieves results competitive with M2M-100 while being much smaller and faster. It is introduced in [this paper](https://arxiv.org/abs/2210.11621) (accepted to EMNLP 2022), and was initially released in [this repository](https://github.com/alirezamshi/small100).

The model architecture and config are the same as in the [M2M-100](https://huggingface.co/facebook/m2m100_418M/tree/main) implementation, but the tokenizer is modified to adjust the language codes. Compared to M2M-100, the target language code is added to the beginning of the source sequence (instead of the source language code), and the target language code is removed from the target side (see the illustrative sketch below). I've added the usage instructions on the [model hub](https://huggingface.co/alirezamsh/small100). Adding this model to Transformers will help the NMT community, especially for low-resource languages.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@patrickvonplaten @patil-suraj
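A short, purely illustrative sketch of the language-code convention described above; the helper name and the `__xx__` code format are assumptions for illustration, not the actual SMaLL-100 tokenizer API:

```python
# Illustrative only: shows where the language codes go, not the real tokenizer code.
def build_small100_pair(src_text: str, tgt_text: str, tgt_code: str):
    # M2M-100 prefixes the source with the *source* code and the target with
    # the *target* code; SMaLL-100 instead prefixes the source with the
    # *target* code and leaves the target side without any language code.
    source = f"{tgt_code} {src_text}"
    target = tgt_text
    return source, target

print(build_small100_pair("Hello world", "Bonjour le monde", "__fr__"))
# ('__fr__ Hello world', 'Bonjour le monde')
```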
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20140/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20140", "html_url": "https://github.com/huggingface/transformers/pull/20140", "diff_url": "https://github.com/huggingface/transformers/pull/20140.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20140.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20139
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20139/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20139/comments
https://api.github.com/repos/huggingface/transformers/issues/20139/events
https://github.com/huggingface/transformers/pull/20139
1,441,966,541
PR_kwDOCUB6oc5ChIkX
20,139
Update SwinForMaskedImageModeling doctest values
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,668
1,668
COLLABORATOR
null
# What does this PR do?

The doc test was failing because the checkpoints were changed in #20034, and the new checkpoints have different config values, resulting in different image sizes after preprocessing.

* Previous config: https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/blob/main/preprocessor_config.json
* New config: https://huggingface.co/microsoft/swin-base-simmim-window6-192/blob/main/preprocessor_config.json

This PR updates the test to reflect the new checkpoints.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
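A quick way to see the preprocessing difference described above (a sketch; it assumes both public checkpoints expose a `size` attribute via their preprocessor configs, which the linked `preprocessor_config.json` files suggest):

```python
from transformers import AutoFeatureExtractor

# Sketch: compare the preprocessing size of the old vs. new checkpoint configs.
old = AutoFeatureExtractor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
new = AutoFeatureExtractor.from_pretrained("microsoft/swin-base-simmim-window6-192")
print(old.size, new.size)  # different sizes -> different doctest output values
```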
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20139/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20139", "html_url": "https://github.com/huggingface/transformers/pull/20139", "diff_url": "https://github.com/huggingface/transformers/pull/20139.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20139.patch", "merged_at": 1668005581000 }
https://api.github.com/repos/huggingface/transformers/issues/20138
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20138/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20138/comments
https://api.github.com/repos/huggingface/transformers/issues/20138/events
https://github.com/huggingface/transformers/issues/20138
1,441,957,335
I_kwDOCUB6oc5V8oXX
20,138
Problems with layoutlm language model
{ "login": "mv96", "id": 14794584, "node_id": "MDQ6VXNlcjE0Nzk0NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/14794584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mv96", "html_url": "https://github.com/mv96", "followers_url": "https://api.github.com/users/mv96/followers", "following_url": "https://api.github.com/users/mv96/following{/other_user}", "gists_url": "https://api.github.com/users/mv96/gists{/gist_id}", "starred_url": "https://api.github.com/users/mv96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mv96/subscriptions", "organizations_url": "https://api.github.com/users/mv96/orgs", "repos_url": "https://api.github.com/users/mv96/repos", "events_url": "https://api.github.com/users/mv96/events{/privacy}", "received_events_url": "https://api.github.com/users/mv96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only :-)\r\ncc @NielsRogge ", "Please link your question on the forum, I'll answer there!", "@NielsRogge can you send me the link to access forums, I tried to look at the discord channel of hugging face but I am not exactly sure where to ask questions.\r\n\r\nThanks", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
### Feature request
Hi, I was recently reading about LayoutLM and its variants, and I noticed that they do exist on Hugging Face, but there are several problems with them (or perhaps I just don't fully understand them from the documentation alone).

Q1) [LayoutLM](https://arxiv.org/abs/1912.13318) is a multimodal machine learning transformer, so why is it listed in the text transformers category on Hugging Face?

Q2) Even if it is multimodal, LayoutLM does not seem to use the image anywhere in the [LayoutLM examples](https://huggingface.co/docs/transformers/main/en/model_doc/layoutlm)?

Q3) For [LayoutLMv2](https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv2) there is no TensorFlow class like TFLayoutLM, as there is for version 1?

Q4) There is no MLM head class, as there is for version 1. So if I want to pretrain this model from scratch, how do I do that?

Q5) Same as Q4: since there is no MLM head class, if I have my own tokenizer and I want to pretrain LayoutLM from scratch and then simply switch to the newer transformer with a one-line code change, that's not possible because they have different heads?

I am a bit new to the HF interface, so forgive me if I asked something super basic. I don't know whether these are valid questions or I have simply misunderstood LayoutLM in the first place, but I would be very happy if anyone could shed some light on this!! Once again, thanks for taking the time to read 😊😊, have a good day!!

### Motivation
I just find it very difficult to understand the implementation of this specific model from the transformers library.

### Your contribution
I can try to look into it, but first I need to know if the problem is really a problem or just my misunderstanding of the library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20138/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20137
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20137/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20137/comments
https://api.github.com/repos/huggingface/transformers/issues/20137/events
https://github.com/huggingface/transformers/pull/20137
1,441,947,756
PR_kwDOCUB6oc5ChEcJ
20,137
Update VisionEncoderDecoder to use an image processor
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @amyeroberts \r\n\r\nNothing extra (other than what Sylvain mentioned) from my side.\r\n\r\nOne thing I think would be great if you can provide a link to the line for\r\n> when checking the type of the feature extractor loaded\r\n\r\nLike:\r\n\r\nSee this line: \r\nhttps://github.com/huggingface/transformers/blob/c4cad8e3018e26f697f4ab0c5926e0c93aa0315b/src/transformers/processing_utils.py#L84\r\n\r\n(For me, this way is easier to know what issue we have, and to see if the fix is good for the issue)\r\n\r\n(Probably not necessary for Sylvain, as they knows everything in mind)", "@ydshieh That's a good point - thanks for the feedback! I'll make sure to add a link next time. " ]
1,667
1,668
1,668
COLLABORATOR
null
# What does this PR do?

Loading the TrOCR processor failed because, when checking the type of the loaded feature extractor, it was an image processor rather than a feature extractor. This PR:

* Replaces the feature extractor with an image processor in `TrOCRProcessor`
* Adds `AutoImageProcessor` to `AUTO_TO_BASE_CLASS_MAPPING` for the `ProcessorMixin` checks
* Adds backwards compatibility in case `feature_extractor` is passed in as a kwarg when creating the processor (a small illustrative sketch follows this description)
* Makes equivalent changes in the `VisionEncoderDecoder` model

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
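A minimal sketch of the kind of backwards-compatible kwarg handling described above (illustrative only; the class name is made up and this is not the exact code merged in this PR):

```python
import warnings

class TrOCRProcessorSketch:
    # Sketch: accept the deprecated `feature_extractor` kwarg but steer users
    # toward `image_processor`; the real implementation lives in the PR diff.
    def __init__(self, image_processor=None, tokenizer=None, **kwargs):
        if image_processor is None and "feature_extractor" in kwargs:
            warnings.warn(
                "`feature_extractor` is deprecated; use `image_processor` instead.",
                FutureWarning,
            )
            image_processor = kwargs.pop("feature_extractor")
        if image_processor is None:
            raise ValueError("You need to specify an `image_processor`.")
        self.image_processor = image_processor
        self.tokenizer = tokenizer
```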
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20137/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20137", "html_url": "https://github.com/huggingface/transformers/pull/20137", "diff_url": "https://github.com/huggingface/transformers/pull/20137.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20137.patch", "merged_at": 1668011465000 }
https://api.github.com/repos/huggingface/transformers/issues/20136
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20136/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20136/comments
https://api.github.com/repos/huggingface/transformers/issues/20136/events
https://github.com/huggingface/transformers/pull/20136
1,441,909,870
PR_kwDOCUB6oc5Cg8Un
20,136
Adds image-guided object detection support to OWL-ViT
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge @sgugger sorry for the double PR, the upstream of the branch used in the other [PR](https://github.com/huggingface/transformers/pull/18891) points to huggingface/transformers:img_guided_obj_det instead of main and I couldn't change the upstream. \r\n\r\nThe reviews in the other PR are addressed but there are two failing tests I couldn't debug:\r\n```\r\nFAILED tests/pipelines/test_pipelines_zero_shot_object_detection.py::ZeroShotObjectDetectionPipelineTests::test_pt_OwlViTConfig_OwlViTForObjectDetection_CLIPTokenizerFast_OwlViTFeatureExtractor - IndexError: tuple index out of range\r\nFAILED tests/pipelines/test_pipelines_zero_shot_object_detection.py::ZeroShotObjectDetectionPipelineTests::test_pt_OwlViTConfig_OwlViTForObjectDetection_CLIPTokenizer_OwlViTFeatureExtractor - IndexError: tuple index out of range\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "Could you make sure to add @unography as co-author? I'd prefer to merge the original PR, but if it's not possible, I want to make sure the authorship is properly attributed.", "Hi there! Maybe this is not the place to mention this, but just wanted to mention that the original implementation uses stochastic depth (https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/clip/layers.py#L235). They set it to 0.2 and 0.1 for the vision and text encoders (https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/configs/clip_b16.py#L132).\r\n\r\nI guess that's not really important if you guys don't plan to implement the training losses for detection, but if you do, maybe it's something to keep in mind :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "@sgugger @NielsRogge could you do a final review when you're available? All tests are passing and I think all issues are addressed.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20136). 
All of your documentation changes will be reflected on that endpoint.", "It seems that running the example for image-guided od is still buggy:\r\n\r\n```\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\nimport numpy as np\r\nimport cv2 \r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nquery_url = \"http://images.cocodataset.org/val2017/000000001675.jpg\"\r\nquery_image = Image.open(requests.get(query_url, stream=True).raw)\r\ninputs = processor(images=image, query_images=query_image, return_tensors=\"pt\")\r\nwith torch.no_grad():\r\n outputs = model.image_guided_detection(**inputs)\r\n# Target image sizes (height, width) to rescale box predictions [batch_size, 2]\r\ntarget_sizes = torch.Tensor([image.size[::-1]])\r\n# Convert outputs (bounding boxes and class logits) to COCO API\r\nresults = processor.post_process_image_guided_detection(\r\n outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes\r\n)\r\n\r\ni = 0 # Retrieve predictions for the first image\r\nplot_image = np.array(image)\r\nboxes, scores = results[i][\"boxes\"], results[i][\"scores\"]\r\nscore_threshold = 0.2\r\nfor box, score in zip(boxes, scores):\r\n if score < score_threshold:\r\n continue\r\n\r\n box = [int(i) for i in box.tolist()]\r\n plot_image = cv2.rectangle(plot_image, (box[0],box[1]), (box[0]+box[2], box[1]+box[3]), (0, 255, 0), 2)\r\n\r\ncv2.imshow(\"\", plot_image)\r\nq = cv2.waitKey(0)\r\n```\r\n\r\nUpon plotting the boxes, it is very off. This target query pair should work as it works in the scenic repo.\r\n\r\nEdit: tried both patch-16 and 32 model, same results (bad box predictions on target image)", "> Upon plotting the boxes, it is very off. This target query pair should work as it works in the scenic repo.\r\n\r\nWhat's your Pillow version? We've seen that using Pillow==7.1.2 is essential for getting the expected results (and cc @alaradirik we should make sure the model works on any pillow version)", "@NielsRogge , ran `pip install Pillow==7.1.2` and got the same outputs in this example. \r\n\r\noutput of models are as follows:\r\n```\r\nboxes: tensor([[ 7.6539, -0.9177, 646.1529, 474.4720]])\r\nscores: tensor([1.0000])\r\n```\r\n\r\n@alaradirik did you manage to run the example and get an appropriate prediction?\r\n\r\nEdit: You can see that y1 is 0 in this case which is already wrong if you look at the image, image shape is (480,640) so in this case the bbox is just covering the entire image.", "Hey @timothylimyl, thanks for bringing this up. I was able to replicate the issue on my local and confirmed that it's not OpenCV or Pillow related and stems from the post-processing method. 
I think it's due to changed default behaviour between PyTorch versions, I'll open a fix PR once I confirm this.\r\n\r\nCC @NielsRogge ", "@timothylimyl sorry for the mixup, I thought this was a Pillow versioning issue we previously encountered and didn't realize the query image you are using is different .\r\n\r\nThe post-process method returns coordinates in (x0, y0, x1, y1) format, the correct command to print the boxes is:\r\n`plot_image = cv2.rectangle(plot_image, box[:2], box[2:], (0, 255, 0), 2)`\r\n\r\nNote that this still returns a bounding box that covers the entire image. This is because OWL-ViT is a text-conditioned model that uses CLIP as its backbone, the image-guided object detection method repurposes the trained text-conditioned model with the assumption that the query image contains a single object. In this case, you are just getting results for an image that could be described with more general terms (\"a photo of of a cat sitting on top of a ....\"). \r\n\r\nHere are the results for a cropped version of the query image you are using:\r\n![cropped](https://user-images.githubusercontent.com/8944735/205046498-63bf24d5-e7e0-4b31-8872-09e9300ce3f0.jpeg)\r\n<img width=\"638\" alt=\"new_results\" src=\"https://user-images.githubusercontent.com/8944735/205046902-b53e30b5-8a8f-4bfe-abd6-155624d8e734.png\">\r\n\r\n", "hey @alaradirik in the other old PR I've uploaded an image and query (+ results) used in the official one. Maybe it's worth trying them as well since you can (subjectively) evaluate the result bboxes using the original results. I hope it helps :) ", "Hi @FrancescoSaverioZuppichini, I'm not sure what you mean by subjectively evaluating the bounding boxes or which PR you are referring to? ", "Hi @alaradirik, can you share your code that was used to generate the example?\r\n\r\nI tried cropping and basically I still just received one big bounding box:\r\n\r\n```\r\nimport requests\r\nfrom PIL import Image\r\nimport torch\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\nimport numpy as np\r\nimport cv2 \r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nquery_url = \"http://images.cocodataset.org/val2017/000000001675.jpg\"\r\nquery_image = Image.open(requests.get(query_url, stream=True).raw)\r\nquery_image = np.array(query_image)[:280,:]\r\nquery_image = Image.fromarray(query_image)\r\n\r\n\r\ninputs = processor(images=image, query_images=query_image, return_tensors=\"pt\")\r\nwith torch.no_grad():\r\n outputs = model.image_guided_detection(**inputs)\r\n# Target image sizes (height, width) to rescale box predictions [batch_size, 2]\r\ntarget_sizes = torch.Tensor([image.size[::-1]])\r\n# Convert outputs (bounding boxes and class logits) to COCO API\r\nresults = processor.post_process_image_guided_detection(\r\n outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes\r\n)\r\n\r\ni = 0 # Retrieve predictions for the first image\r\nplot_image = np.array(image)\r\nboxes, scores = results[i][\"boxes\"], results[i][\"scores\"]\r\nscore_threshold = 0.2\r\nfor box, score in zip(boxes, scores):\r\n if score < score_threshold:\r\n continue\r\n\r\n box = [int(i) for i in box.tolist()]\r\n plot_image = cv2.rectangle(plot_image, box[:2], box[2:], (0, 255, 0), 2)\r\n\r\ncv2.imshow(\"\", 
plot_image)\r\nq = cv2.waitKey(0)\r\n```", "also, I was confused by the comment `COCO API` as I believe that coco bbox are in the format `x,y,w,h` while PASCAL VOC XML is `x1,y1,x2,y2` which is what we are expecting here. ", "@timothylimyl, you are right about the COCO API comment, we will update the docs shortly to reflect the correct returned data format. \r\n\r\nHere is the code I used and the resulting image but keep in mind that different crops can lead to different results and both text-guided and image-guided object detection requires experimentation. There is no need for the `score_threshold` variable, you can directly use the threshold argument of the post-processing method to filter out low probability bounding boxes.\r\n\r\n```\r\nimport requests\r\n\r\nimport cv2 \r\nimport torch\r\nimport numpy as np\r\nfrom PIL import Image\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n\r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nquery_url = \"http://images.cocodataset.org/val2017/000000001675.jpg\"\r\nquery_image = Image.open(requests.get(query_url, stream=True).raw)\r\nquery_image =np.array(query_image)[:340]\r\nquery_image = Image.fromarray(query_image)\r\n\r\ninputs = processor(images=image, query_images=query_image, return_tensors=\"pt\")\r\n\r\nwith torch.no_grad():\r\n outputs = model.image_guided_detection(**inputs)\r\n\r\n# Target image sizes (height, width) to rescale box predictions [batch_size, 2]\r\ntarget_sizes = torch.Tensor([image.size[::-1]])\r\n# Convert outputs (bounding boxes and class logits) to COCO API\r\nresults = processor.post_process_image_guided_detection(\r\n outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes\r\n)\r\n\r\n\r\nimg = cv2.cvtColor(np.array(image), cv2.COLOR_BGR2RGB)\r\nboxes, scores = results[0][\"boxes\"], results[0][\"scores\"]\r\n\r\nfor box, score in zip(boxes, scores):\r\n box = [int(i) for i in box.tolist()]\r\n img = cv2.rectangle(img, box[:2], box[2:], (255, 0, 0), 5)\r\n\r\ncv2.imshow(\"\", img)\r\nq = cv2.waitKey(0)\r\n```\r\n\r\n![result](https://user-images.githubusercontent.com/8944735/205241604-16f45c14-1e25-4b70-8272-39e3055e3e33.jpeg)\r\n", "Oh wow, that is very unexpected. Seems like the model is not very well trained/robust. The difference between your crop and mine is visually minimal yet the result differs by so much:\r\n\r\n\r\n![1](https://user-images.githubusercontent.com/49274721/205543725-e5abb10f-f435-4a9e-8c4e-8ab4c2e78221.jpg)\r\n[Does not work]\r\n\r\nversus\r\n\r\n![2](https://user-images.githubusercontent.com/49274721/205543747-5bf8fc08-a0ef-4bf1-86b0-0bae93b80377.jpg)\r\n[Works]\r\n\r\n\r\nIf you crop slightly further up to `:360` then there will be no bounding boxes again (only the one covering the whole image).\r\n\r\n![2](https://user-images.githubusercontent.com/49274721/205544357-e88d3c4e-9c27-4273-97c5-55f6dd2e7ff3.jpg)\r\n\r\n[Does not work!!!]\r\n\r\n\r\nDo you reckon there could be something buggy with the code or is the model fundamentally not robust and require pretty exact crops for matching? 
It does not make much sense to me that crops have to be so exact, as the feature embedding matching shouldn't be that poor.\r\n", "@alaradirik to the \"original\" one https://github.com/huggingface/transformers/pull/18891", "Any updates?", "Hi @timothylimyl, feel free to open an issue with a reproducible code sample so we can discuss it there", "Hi @NielsRogge @timothylimyl @alaradirik @sgugger \r\n\r\nI have found the issue that causes image conditioning to be so sensitive. There was a small bug in the query selection, please see my PR: https://github.com/huggingface/transformers/pull/23157\r\n\r\nBest,\r\nOrr\r\n\r\n" ]
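Since the thread above hinges on a box-format mix-up (the post-processor returns (x1, y1, x2, y2) corners while COCO annotations use (x, y, width, height)), here is a tiny illustrative helper; the function name is made up for this sketch:

```python
def corners_to_coco(box):
    # Convert an (x1, y1, x2, y2) corner box, as returned by
    # post_process_image_guided_detection, into COCO's (x, y, w, h) format.
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

print(corners_to_coco([10, 20, 110, 220]))  # -> [10, 20, 100, 200]
```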
1,667
1,683
1,668
CONTRIBUTOR
null
# What does this PR do?

Adds an image-guided object detection method to the `OwlViTForObjectDetection` class. This enables users to use a query image to search for similar objects in the input image.

Co-authored-by: Dhruv Karan [k4r4n.dhruv@gmail.com](mailto:k4r4n.dhruv@gmail.com)

Fixes #18748

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case: https://github.com/huggingface/transformers/issues/18748
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20136/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20136", "html_url": "https://github.com/huggingface/transformers/pull/20136", "diff_url": "https://github.com/huggingface/transformers/pull/20136.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20136.patch", "merged_at": 1668578867000 }
https://api.github.com/repos/huggingface/transformers/issues/20135
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20135/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20135/comments
https://api.github.com/repos/huggingface/transformers/issues/20135/events
https://github.com/huggingface/transformers/pull/20135
1,441,902,927
PR_kwDOCUB6oc5Cg6yM
20,135
Update tokenizer_summary.mdx
{ "login": "bofenghuang", "id": 38185248, "node_id": "MDQ6VXNlcjM4MTg1MjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bofenghuang", "html_url": "https://github.com/bofenghuang", "followers_url": "https://api.github.com/users/bofenghuang/followers", "following_url": "https://api.github.com/users/bofenghuang/following{/other_user}", "gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions", "organizations_url": "https://api.github.com/users/bofenghuang/orgs", "repos_url": "https://api.github.com/users/bofenghuang/repos", "events_url": "https://api.github.com/users/bofenghuang/events{/privacy}", "received_events_url": "https://api.github.com/users/bofenghuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20135). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,673
1,668
CONTRIBUTOR
null
# What does this PR do?

Hi, thanks for this document! I think the headings here are a bit misleading, so I changed them to:

```
- Introduction
- Subword tokenization
- Byte-Pair Encoding (BPE)
- Byte-level BPE
- WordPiece
- Unigram
- SentencePiece
```

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
cc @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20135/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20135", "html_url": "https://github.com/huggingface/transformers/pull/20135", "diff_url": "https://github.com/huggingface/transformers/pull/20135.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20135.patch", "merged_at": 1668471493000 }
https://api.github.com/repos/huggingface/transformers/issues/20134
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20134/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20134/comments
https://api.github.com/repos/huggingface/transformers/issues/20134/events
https://github.com/huggingface/transformers/pull/20134
1,441,800,059
PR_kwDOCUB6oc5Cgkgh
20,134
Update `CLIPSegModelTester`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20134). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,668
1,668
COLLABORATOR
null
# What does this PR do? To align with other CLIP-like model testers. See #20044.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20134/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20134", "html_url": "https://github.com/huggingface/transformers/pull/20134", "diff_url": "https://github.com/huggingface/transformers/pull/20134.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20134.patch", "merged_at": 1668003713000 }
https://api.github.com/repos/huggingface/transformers/issues/20133
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20133/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20133/comments
https://api.github.com/repos/huggingface/transformers/issues/20133/events
https://github.com/huggingface/transformers/issues/20133
1,441,703,620
I_kwDOCUB6oc5V7qbE
20,133
Compact Transformer
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Are you willing to collaborate to make this available at HF transformers? @astariul . If so, please connect with me\r\n", "Hi @astariul and @navinelahi, are there any updates on this issue? May I start working on this?" ]
1,667
1,680
null
CONTRIBUTOR
null
### Model description

# Escaping the Big Data Paradigm with Compact Transformers

Abstract:
> With the rise of Transformers as the standard for language processing, and their advancements in computer vision, there has been a corresponding growth in parameter size and amounts of training data. Many have come to believe that because of this, transformers are not suitable for small sets of data. This trend leads to concerns such as: limited availability of data in certain scientific domains and the exclusion of those with limited resource from research in the field. In this paper, we aim to present an approach for small-scale learning by introducing Compact Transformers. We show for the first time that with the right size, convolutional tokenization, transformers can avoid overfitting and outperform state-of-the-art CNNs on small datasets. Our models are flexible in terms of model size, and can have as little as 0.28M parameters while achieving competitive results. Our best model can reach 98% accuracy when training from scratch on CIFAR-10 with only 3.7M parameters, which is a significant improvement in data-efficiency over previous Transformer based models being over 10x smaller than other transformers and is 15% the size of ResNet50 while achieving similar performance. CCT also outperforms many modern CNN based approaches, and even some recent NAS-based approaches. Additionally, we obtain a new SOTA result on Flowers-102 with 99.76% top-1 accuracy, and improve upon the existing baseline on ImageNet (82.71% accuracy with 29% as many parameters as ViT), as well as NLP tasks. Our simple and compact design for transformers makes them more feasible to study for those with limited computing resources and/or dealing with small datasets, while extending existing research efforts in data efficient transformers.

### Open source status
- [x] The model implementation is available
- [x] The model weights are available

### Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2104.05704.pdf
GitHub repository: https://github.com/SHI-Labs/Compact-Transformers
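As context for anyone picking this up, here is a purely illustrative PyTorch sketch of the convolutional tokenization idea the abstract highlights (this is not the SHI-Labs reference implementation; the layer sizes are made up):

```python
import torch
import torch.nn as nn

# Illustrative sketch of CCT's convolutional tokenizer idea: tokens come from
# a small conv + pooling stack instead of ViT-style patch slicing.
class ConvTokenizer(nn.Module):
    def __init__(self, in_channels: int = 3, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, embed_dim, kernel_size=3, stride=1, padding=1, bias=False)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(torch.relu(self.conv(x)))  # (B, D, H', W')
        return x.flatten(2).transpose(1, 2)      # (B, H'*W', D) token sequence

tokens = ConvTokenizer()(torch.randn(1, 3, 32, 32))
print(tokens.shape)  # torch.Size([1, 256, 256]): 16*16 tokens of dim 256
```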
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20133/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20133/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/20132
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20132/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20132/comments
https://api.github.com/repos/huggingface/transformers/issues/20132/events
https://github.com/huggingface/transformers/issues/20132
1,441,703,218
I_kwDOCUB6oc5V7qUy
20,132
MaskFormer sample code error
{ "login": "Tungway1990", "id": 68179274, "node_id": "MDQ6VXNlcjY4MTc5Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/68179274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tungway1990", "html_url": "https://github.com/Tungway1990", "followers_url": "https://api.github.com/users/Tungway1990/followers", "following_url": "https://api.github.com/users/Tungway1990/following{/other_user}", "gists_url": "https://api.github.com/users/Tungway1990/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tungway1990/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tungway1990/subscriptions", "organizations_url": "https://api.github.com/users/Tungway1990/orgs", "repos_url": "https://api.github.com/users/Tungway1990/repos", "events_url": "https://api.github.com/users/Tungway1990/events{/privacy}", "received_events_url": "https://api.github.com/users/Tungway1990/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @alaradirik and @NielsRogge ", "Hi @Tungway1990, thanks for pointing this out! You are right, the ` label_ids_to_fuse` argument is only used for panoptic segmentation and the logic should take None values into account.\r\n\r\nWe'll open a PR to fix this shortly. cc @sgugger @NielsRogge ", "We should actually update that code example, as that particular checkpoint was fine-tuned on ADE20K Semantic Segmentation. Hence, it doesn't make sense to postprocess the outputs for instance or panoptic segmentation.\r\n\r\nThanks for pointing out!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.12
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.0 (True)

### Who can help?
@sgugger @patil-suraj

Try running the sample code in the MaskFormer docs (https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/maskformer):

```python
output = feature_extractor.post_process_instance_segmentation(outputs)
```

raises a `NoneType` error.

### Information
- [x] The official example scripts
- [ ] My own modified scripts

### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
output = feature_extractor.post_process_semantic_segmentation(outputs)
output = feature_extractor.post_process_instance_segmentation(outputs)
output = feature_extractor.post_process_panoptic_segmentation(outputs)
```

Error Message:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [5], in <cell line: 1>()
----> 1 output = feature_extractor.post_process_instance_segmentation(outputs)

File ~\anaconda3\lib\site-packages\transformers\models\maskformer\feature_extraction_maskformer.py:794, in MaskFormerFeatureExtractor.post_process_instance_segmentation(self, outputs, threshold, mask_threshold, overlap_mask_area_threshold, target_sizes, return_coco_annotation)
    792 # Get segmentation map and segment information of batch item
    793 target_size = target_sizes[i] if target_sizes is not None else None
--> 794 segmentation, segments = compute_segments(
    795     mask_probs_item,
    796     pred_scores_item,
    797     pred_labels_item,
    798     mask_threshold,
    799     overlap_mask_area_threshold,
    800     target_size,
    801 )
    803 # Return segmentation map in run-length encoding (RLE) format
    804 if return_coco_annotation:

File ~\anaconda3\lib\site-packages\transformers\models\maskformer\feature_extraction_maskformer.py:163, in compute_segments(mask_probs, pred_scores, pred_labels, mask_threshold, overlap_mask_area_threshold, label_ids_to_fuse, target_size)
    161 for k in range(pred_labels.shape[0]):
    162     pred_class = pred_labels[k].item()
--> 163     should_fuse = pred_class in label_ids_to_fuse
    165 # Check if mask exists and large enough to be a segment
    166 mask_exists, mask_k = check_segment_validity(
    167     mask_labels, mask_probs, k, mask_threshold, overlap_mask_area_threshold
    168 )

TypeError: argument of type 'NoneType' is not iterable

### Expected behavior
The checking logic should handle the case where `label_ids_to_fuse` is None, like below:

```python
if label_ids_to_fuse is None:
    should_fuse = False
elif pred_class in label_ids_to_fuse:
    should_fuse = True
else:
    should_fuse = False
```
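Equivalently, the proposed guard can be collapsed into a single line with the same behavior:

```python
should_fuse = label_ids_to_fuse is not None and pred_class in label_ids_to_fuse
```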
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20132/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20131
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20131/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20131/comments
https://api.github.com/repos/huggingface/transformers/issues/20131/events
https://github.com/huggingface/transformers/issues/20131
1,441,305,605
I_kwDOCUB6oc5V6JQF
20,131
Failed to import
{ "login": "ravi160822", "id": 114392296, "node_id": "U_kgDOBtF86A", "avatar_url": "https://avatars.githubusercontent.com/u/114392296?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ravi160822", "html_url": "https://github.com/ravi160822", "followers_url": "https://api.github.com/users/ravi160822/followers", "following_url": "https://api.github.com/users/ravi160822/following{/other_user}", "gists_url": "https://api.github.com/users/ravi160822/gists{/gist_id}", "starred_url": "https://api.github.com/users/ravi160822/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravi160822/subscriptions", "organizations_url": "https://api.github.com/users/ravi160822/orgs", "repos_url": "https://api.github.com/users/ravi160822/repos", "events_url": "https://api.github.com/users/ravi160822/events{/privacy}", "received_events_url": "https://api.github.com/users/ravi160822/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like a problem in your CUDA installation.", "it works on my local, but it fails when I use it with docker, do I need to give any additional command in docker apart from installing requirements", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@ravi160822 \r\nI'm currently getting \r\n\r\n`libcublas.so.*[0-9] not found in the system path ['/app/src', '/usr/local/lib/python311.zip', '/usr/local/lib/python3.11', '/usr/local/lib/python3.11/lib-dynload', '/usr/local/lib/python3.11/site-packages', '/app/src']`\r\n\r\nDid you find a workaround?\r\n", "I'm facing the same issue as @jagumpert, but only when running Github actions:\r\n\r\n```libcublas.so.*[0-9] not found in the system path [(...)]```\r\n\r\nThis seems to be a PyTorch issue. I was using `torch` version 2.0.1, and downgrading to 2.0.0 fixed the issue.", "I also have the same issue as @jagumpert when trying to build a docker image with --platform=linux/amd64 python:3.11 as the base image. \r\n\r\n@saattrupdan solution did not work for me ( downgrading torch to 2.0.0 did not fix the issue )", "Same here @jose-arguelles when trying to build a docker image. Did you find any workaround?", "@saattrupdan 's solution helped me, thank you.\r\nDowngrading torch to 2.0.0 version also helped me (`poetry add torch=2.0.0`) on Ubuntu without gpu on it!", "> @saattrupdan 's solution helped me, thank you. Downgrading torch to 2.0.0 version also helped me (`poetry add torch=2.0.0`) on Ubuntu without gpu on it!\r\n\r\nCould you please explain what changed between the versions? And why this was a solution. Thanks :)", "> > @saattrupdan 's solution helped me, thank you. Downgrading torch to 2.0.0 version also helped me (`poetry add torch=2.0.0`) on Ubuntu without gpu on it!\r\n> \r\n> Could you please explain what changed between the versions? And why this was a solution. Thanks :)\r\n\r\n\r\n@codingbutstillalive this is due to poetry.\r\nWhen installing torch2.0.0 it also download nvidia packages unlike torch2.1", "still happens, installing torch 2.0.0 did not solve it for me" ]
1,667
1,703
1,671
NONE
null
### System Info
- python 3.10.7
- Target: arm64-apple-darwin21.6.0
- Thread model: posix
- transformers==4.24.0

### Who can help?
_No response_

### Information
- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
`from transformers import AlbertTokenizer, AlbertModel` ->
```
RuntimeError: Failed to import transformers.models.albert.modeling_albert because of the following error (look up to see its traceback):
libcublas.so.11: cannot open shared object file: No such file or directory
```

### Expected behavior
No error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20131/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20131/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20130
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20130/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20130/comments
https://api.github.com/repos/huggingface/transformers/issues/20130/events
https://github.com/huggingface/transformers/issues/20130
1,441,302,699
I_kwDOCUB6oc5V6Iir
20,130
Finetuning m2m100 with run_translation_no_trainer.py using ZeRO stage 3 hangs during evaluation after first epoch
{ "login": "cokuehuang", "id": 29472378, "node_id": "MDQ6VXNlcjI5NDcyMzc4", "avatar_url": "https://avatars.githubusercontent.com/u/29472378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cokuehuang", "html_url": "https://github.com/cokuehuang", "followers_url": "https://api.github.com/users/cokuehuang/followers", "following_url": "https://api.github.com/users/cokuehuang/following{/other_user}", "gists_url": "https://api.github.com/users/cokuehuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/cokuehuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cokuehuang/subscriptions", "organizations_url": "https://api.github.com/users/cokuehuang/orgs", "repos_url": "https://api.github.com/users/cokuehuang/repos", "events_url": "https://api.github.com/users/cokuehuang/events{/privacy}", "received_events_url": "https://api.github.com/users/cokuehuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 ", "Hello @cokuehuang, please provide a minimal script along with the minimal dataset in order to reproduce this issue. I am unable to reproduce this using below steps:\r\n1. run `accelerate env` command to see the config being used:\r\n```\r\n- `Accelerate` version: 0.15.0.dev0\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.4\r\n- Numpy version: 1.23.1\r\n- PyTorch version (GPU?): 1.12.1 (True)\r\n- `Accelerate` default config:\r\n\t- compute_environment: LOCAL_MACHINE\r\n\t- distributed_type: DEEPSPEED\r\n\t- mixed_precision: no\r\n\t- use_cpu: False\r\n\t- dynamo_backend: NO\r\n\t- num_processes: 4\r\n\t- machine_rank: 0\r\n\t- num_machines: 1\r\n\t- gpu_ids: None\r\n\t- main_process_ip: None\r\n\t- main_process_port: None\r\n\t- rdzv_backend: static\r\n\t- same_network: True\r\n\t- main_training_function: main\r\n\t- deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'cpu', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}\r\n\t- fsdp_config: {}\r\n\t- megatron_lm_config: {}\r\n\t- downcast_bf16: no\r\n\t- tpu_name: None\r\n\t- tpu_zone: None\r\n\t- command_file: None\r\n\t- commands: None\r\n``` \r\n\r\n2. Run below command:\r\n```\r\naccelerate launch run_translation_no_trainer.py --model_name_or_path facebook/m2m100_418M --source_lang en --target_lang ro --dataset_name wmt16 --output_dir ./m2m100_418M --max_source_length 128 --max_target_length 128 --per_device_train_batch_size 8 --per_device_eval_batch_size 4 --dataset_config_name ro-en\r\n```\r\nFor this to work, change the following line in `run_translation_no_trainer.py` :\r\n```diff\r\n- if isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):\r\n+ if isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast, M2M100Tokenizer)):\r\n```\r\n\r\n3. Output logs:\r\n```\r\n11/10/2022 07:21:52 - INFO - __main__ - ***** Running training *****\r\n11/10/2022 07:21:52 - INFO - __main__ - Num examples = 610320\r\n11/10/2022 07:21:52 - INFO - __main__ - Num Epochs = 3\r\n11/10/2022 07:21:52 - INFO - __main__ - Instantaneous batch size per device = 8\r\n11/10/2022 07:21:52 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32\r\n11/10/2022 07:21:52 - INFO - __main__ - Gradient Accumulation steps = 1\r\n11/10/2022 07:21:52 - INFO - __main__ - Total optimization steps = 57219\r\n 0%|▎ | 209/57219 [05:48<26:29:02, 1.67s/it]\r\n```\r\n\r\nWhat I think is happening is that at step 32 epoch 1 is over and now the eval loop starts which is using `accelerator.unwrap_model(model).generate()`. Now, this might be taking long time when being offloaded to CPU and as a result one might feel like code has hanged. Can you try ZeRO Stage-3 without offloading anything to `CPU` and let us know if that resolves the issue?\r\n\r\n\r\n ", "@pacman100 \r\n1. 
accelerate env:\r\n`- `Accelerate` version: 0.12.0\r\n- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- Numpy version: 1.23.3\r\n- PyTorch version (GPU?): 1.12.0+cu113 (True)\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: DEEPSPEED\r\n - mixed_precision: no\r\n - use_cpu: False\r\n - num_processes: 4\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - main_process_ip: None\r\n - main_process_port: None\r\n - main_training_function: main\r\n - deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'cpu', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}\r\n - fsdp_config: {}\r\n - downcast_bf16: no\r\n`\r\n\r\n2. My training script and data:\r\n[scriptanddatas.zip](https://github.com/huggingface/transformers/files/9979662/scriptanddatas.zip)\r\n\r\n3. Yes, you're right: by adding logging, I confirmed that at step 32 it starts the eval loop and 'hangs' at generate(). I'll try ZeRO Stage-3 without CPU offloading later.", "1. Without CPU offloading:\r\n`- `Accelerate` version: 0.12.0\r\n- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- Numpy version: 1.23.3\r\n- PyTorch version (GPU?): 1.12.0+cu113 (True)\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: DEEPSPEED\r\n - mixed_precision: no\r\n - use_cpu: False\r\n - num_processes: 4\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - main_process_ip: None\r\n - main_process_port: None\r\n - main_training_function: main\r\n - deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}\r\n - fsdp_config: {}\r\n - downcast_bf16: no\r\n`\r\nTraining output info:\r\n11/10/2022 18:21:48 - INFO - __main__ - ***** Running training *****\r\n11/10/2022 18:21:48 - INFO - __main__ - Num examples = 1000\r\n11/10/2022 18:21:48 - INFO - __main__ - Num Epochs = 3\r\n11/10/2022 18:21:48 - INFO - __main__ - Instantaneous batch size per device = 8\r\n11/10/2022 18:21:48 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32\r\n11/10/2022 18:21:48 - INFO - __main__ - Gradient Accumulation steps = 1\r\n11/10/2022 18:21:48 - INFO - __main__ - Total optimization steps = 96\r\n 33%|████████████████████████████████████████████████▎ | 32/96 [03:06<06:42, 6.29s/it]\r\n\r\nIt has already been hanging for 1 hour at 33%, and the eval data size is only 200.", "Hello @cokuehuang, thank you for giving the minimal script and data for reproducing the issue on our end. When using ZeRO Stage-3, the following needs to be passed to the `generate` function call:\r\n```\r\nif accelerator.state.deepspeed_plugin.zero_stage == 3:\r\n    gen_kwargs[\"synced_gpus\"] = True # required for ZeRO Stage 3\r\n``` \r\nAfter adding it, everything should work just fine when using DS ZeRO-3 with/without CPU offloading:\r\n```\r\n11/10/2022 14:09:03 - INFO - __main__ - ***** Running training *****\r\n11/10/2022 14:09:03 - INFO - __main__ - Num examples = 1000\r\n11/10/2022 14:09:03 - INFO - __main__ - Num Epochs = 3\r\n11/10/2022 14:09:03 - INFO - __main__ - Instantaneous batch size per device = 16\r\n11/10/2022 14:09:03 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 32\r\n11/10/2022 14:09:03 - INFO - __main__ - Gradient Accumulation steps = 1\r\n11/10/2022 14:09:03 - INFO - __main__ - Total optimization steps = 96\r\n 33%|█████████████████████ | 32/96 [01:14<02:28, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True}\r\n 33%|█████████████████████ | 32/96 [01:14<02:28, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True}\r\n11/10/2022 14:13:04 - INFO - __main__ - {'bleu': 6.697252711851462}\r\n 67%|██████████████████████████████████████████ | 64/96 [05:13<01:14, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True}\r\n 67%|██████████████████████████████████████████ | 64/96 [05:13<01:14, 2.32s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True}\r\n11/10/2022 14:16:52 - INFO - __main__ - {'bleu': 6.944214970589274}\r\n100%|███████████████████████████████████████████████████████████████| 96/96 [09:02<00:00, 2.33s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True}\r\n100%|███████████████████████████████████████████████████████████████| 96/96 [09:02<00:00, 2.33s/it]{'max_length': 128, 'num_beams': None, 'synced_gpus': True}\r\n11/10/2022 14:20:52 - INFO - __main__ - {'bleu': 6.8998500689065}\r\nConfiguration saved in ./m2m100_418M/config.json\r\n100%|███████████████████████████████████████████████████████████████| 96/96 [11:48<00:00, 7.38s/it]\r\nModel weights saved in ./m2m100_418M/pytorch_model.bin\r\ntokenizer config file saved in ./m2m100_418M/tokenizer_config.json\r\nSpecial tokens file saved in ./m2m100_418M/special_tokens_map.json\r\n100%|███████████████████████████████████████████████████████████████| 96/96 [11:48<00:00, 7.38s/it]\r\n```", "@pacman100 Yes! It works! Thanks very much!" ]
1,667
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (gpu) - Jax version: 0.3.5 - JaxLib version: 0.3.5 - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <yes> ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. accelerate config The accelerate config is as follows: ``` compute_environment: LOCAL_MACHINE deepspeed_config: gradient_accumulation_steps: 1 gradient_clipping: 1.0 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: true zero3_save_16bit_model: true zero_stage: 3 distributed_type: DEEPSPEED downcast_bf16: 'no' fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 4 use_cpu: false ``` 2. Run the fine-tuning script with the command: `accelerate launch run_translation_no_trainer.py --model_name_or_path facebook/m2m100_418M --source_lang ro --target_lang zh --train_file teddata/train.json --validation_file teddata/val.json --output_dir ./m2m100_418M --max_source_length 128 --max_target_length 128 --per_device_train_batch_size=8 --per_device_eval_batch_size=4 --forced_bos_token zh` Training output: 11/09/2022 11:02:34 - INFO - __main__ - ***** Running training ***** 11/09/2022 11:02:34 - INFO - __main__ - Num examples = 1000 11/09/2022 11:02:34 - INFO - __main__ - Num Epochs = 3 11/09/2022 11:02:34 - INFO - __main__ - Instantaneous batch size per device = 8 11/09/2022 11:02:34 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 11/09/2022 11:02:34 - INFO - __main__ - Gradient Accumulation steps = 1 11/09/2022 11:02:34 - INFO - __main__ - Total optimization steps = 94 33%|███████████████████████████ 32/94[18:31<39:25, 9.20s/it] Fine-tuning hangs here; GPU utilization is almost 100% on all devices. With the same accelerate config set to ZeRO stage 2, fine-tuning succeeds. ### Expected behavior Fine-tuning m2m100 with run_translation_no_trainer.py using ZeRO stage 3 finishes successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20130/timeline
completed
null
null
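The hang diagnosed in the record above reduces to one requirement: under DeepSpeed ZeRO stage 3, every rank must step through `generate()` in lockstep, because each decoding step all-gathers sharded parameters and blocks if any rank has already finished. Below is a minimal sketch of the fix in an `accelerate`-style eval loop; `accelerator`, `model`, `eval_dataloader`, and the batch keys are assumed stand-ins for the variables in `run_translation_no_trainer.py`, not code quoted from the thread.

```python
import torch

gen_kwargs = {"max_length": 128, "num_beams": None}

# Keep all ranks inside generate() together under ZeRO stage 3; otherwise
# ranks that finish early stop serving weight all-gathers and the rest hang.
if accelerator.state.deepspeed_plugin.zero_stage == 3:
    gen_kwargs["synced_gpus"] = True

for batch in eval_dataloader:
    with torch.no_grad():
        generated_tokens = accelerator.unwrap_model(model).generate(
            batch["input_ids"],
            attention_mask=batch["attention_mask"],
            **gen_kwargs,
        )
```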
https://api.github.com/repos/huggingface/transformers/issues/20129
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20129/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20129/comments
https://api.github.com/repos/huggingface/transformers/issues/20129/events
https://github.com/huggingface/transformers/pull/20129
1,441,162,918
PR_kwDOCUB6oc5CeZ-T
20,129
[testing doc-build from fork]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,668
1,668
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20129/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20129", "html_url": "https://github.com/huggingface/transformers/pull/20129", "diff_url": "https://github.com/huggingface/transformers/pull/20129.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20129.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20128
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20128/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20128/comments
https://api.github.com/repos/huggingface/transformers/issues/20128/events
https://github.com/huggingface/transformers/pull/20128
1,441,157,497
PR_kwDOCUB6oc5CeYyI
20,128
[testing doc-build]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,667
1,667
1,667
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20128/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20128", "html_url": "https://github.com/huggingface/transformers/pull/20128", "diff_url": "https://github.com/huggingface/transformers/pull/20128.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20128.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20127
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20127/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20127/comments
https://api.github.com/repos/huggingface/transformers/issues/20127/events
https://github.com/huggingface/transformers/issues/20127
1,440,592,638
I_kwDOCUB6oc5V3bL-
20,127
Improvement to error handling in subclasses
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Interesting. It seems like Python gobbles the error raised by the super class (which does tell you to install sentencepiece) and decides it has no `from_pretrained` attribute instead.\r\n\r\nIf anyone has any idea to fix our metaclass `DummyObject` so it works on subclasses, I'm all ears!", "Pinging here as we are seeing the same issue.\r\n\r\n```\r\nE ImportError: \r\nE XLMRobertaTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the\r\nE installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones\r\nE that match your environment. Please note that you may need to restart your runtime after installation.\r\n```", "No this is not the same issue. The error message is clearly indicating that your need to install `sentencepiece`.", "Yet it doesn't come up as a requirement from transformers - as described in the original post.\r\n\r\nThe sentencepiece library has to be added manually. Did I miss something about having to do that otherwise in the release notes? This was discovered after updating to the latest version of transformers 2.24.0.\r\n\r\nsite-packages\\transformers\\models\\xlm_roberta\\tokenization_xlm_roberta.py\r\n\r\nhttps://github.com/huggingface/transformers/blob/f3d99e49d459f9d1cc7544352041b3a64d68c734/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L22", "Also, just a side note here, this occurred in multiple environments - Linux and Windows based. I hope this helps.", "And when running \"pip show sentencepiece\" locally, it shows as having no requires/requires-by - very odd.", "I think `DummyObject` should override `__getattribute__` instead of `__getattr__` to get the expected error.\r\n\r\nEDIT:\r\n\r\nTested locally, and it works. I've linked a PR with the fix.", "Thanks @mariosasko!\r\n\r\n@jacwalte That is the expected behavior. sentencepiece is not installed by default because not all tokenizers need it. You'll get the error message if you need it for your use-case and then you just have to install it manually. Or you can install transformers with the extra `sentencepiece`, `transformers[sentencepiece]`. The problem in my case was that the error message did not show up. This has now been quickly fixed by @mariosasko!", "> Thanks @mariosasko!\r\n> \r\n> @jacwalte That is the expected behavior. sentencepiece is not installed by default because not all tokenizers need it. You'll get the error message if you need it for your use-case and then you just have to install it manually. Or you can install transformers with the extra `sentencepiece`, `transformers[sentencepiece]`. The problem in my case was that the error message did not show up. This has now been quickly fixed by @mariosasko!\r\n\r\nThanks! - will update the requirements with that" ]
1,667
1,668
1,668
COLLABORATOR
null
### Feature request I encountered a fascinating (though very frustrating :-)) scenario about error handling in `transformers`. When subclassing a tokenizer that relies on `sentencepiece`, and not having it installed, you will get an unhelpful error message that sends you down a lot of rabbit holes. Consider this minimal example: ```python from transformers import MBartTokenizer class CustomMBartTokenizer(MBartTokenizer): @classmethod def from_pretrained(cls, *args, **kwargs): inst = super().from_pretrained(*args, **kwargs) # Do other stuff with it... a = CustomMBartTokenizer.from_pretrained("facebook/mbart-large-cc25") ``` If you run this in a new environment where sentencepiece is not installed, you get the following error: > AttributeError: 'super' object has no attribute 'from_existing' This error message had me comparing Windows vs. Linux and python 3.8 vs 3.9 vs 3.10 because I could not figure out why it was working on my home machine and not on our cluster. In the end, the reason was that `sentencepiece` was not yet installed on the cluster **but the error message does not show that**. It seems that the sentencepiece error does not show or does not stop execution, which then leads the class to not be successfully initialized. Although admittedly I have not dug much farther. ### Motivation The error message does not seem to correctly propagate when subclassing a tokenizer. The error message that indicates that sentencepiece is not installed and needs to be installed is not correctly shown. Instead the user gets a vague error message about the `from_pretrained` call. While this may be an exceptional case, I have found that subclassing tokenizers for a specific task is common in research. ### Your contribution I do not have the time to work on figuring out what the exact cause is unfortunately. Posting this here for posterity. For anyone getting this issue: **you probably just need to make sure all necessary third party libraries (such as sentencepiece) are installed.**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20127/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20127/timeline
completed
null
null
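The two closing comments in the record above compress a subtle metaclass detail, so here is a short sketch of why the original `__getattr__` hook produced the confusing error. The `ImportError` body stands in for the library's `requires_backends` helper, and the class names are illustrative.

```python
class DummyObject(type):
    # Buggy variant: __getattr__ only fires for *missing* attributes. A
    # subclass that defines from_pretrained itself is found normally, and its
    # super().from_pretrained lookup fails with a bare AttributeError because
    # super() never consults the metaclass hook.
    def __getattr__(cls, key):
        raise ImportError(f"{cls.__name__} requires the SentencePiece library.")


class MBartLikeTokenizer(metaclass=DummyObject):
    pass


class CustomTokenizer(MBartLikeTokenizer):
    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        return super().from_pretrained(*args, **kwargs)


# CustomTokenizer.from_pretrained("x")
# -> AttributeError: 'super' object has no attribute 'from_pretrained'
```

Switching the hook to `__getattribute__` intercepts even attributes that exist, so the very first access to `CustomTokenizer.from_pretrained` raises the informative `ImportError` instead.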
https://api.github.com/repos/huggingface/transformers/issues/20126
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20126/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20126/comments
https://api.github.com/repos/huggingface/transformers/issues/20126/events
https://github.com/huggingface/transformers/issues/20126
1,440,484,647
I_kwDOCUB6oc5V3A0n
20,126
Add pop2piano
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @ArthurZucker could you please share the progress of the model addition?(I am asking because the last time this Issue has had any actions was in Nov 8, 2022). I tried this model on colab and really loved it. I want to add / help adding this model to HF. Is it possible that we can collaborate in this addition? ", "Hey! I did not start at all! Feel free to open a PR and ping me for pointers/help! I won't have time to do it alone but would love to collaborate ! 😉 \r\n", "@ArthurZucker Ok, my term exams are till 3rd March so I will start working from 4th, in meantime I will open a PR. Do you want to continue communicating through that PR or how about communicating through slack/discord if it's possible? ", "Sure, will invite you to slack if you can share your email, arthur@hf.co! ", "> Sure, will invite you to slack if you can share your email, [arthur@hf.co](mailto:arthur@hf.co)!\r\n\r\nMy email is - susnatodhar10@gmail.com\r\n\r\n@ArthurZucker ", "Just invited you! Good luck on your mid terms 😉 ", "Also if anyone want to tackle this before, ping me and will add you to the channel", "@ArthurZucker How to retrain model and how to get datasets to retrain? ! Help me" ]
1,667
1,700
1,692
COLLABORATOR
null
### Model description - Introduce a large amount of paired and synchronised {pop, piano cover} data using an automated pipeline. - Pop2Piano, a Transformer network that generates piano covers given waveforms of pop music. - First model to directly generate a piano cover from pop audio without melody and chord extraction modules. - Uses a T5 model so should be straightforward. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation - Weights : https://github.com/sweetcocoa/pop2piano/releases/download/dpi_2k_epoch/model-1999-val_0.67311615.ckpt - Code : https://github.com/sweetcocoa/pop2piano/ - Paper : https://arxiv.org/abs/2211.00895 - Colab : https://colab.research.google.com/drive/1rBAs2TkryDnnQOhcM-mtlrgtL2h3ekml?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20126/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20125
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20125/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20125/comments
https://api.github.com/repos/huggingface/transformers/issues/20125/events
https://github.com/huggingface/transformers/pull/20125
1,440,330,275
PR_kwDOCUB6oc5CblUJ
20,125
Update github pr docs actions
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20125). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,667
1,667
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20125/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20125", "html_url": "https://github.com/huggingface/transformers/pull/20125", "diff_url": "https://github.com/huggingface/transformers/pull/20125.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20125.patch", "merged_at": 1667921844000 }
https://api.github.com/repos/huggingface/transformers/issues/20124
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20124/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20124/comments
https://api.github.com/repos/huggingface/transformers/issues/20124/events
https://github.com/huggingface/transformers/pull/20124
1,440,293,397
PR_kwDOCUB6oc5Cbdcn
20,124
Remove BertConfig inheritance from RobertaConfig
{ "login": "Saad135", "id": 22683922, "node_id": "MDQ6VXNlcjIyNjgzOTIy", "avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saad135", "html_url": "https://github.com/Saad135", "followers_url": "https://api.github.com/users/Saad135/followers", "following_url": "https://api.github.com/users/Saad135/following{/other_user}", "gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saad135/subscriptions", "organizations_url": "https://api.github.com/users/Saad135/orgs", "repos_url": "https://api.github.com/users/Saad135/repos", "events_url": "https://api.github.com/users/Saad135/events{/privacy}", "received_events_url": "https://api.github.com/users/Saad135/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks again for your contribution!" ]
1,667
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Removes BertConfig dependencies from RobertaConfig Related to https://github.com/huggingface/transformers/issues/19303 @sgugger can I please get some feedback on this. Thanks 😄 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20124/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20124", "html_url": "https://github.com/huggingface/transformers/pull/20124", "diff_url": "https://github.com/huggingface/transformers/pull/20124.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20124.patch", "merged_at": 1668001873000 }
https://api.github.com/repos/huggingface/transformers/issues/20123
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20123/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20123/comments
https://api.github.com/repos/huggingface/transformers/issues/20123/events
https://github.com/huggingface/transformers/issues/20123
1,440,122,072
I_kwDOCUB6oc5V1oTY
20,123
Whisper: incorrect list of non speech tokens
{ "login": "guillaumekln", "id": 4805513, "node_id": "MDQ6VXNlcjQ4MDU1MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4805513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaumekln", "html_url": "https://github.com/guillaumekln", "followers_url": "https://api.github.com/users/guillaumekln/followers", "following_url": "https://api.github.com/users/guillaumekln/following{/other_user}", "gists_url": "https://api.github.com/users/guillaumekln/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaumekln/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaumekln/subscriptions", "organizations_url": "https://api.github.com/users/guillaumekln/orgs", "repos_url": "https://api.github.com/users/guillaumekln/repos", "events_url": "https://api.github.com/users/guillaumekln/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaumekln/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! thanks for pointing that out. \r\nWill open a PR on the online models and on the repo, just gotta make sure this is not backward incompatible 🤗 ", "Thanks for the fix and the configurations update! However, there are still 2 pull requests to merge:\r\n\r\n* https://huggingface.co/openai/whisper-small/discussions/4\r\n* https://huggingface.co/openai/whisper-medium/discussions/5\r\n\r\n", "Thanks for the notice! 🤗 " ]
1,667
1,669
1,669
CONTRIBUTOR
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The lists `NON_SPEECH_TOKENS` and `NON_SPEECH_TOKENS_MULTI` contain the tokens 6 and 12 that are not suppressed by default in the [reference implementation](https://github.com/openai/whisper/). Consider the following example using the reference `whisper` module: ```python import transformers from whisper.tokenizer import get_tokenizer tokenizer = get_tokenizer(multilingual=True, task="transcribe", language="fr") suppress_tokens = list( sorted( tokenizer.non_speech_tokens + (tokenizer.sot, tokenizer.sot_prev, tokenizer.sot_lm, tokenizer.no_speech) ) ) config = transformers.WhisperConfig.from_pretrained("openai/whisper-tiny") print(suppress_tokens == config.suppress_tokens) # prints False config.suppress_tokens.remove(6) config.suppress_tokens.remove(12) print(suppress_tokens == config.suppress_tokens) # prints True ``` ### Expected behavior The list of suppressed tokens should match the reference implementation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20123/timeline
completed
null
null
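Downstream of the configuration fix in the record above, callers who cannot update their checkpoints can apply the corrected list themselves. This is a hedged sketch: whether to override via the config or via a per-call generation argument is an assumption about the caller's setup, and `input_features` stands in for processed audio.

```python
import transformers

config = transformers.WhisperConfig.from_pretrained("openai/whisper-tiny")
# Tokens 6 and 12 are not suppressed by the reference implementation.
for token_id in (6, 12):
    if token_id in config.suppress_tokens:
        config.suppress_tokens.remove(token_id)

model = transformers.WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-tiny", config=config
)
# Alternatively, pass the list per call (assumed generation-kwarg path):
# predicted_ids = model.generate(input_features, suppress_tokens=config.suppress_tokens)
```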
https://api.github.com/repos/huggingface/transformers/issues/20122
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20122/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20122/comments
https://api.github.com/repos/huggingface/transformers/issues/20122/events
https://github.com/huggingface/transformers/issues/20122
1,440,088,878
I_kwDOCUB6oc5V1gMu
20,122
Why is CLIPImageProcessor not in general init?
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Will be fixed by #20111", "I have the same problem... God...When will the CLIPImageProcessor be set up in general init?", "@ZoeyyHz, which version of transformers are you using? I'm able to run the following without issue:\r\n\r\n```\r\nfrom transformers import CLIPImageProcessor\r\n```", "Hi, there. I have the same problem too.\r\nwhich version of transformers do I have to use?", "@BigTail375 The CLIPImageProcessor has been available to import from the public init from v4.25.1" ]
1,667
1,688
1,668
MEMBER
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger maybe ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In diffusers we load transformers classes according to a model index, which *e.g.* looks as follows: ```bash { "_class_name": "StableDiffusionPipeline", "_diffusers_version": "0.7.0.dev0", "feature_extractor": [ "transformers", "CLIPFeatureExtractor" ], "scheduler": [ "diffusers", "PNDMScheduler" ], "text_encoder": [ "transformers", "CLIPTextModel" ], "tokenizer": [ "transformers", "CLIPTokenizer" ], "unet": [ "diffusers", "UNet2DConditionModel" ], "vae": [ "diffusers", "AutoencoderKL" ] } ``` The important part is: ``` "feature_extractor": [ "transformers", "CLIPFeatureExtractor" ], ``` Now what is happening then is that we load a component that we call `"feature_extractor"` from `"transformers"` and the `"CLIPFeatureExtractor"` class. Then when saving the model we save it with `type(feature_extractor)` which is now though `CLIPImageProcessor` and then we want to load it again from `transformers`, but we cannot import it from transformers. E.g. `from transformers import CLIPImageProcessor` doesn't work. Could we add `CLIPImageProcessor` to the init? ### Expected behavior I think we should put `CLIPImageProcessor` in the init no?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20122/timeline
completed
null
null
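Per the follow-up comments in the record above, the class became importable from the public init in v4.25.1, which makes the diffusers round-trip described in the body work. A small sketch of that round-trip on a recent version; the checkpoint name is only an example.

```python
from transformers import CLIPImageProcessor  # public-init import works from v4.25.1

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_processor.save_pretrained("./clip_feat")

# preprocessor_config.json now records the CLIPImageProcessor class name,
# so loading back by that name (as diffusers' model index does) succeeds.
reloaded = CLIPImageProcessor.from_pretrained("./clip_feat")
```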
https://api.github.com/repos/huggingface/transformers/issues/20121
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20121/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20121/comments
https://api.github.com/repos/huggingface/transformers/issues/20121/events
https://github.com/huggingface/transformers/issues/20121
1,440,077,127
I_kwDOCUB6oc5V1dVH
20,121
Cannot load CLIPProcessor / CLIPFeatureExtractor locally
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Will be fixed by #20111" ]
1,667
1,668
1,668
MEMBER
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger maybe ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Is it expected that the following doesn't work on main? ```python from transformers import CLIPFeatureExtractor, AutoFeatureExtractor, AutoProcessor feature_extractor = CLIPFeatureExtractor() id_name = "./clip_feat" feature_extractor.save_pretrained(id_name) print("load from CLIPFeatureExtractor") feature_extractor = CLIPFeatureExtractor.from_pretrained(id_name) #print("load from CLIPImageProcessor") #feature_extractor = CLIPImageProcessor.from_pretrained(id_name) print("load from AutoFeatureExtractor") feature_extractor = AutoFeatureExtractor.from_pretrained(id_name) print("load from AutoProcessor") feature_extractor = AutoProcessor.from_pretrained(id_name) ``` We can see that I can load the feature extractor directly from the class but not from `AutoFeatureExtractor` or `AutoProcessor` even though we save the feature extractor type in the `preprocessor_config.json` file. ### Expected behavior I would have expected that the code snippet above works.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20121/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20120
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20120/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20120/comments
https://api.github.com/repos/huggingface/transformers/issues/20120/events
https://github.com/huggingface/transformers/issues/20120
1,440,051,377
I_kwDOCUB6oc5V1XCx
20,120
ESM esmfold_v1 infer_pdbs method gives TypeError
{ "login": "maxjeblick", "id": 24281881, "node_id": "MDQ6VXNlcjI0MjgxODgx", "avatar_url": "https://avatars.githubusercontent.com/u/24281881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxjeblick", "html_url": "https://github.com/maxjeblick", "followers_url": "https://api.github.com/users/maxjeblick/followers", "following_url": "https://api.github.com/users/maxjeblick/following{/other_user}", "gists_url": "https://api.github.com/users/maxjeblick/gists{/gist_id}", "starred_url": "https://api.github.com/users/maxjeblick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxjeblick/subscriptions", "organizations_url": "https://api.github.com/users/maxjeblick/orgs", "repos_url": "https://api.github.com/users/maxjeblick/repos", "events_url": "https://api.github.com/users/maxjeblick/events{/privacy}", "received_events_url": "https://api.github.com/users/maxjeblick/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 ", "Hi @maxjeblick, this is caused by those methods being ported directly from `ESMFold` and not being updated to match our implementation. I'm working on a fix now! In the meantime you can use the code from our [example notebook for protein folding](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) to convert model outputs to PDB.", "@maxjeblick fixed on main now!" ]
1,667
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import EsmForProteinFolding model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1").cuda() pdbs = model.infer_pdbs(["MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"]) ``` gives ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [12], line 1 ----> 1 pdbs = model.infer_pdbs(["MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"]) File ~/PycharmProjects/esm/venv/lib/python3.9/site-packages/transformers/models/esm/modeling_esmfold.py:2318, in EsmForProteinFolding.infer_pdbs(self, seqs, *args, **kwargs) 2316 def infer_pdbs(self, seqs: List[str], *args, **kwargs) -> List[str]: 2317 """Returns the pdb (file) string from the model given an input sequence.""" -> 2318 output = self.infer(seqs, *args, **kwargs) 2319 return self.output_to_pdb(output) File ~/PycharmProjects/esm/venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/PycharmProjects/esm/venv/lib/python3.9/site-packages/transformers/models/esm/modeling_esmfold.py:2280, in EsmForProteinFolding.infer(self, seqs, residx, with_mask) 2278 if residx.ndim == 1: 2279 residx = residx.unsqueeze(0) -> 2280 return self.forward( 2281 aatype, 2282 mask, 2283 mask_aa=with_mask is not None, 2284 masking_pattern=with_mask, 2285 residx=residx, 2286 ) TypeError: forward() got an unexpected keyword argument 'mask_aa' ``` ### Expected behavior pdb will be calculated correctly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20120/timeline
completed
null
null
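Until the fix landed on main, the workaround suggested in the comments above was to skip the broken `infer()` wrapper and convert a plain forward pass to PDB. The following is a hedged sketch built from the methods visible in the traceback; that `output_to_pdb` accepts the forward output directly is an assumption here, and the protein-folding notebook linked in the comments ships its own conversion helper.

```python
import torch
from transformers import AutoTokenizer, EsmForProteinFolding

tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1").cuda()

seq = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"
inputs = tokenizer([seq], return_tensors="pt", add_special_tokens=False)

with torch.no_grad():
    output = model(inputs["input_ids"].cuda())

pdbs = model.output_to_pdb(output)  # assumption: works on the plain forward output
```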
https://api.github.com/repos/huggingface/transformers/issues/20119
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20119/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20119/comments
https://api.github.com/repos/huggingface/transformers/issues/20119/events
https://github.com/huggingface/transformers/pull/20119
1,440,046,034
PR_kwDOCUB6oc5CanGj
20,119
Improve tiny model creation script
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20119). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,667
1,667
COLLABORATOR
null
# What does this PR do? - Add the option to upload the created tiny models to the Hub. - Make the tiny config correspond better to the (reduced) tokenizer. - This gets quite complicated, but basically it just makes sure the `vocab_size` and `xxx_token_ids` in the tiny config correspond to what we have in the (reduced) tokenizer. Once this PR is approved, should I upload them to `hf-internal-testing`? (I remember you said yes a few months ago, but want to be sure :-))
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20119/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20119", "html_url": "https://github.com/huggingface/transformers/pull/20119", "diff_url": "https://github.com/huggingface/transformers/pull/20119.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20119.patch", "merged_at": 1667990076000 }
https://api.github.com/repos/huggingface/transformers/issues/20118
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20118/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20118/comments
https://api.github.com/repos/huggingface/transformers/issues/20118/events
https://github.com/huggingface/transformers/pull/20118
1,440,010,488
PR_kwDOCUB6oc5CafXq
20,118
[CLIPSeg] Add resources
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20118). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR adds resources for CLIPSeg.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20118/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20118", "html_url": "https://github.com/huggingface/transformers/pull/20118", "diff_url": "https://github.com/huggingface/transformers/pull/20118.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20118.patch", "merged_at": 1668015082000 }
https://api.github.com/repos/huggingface/transformers/issues/20117
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20117/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20117/comments
https://api.github.com/repos/huggingface/transformers/issues/20117/events
https://github.com/huggingface/transformers/pull/20117
1,439,842,729
PR_kwDOCUB6oc5CZ6dJ
20,117
[processor] Add 'model input names' property
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Currently, I've only applied the change to the Wav2Vec2 Processor - once we're happy with the design I'll copy it to all other processor classes (both audio and vision). This should make the preliminary review much easier!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.", "Thanks for the review!\r\n\r\n> Just one thing: should this be overriden for CLIP models and the like that combine the inputs of the tokenizer and feature extractor?\r\n\r\nThat's a very good point. Perhaps we can add a generic property method to `ProcessorMixin` that returns the `model_input_names` for the first attribute (feature extractor), and override it for the models that combine the inputs of the tokenizer and feature extractor?\r\n\r\nOr, we can add the property method to **each individual** processor class, tailored to return the expected inputs for the given model (as is currently done with Wav2Vec2Processor, and modified accordingly for CLIP etc).", "Great point @sgugger! Have quickly cleaned-up the PR to try and remove any ad-hocery in the tests:\r\n\r\n- Single modality models: assert that the model input names of the processor and feature extractor match\r\n- Multi modal models: assert that the model input names of the processor match the keys of the inputs dict", "Very much agreed - if we implement a common tester for the processor this all collapses into one / two tests max. Will leave this for a follow-up PR as it's quite a significant refactor!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20117). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Adds the `model_input_names` property to the processor class. Related to #20058. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20117/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20117", "html_url": "https://github.com/huggingface/transformers/pull/20117", "diff_url": "https://github.com/huggingface/transformers/pull/20117.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20117.patch", "merged_at": 1668108560000 }
https://api.github.com/repos/huggingface/transformers/issues/20116
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20116/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20116/comments
https://api.github.com/repos/huggingface/transformers/issues/20116/events
https://github.com/huggingface/transformers/pull/20116
1,439,072,971
PR_kwDOCUB6oc5CXSF2
20,116
Fix gradient clipping on XLA device
{ "login": "ymwangg", "id": 19481308, "node_id": "MDQ6VXNlcjE5NDgxMzA4", "avatar_url": "https://avatars.githubusercontent.com/u/19481308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ymwangg", "html_url": "https://github.com/ymwangg", "followers_url": "https://api.github.com/users/ymwangg/followers", "following_url": "https://api.github.com/users/ymwangg/following{/other_user}", "gists_url": "https://api.github.com/users/ymwangg/gists{/gist_id}", "starred_url": "https://api.github.com/users/ymwangg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ymwangg/subscriptions", "organizations_url": "https://api.github.com/users/ymwangg/orgs", "repos_url": "https://api.github.com/users/ymwangg/repos", "events_url": "https://api.github.com/users/ymwangg/events{/privacy}", "received_events_url": "https://api.github.com/users/ymwangg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20116). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? This PR fixes the gradient clipping logic on XLA devices. We found that all_reduce is wrongly disabled in fp16 mode when `max_grad_norm=0` on an XLA GPU device. To be consistent with native PyTorch behavior, all_reduce should be placed immediately after calling `self.training_step`. Tested with the run_mlm.py example script on 8 NVIDIA V100 GPUs: ```sh GPU_NUM_DEVICES=8 python -m torch_xla.distributed.xla_spawn --num_gpus 8 run_mlm.py \ --model_name_or_path bert-base-uncased \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --overwrite_output_dir true \ --output_dir /tmp/test-mlm \ --per_gpu_train_batch_size 16 \ --do_eval \ --fp16 true \ --max_grad_norm 0 \ --do_train \ --num_train_epochs 3 ``` Results of final training losses: | Backend | fp16 | fp32 | fp16+max_grad_norm=0| fp32+max_grad_norm=0| | --- | ----------- | --------| ----------- | --------| | pytorch cuda| 1.8649 | 1.8575 | 1.8753 | 1.8694 | | torch_xla cuda| 1.86 | 1.8576 | 1.8694 | 1.867| cc @sgugger
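A minimal sketch of the ordering described above, assuming a standard torch_xla training loop (illustrative only, not the Trainer's actual code path):

```python
import torch
import torch_xla.core.xla_model as xm


def training_step(model, optimizer, batch, max_grad_norm):
    loss = model(**batch).loss
    loss.backward()
    # All-reduce the gradients right after the backward pass, regardless of
    # whether clipping is enabled (i.e. even when max_grad_norm == 0).
    xm.reduce_gradients(optimizer)
    if max_grad_norm > 0:
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    optimizer.zero_grad()
```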
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20116/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20116", "html_url": "https://github.com/huggingface/transformers/pull/20116", "diff_url": "https://github.com/huggingface/transformers/pull/20116.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20116.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20115
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20115/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20115/comments
https://api.github.com/repos/huggingface/transformers/issues/20115/events
https://github.com/huggingface/transformers/issues/20115
1,439,006,545
I_kwDOCUB6oc5VxX9R
20,115
Roberta model seemingly unable to take embeddings as input
{ "login": "shahbuland", "id": 44281577, "node_id": "MDQ6VXNlcjQ0MjgxNTc3", "avatar_url": "https://avatars.githubusercontent.com/u/44281577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shahbuland", "html_url": "https://github.com/shahbuland", "followers_url": "https://api.github.com/users/shahbuland/followers", "following_url": "https://api.github.com/users/shahbuland/following{/other_user}", "gists_url": "https://api.github.com/users/shahbuland/gists{/gist_id}", "starred_url": "https://api.github.com/users/shahbuland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shahbuland/subscriptions", "organizations_url": "https://api.github.com/users/shahbuland/orgs", "repos_url": "https://api.github.com/users/shahbuland/repos", "events_url": "https://api.github.com/users/shahbuland/events{/privacy}", "received_events_url": "https://api.github.com/users/shahbuland/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Even if you provide input embeddings, the RoBERTa model will still add the token type embeddings and position embeddings.\r\n\r\nAs for the exact reason you get an error, we would need a full reproducer to be able to investigate.", "> Even if you provide input embeddings, the RoBERTa model will still add the token type embeddings and position embeddings.\r\n\r\nIs there any way to circumvent this? In my use-case position embeddings would probably be harmful to training.\r\n\r\n\r\n\r\n\r\n", "You'll need to modify the model code for that.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
### System Info Python 3.9 on linux, transformers 4.24.0 ### Who can help? @lysandrejik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Trying to implement cross encoder with RoBERTa. I embed token ids separate from the model forward call: ```python model = AutoModel.from_pretrained("roberta-base") text_features : TensorType["batch", "sequence_length", "d_model"] = model.embeddings(text.input_ids) embs = .... # same shape as text features ``` AFAIK this incorporates word embeddings, positional embeddings and special token embeddings, since those are all in the models embeddings. Sequence length here is max_posititon_embeddings from the models config. I then add on ViT embeddings to start of sequence and truncate accordingly (shape of tensor stays the same) ```python out = model( inputs_embeds=embs, attention_mask=attn_mask, output_hidden_states=True, return_dict=True ) ``` This gives error: ```python IndexError: index out of range in self ``` The traceback (image linked [here](https://media.discordapp.net/attachments/738882678711123980/1039276109423984711/unknown.png?width=719&height=146)) suggests it is trying to perform word and position embeddings again, and an error is occurring when position embeddings are called. If I'm understanding correctly, since I already did embeddings, and am providing them rather than tokens, it should not be calling on position embeddings again, yes? Running the same code with "bert-base-uncased" functions as expected with no errors. ### Expected behavior Expect a successful forward call through model using the embeddings provided
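For reference, the `IndexError` is consistent with how RoBERTa derives position ids from `inputs_embeds`: they start at `padding_idx + 1` rather than 0. A small sketch of the arithmetic, assuming `roberta-base`'s config values (`max_position_embeddings=514`, `padding_idx=1`):

```python
import torch

max_position_embeddings = 514  # size of the position embedding table
padding_idx = 1                # RoBERTa's pad token id

seq_len = max_position_embeddings  # embeddings built for the full table length
position_ids = torch.arange(padding_idx + 1, seq_len + padding_idx + 1)
print(position_ids.max().item())  # 515, but the largest valid index is 513
# nn.Embedding(max_position_embeddings, ...) therefore raises IndexError
```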
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20115/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20114
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20114/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20114/comments
https://api.github.com/repos/huggingface/transformers/issues/20114/events
https://github.com/huggingface/transformers/pull/20114
1,438,985,014
PR_kwDOCUB6oc5CW-gu
20,114
Add CV + audio labels to glossary
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20114). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,668
1,668
MEMBER
null
This is a [follow-up PR](https://github.com/huggingface/transformers/pull/20051#discussion_r1013978694) to expand the `labels` definition to include expected labels for model heads from other modalities.
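One concrete instance of the convention being documented — image classification heads expect one class index per image, so the model can compute the loss internally (checkpoint name and shapes are illustrative):

```python
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
pixel_values = torch.randn(1, 3, 224, 224)  # (batch_size, channels, height, width)
labels = torch.tensor([0])                  # one class index per image, shape (batch_size,)
outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.loss)  # cross-entropy computed inside the classification head
```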
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20114/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20114", "html_url": "https://github.com/huggingface/transformers/pull/20114", "diff_url": "https://github.com/huggingface/transformers/pull/20114.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20114.patch", "merged_at": 1668008416000 }
https://api.github.com/repos/huggingface/transformers/issues/20113
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20113/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20113/comments
https://api.github.com/repos/huggingface/transformers/issues/20113/events
https://github.com/huggingface/transformers/pull/20113
1,438,937,690
PR_kwDOCUB6oc5CW0H3
20,113
Adapt has_labels test when no labels were found
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20113). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,667
1,667
COLLABORATOR
null
# What does this PR do? As #20105 highlights, the new way we infer default label names for models might not work for models outside of Transformers. This PR reverts to the old default for non-`PreTrainedModel` models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20113/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20113", "html_url": "https://github.com/huggingface/transformers/pull/20113", "diff_url": "https://github.com/huggingface/transformers/pull/20113.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20113.patch", "merged_at": 1667933585000 }
https://api.github.com/repos/huggingface/transformers/issues/20112
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20112/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20112/comments
https://api.github.com/repos/huggingface/transformers/issues/20112/events
https://github.com/huggingface/transformers/pull/20112
1,438,892,616
PR_kwDOCUB6oc5CWqS7
20,112
Pytorch type hints
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.", "Sure, I will add all type hints for pytorch and ping you", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.", "@Rocketknight1 I think I've covered all PyTorch models!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20112). All of your documentation changes will be reflected on that endpoint.", "@IMvision12 Amazing, thank you! I just finished reviewing and it all looks good, so I'm going to merge now. Once we have full type hint coverage we can add tests to ensure that it stays that way in future, and then start using the type hints in library checking, so this should help a lot!" ]
1,667
1,688
1,668
CONTRIBUTOR
null
# What does this PR do? Added Type hints ## Who can review? @Rocketknight1
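For context, the hint style added across the PyTorch models looks roughly like the toy example below (illustrative only; the real signatures live in each `modeling_*.py` file):

```python
from typing import Optional, Tuple, Union

import torch
from torch import nn


class ToyModel(nn.Module):
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[torch.Tensor], dict]:
        hidden_states = torch.zeros(1)
        return {"last_hidden_state": hidden_states} if return_dict else (hidden_states,)
```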
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20112/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20112", "html_url": "https://github.com/huggingface/transformers/pull/20112", "diff_url": "https://github.com/huggingface/transformers/pull/20112.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20112.patch", "merged_at": 1668429558000 }
https://api.github.com/repos/huggingface/transformers/issues/20111
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20111/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20111/comments
https://api.github.com/repos/huggingface/transformers/issues/20111/events
https://github.com/huggingface/transformers/pull/20111
1,438,846,831
PR_kwDOCUB6oc5CWgp-
20,111
AutoImageProcessor
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20111). All of your documentation changes will be reflected on that endpoint.", "Still reviewing, but in `processing_auto.py/from_pretrained`, the beginning part\r\n\r\nhttps://github.com/huggingface/transformers/blob/1ebc7bb995c5e43961a7c8079ca3bf29f06f2411/src/transformers/models/auto/processing_auto.py#L197\r\n\r\naround this, there is no `ImageProcessingMixin`. I feel this is a miss and should appear here?\r\n\r\n", "> Still reviewing, but in `processing_auto.py/from_pretrained`, the beginning part\r\n> \r\n> https://github.com/huggingface/transformers/blob/1ebc7bb995c5e43961a7c8079ca3bf29f06f2411/src/transformers/models/auto/processing_auto.py#L197\r\n> \r\n> around this, there is no `ImageProcessingMixin`. I feel this is a miss and should appear here?\r\n\r\n@ydshieh Yes, you're right. I've added a check now [here](https://github.com/amyeroberts/transformers/blob/2e08d16f7758889fe0cb203091d292c968f067b0/src/transformers/models/auto/processing_auto.py#L194). Can you confirm if this matches with what you think should have been added? ", "> Can you confirm if this matches with what you think should have been added?\r\n\r\nYes!\r\n\r\n", "One comment (no need to be done in this PR): I think it would be great if we can remove the `feature_extractor_type` key after loading the image processor.\r\n\r\n```python\r\nfrom transformers import CLIPModel, AutoProcessor, CLIPProcessor, CLIPImageProcessor, CLIPFeatureExtractor, AutoImageProcessor\r\n\r\np = CLIPImageProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\nprint(p.feature_extractor_type)\r\np.save_pretrained(\"temp-clip\")\r\n```\r\ngives\r\n```bash\r\nCLIPFeatureExtractor\r\n```\r\non the terminal, and in the output file `preprocessor_config.json`, we have\r\n\r\n```python\r\n \"feature_extractor_type\": \"CLIPFeatureExtractor\",\r\n \"image_processor_type\": \"CLIPImageProcessor\",\r\n```" ]
1,667
1,669
1,667
COLLABORATOR
null
# What does this PR do? Adds the `AutoImageProcessor` class and makes model image processors available to import. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
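Intended usage mirrors the existing `AutoFeatureExtractor` API; a minimal sketch (the checkpoint name is just an example):

```python
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
image = Image.new("RGB", (224, 224))  # dummy image for illustration
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```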
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20111/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20111", "html_url": "https://github.com/huggingface/transformers/pull/20111", "diff_url": "https://github.com/huggingface/transformers/pull/20111.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20111.patch", "merged_at": 1667937282000 }
https://api.github.com/repos/huggingface/transformers/issues/20110
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20110/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20110/comments
https://api.github.com/repos/huggingface/transformers/issues/20110/events
https://github.com/huggingface/transformers/pull/20110
1,438,824,710
PR_kwDOCUB6oc5CWbyc
20,110
Fix AutoTokenizer with subfolder passed
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,667
1,667
COLLABORATOR
null
# What does this PR do? As reported in #20108, the `AutoTokenizer` API does not work properly with the `subfolder` argument, because the argument is not consumed by the `get_tokenizer_config` function. This PR fixes that.
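With the fix, the reproducer from the linked issue should work, since `tokenizer_config.json` is now resolved inside the subfolder instead of falling back to a non-existent `config.json`:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigcode/tokenizer", subfolder="bytelevel-dropout-0.1-50K")
```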
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20110/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20110", "html_url": "https://github.com/huggingface/transformers/pull/20110", "diff_url": "https://github.com/huggingface/transformers/pull/20110.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20110.patch", "merged_at": 1667861987000 }
https://api.github.com/repos/huggingface/transformers/issues/20109
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20109/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20109/comments
https://api.github.com/repos/huggingface/transformers/issues/20109/events
https://github.com/huggingface/transformers/pull/20109
1,438,802,355
PR_kwDOCUB6oc5CWXAZ
20,109
docs: Replace awkward `timm` link with the expected one
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,667
1,667
MEMBER
null
# What does this PR do? Replace https://github.com/rwightman/pytorch-image-models/tree/master/timm with https://github.com/rwightman/pytorch-image-models. ## Reasoning 1. The URL has the hardcoded `master` branch, despite the `timm` branch being renamed to `main` nowadays. 2. The URL points to the `timm` folder for some reason, when linking to the root, i.e. where a README is visible, would be much more sensible. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger --- I just keep running into small issues here and there! More than happy to help fix them, though. - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20109/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20109", "html_url": "https://github.com/huggingface/transformers/pull/20109", "diff_url": "https://github.com/huggingface/transformers/pull/20109.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20109.patch", "merged_at": 1667847460000 }
https://api.github.com/repos/huggingface/transformers/issues/20108
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20108/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20108/comments
https://api.github.com/repos/huggingface/transformers/issues/20108/events
https://github.com/huggingface/transformers/issues/20108
1,438,797,834
I_kwDOCUB6oc5VwlAK
20,108
Using `subfolder` with AutoTokenizer.from_pretrained doesn't work
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, thanks for the clear issue. The `subfolder` argument is not properly passed along in the utils that get the tokenizer config, and since that tokenizer config is then not found, `AutoTokenizer` then tries to find a config (which does not exist here).\r\n\r\nWill send a fix shortly." ]
1,667
1,667
1,667
CONTRIBUTOR
null
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? Not sure; tagging @lvwerra and @osanseviero ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using `subfolder` when loading a trained tokenizer seems to fail because it expects a `config.json` whereas saving a tokenizer results in a `tokenizer_config.json`. `tok = AutoTokenizer.from_pretrained("cakiki/bytelevel-dropout-0.1-50K")` This loads **successfully** but the following **fails** even though it's the same exact tokenizer files: `tok = AutoTokenizer.from_pretrained("bigcode/tokenizer", subfolder="bytelevel-dropout-0.1-50K")` The latter results in the following trace: ```python --------------------------------------------------------------------------- HTTPError Traceback (most recent call last) File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:213, in hf_raise_for_status(response, endpoint_name) 212 try: --> 213 response.raise_for_status() 214 except HTTPError as e: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self) 1020 if http_error_msg: -> 1021 raise HTTPError(http_error_msg, response=self) HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/bigcode/tokenizer/resolve/main/bytelevel-dropout-0.1-50K/config.json The above exception was the direct cause of the following exception: EntryNotFoundError Traceback (most recent call last) File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 407 try: 408 # Load from URL or cache if already cached --> 409 resolved_file = hf_hub_download( 410 path_or_repo_id, 411 filename, 412 subfolder=None if len(subfolder) == 0 else subfolder, 413 revision=revision, 414 cache_dir=cache_dir, 415 user_agent=user_agent, 416 force_download=force_download, 417 proxies=proxies, 418 resume_download=resume_download, 419 use_auth_token=use_auth_token, 420 local_files_only=local_files_only, 421 ) 423 except RepositoryNotFoundError: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/file_download.py:1053, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout) 1052 try: -> 1053 metadata = get_hf_file_metadata( 1054 url=url, 1055 use_auth_token=use_auth_token, 1056 proxies=proxies, 1057 timeout=etag_timeout, 1058 ) 1059 except EntryNotFoundError as http_error: 1060 # Cache the non-existence of the file and 
raise File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/file_download.py:1359, in get_hf_file_metadata(url, use_auth_token, proxies, timeout) 1350 r = _request_wrapper( 1351 method="HEAD", 1352 url=url, (...) 1357 timeout=timeout, 1358 ) -> 1359 hf_raise_for_status(r) 1361 # Return File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:231, in hf_raise_for_status(response, endpoint_name) 226 message = ( 227 f"{response.status_code} Client Error." 228 + "\n\n" 229 + f"Entry Not Found for url: {response.url}." 230 ) --> 231 raise EntryNotFoundError(message, response) from e 233 elif error_code == "RepoNotFound" or response.status_code == 401: EntryNotFoundError: 404 Client Error. (Request ID: vwMqo-SJymZZUmQ2bMoXA) Entry Not Found for url: https://huggingface.co/bigcode/tokenizer/resolve/main/bytelevel-dropout-0.1-50K/config.json. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) Cell In [18], line 1 ----> 1 tok = AutoTokenizer.from_pretrained("bigcode/tokenizer", subfolder="bytelevel-dropout-0.1-50K") File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py:566, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 564 if config_tokenizer_class is None: 565 if not isinstance(config, PretrainedConfig): --> 566 config = AutoConfig.from_pretrained( 567 pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs 568 ) 569 config_tokenizer_class = config.tokenizer_class 570 if hasattr(config, "auto_map") and "AutoTokenizer" in config.auto_map: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:770, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 768 kwargs["name_or_path"] = pretrained_model_name_or_path 769 trust_remote_code = kwargs.pop("trust_remote_code", False) --> 770 config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 771 if "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]: 772 if not trust_remote_code: File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/configuration_utils.py:558, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 556 original_kwargs = copy.deepcopy(kwargs) 557 # Get config dict associated with the base config file --> 558 config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) 559 if "_commit_hash" in config_dict: 560 original_kwargs["_commit_hash"] = config_dict["_commit_hash"] File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/configuration_utils.py:613, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 609 configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME) 611 try: 612 # Load from local folder or from cache or download from model Hub and cache --> 613 resolved_config_file = cached_file( 614 pretrained_model_name_or_path, 615 configuration_file, 616 cache_dir=cache_dir, 617 force_download=force_download, 618 proxies=proxies, 619 resume_download=resume_download, 620 local_files_only=local_files_only, 621 use_auth_token=use_auth_token, 622 user_agent=user_agent, 623 revision=revision, 624 subfolder=subfolder, 625 
_commit_hash=commit_hash, 626 ) 627 commit_hash = extract_commit_hash(resolved_config_file, commit_hash) 628 except EnvironmentError: 629 # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to 630 # the original exception. File /mnt/1da05489-3812-4f15-a6e5-c8d3c57df39e/env/lib/python3.9/site-packages/transformers/utils/hub.py:454, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 452 if revision is None: 453 revision = "main" --> 454 raise EnvironmentError( 455 f"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout " 456 f"'[https://huggingface.co/{path_or_repo_id}/{](https://huggingface.co/%7Bpath_or_repo_id%7D/%7Brevision)[revision](https://huggingface.co/%7Bpath_or_repo_id%7D/%7Brevision)}' for available files." 457 ) 458 except HTTPError as err: 459 # First we try to see if we have a cached version (not up to date): 460 resolved_file = try_to_load_from_cache(path_or_repo_id, full_filename, cache_dir=cache_dir, revision=revision) OSError: bigcode/tokenizer does not appear to have a file named bytelevel-dropout-0.1-50K/config.json. Checkout 'https://huggingface.co/bigcode/tokenizer/main' for available files. ``` ### Expected behavior The tokenizer to load from the subfolder
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20108/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20108/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20107
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20107/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20107/comments
https://api.github.com/repos/huggingface/transformers/issues/20107/events
https://github.com/huggingface/transformers/pull/20107
1,438,555,799
PR_kwDOCUB6oc5CViZQ
20,107
Fix tapas scatter
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,668
1,668
COLLABORATOR
null
# What does this PR do? Fix tapas scatter
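For context — TAPAS historically relied on the optional `torch-scatter` package for segment reductions; a hypothetical illustration of the same reduction with native PyTorch (requires PyTorch >= 1.12; this is not the PR's actual diff):

```python
import torch

values = torch.tensor([1.0, 2.0, 3.0, 4.0])
segment_ids = torch.tensor([0, 0, 1, 1])
# Segment-mean without torch-scatter: reduce values into per-segment buckets
out = torch.zeros(2).scatter_reduce(0, segment_ids, values, reduce="mean", include_self=False)
print(out)  # tensor([1.5000, 3.5000])
```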
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20107/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20107", "html_url": "https://github.com/huggingface/transformers/pull/20107", "diff_url": "https://github.com/huggingface/transformers/pull/20107.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20107.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20106
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20106/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20106/comments
https://api.github.com/repos/huggingface/transformers/issues/20106/events
https://github.com/huggingface/transformers/pull/20106
1,438,537,763
PR_kwDOCUB6oc5CVekZ
20,106
Give `t5` the `prune_heads`
{ "login": "CaffreyR", "id": 84232793, "node_id": "MDQ6VXNlcjg0MjMyNzkz", "avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaffreyR", "html_url": "https://github.com/CaffreyR", "followers_url": "https://api.github.com/users/CaffreyR/followers", "following_url": "https://api.github.com/users/CaffreyR/following{/other_user}", "gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions", "organizations_url": "https://api.github.com/users/CaffreyR/orgs", "repos_url": "https://api.github.com/users/CaffreyR/repos", "events_url": "https://api.github.com/users/CaffreyR/events{/privacy}", "received_events_url": "https://api.github.com/users/CaffreyR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ArthurZucker ,let's try this one!\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20106). All of your documentation changes will be reflected on that endpoint.", "Hi @ArthurZucker , I have passes the test!\r\n", "Hi @ArthurZucker , this version is clean. Could you please give it a review?", "Gently Pin @patrickvonplaten \r\nMaybe you are also interested. Many thanks!", "Hi @ArthurZucker , could you please give me a review? Many thanks!", "> Thanks for the PR! We'll need tests before we can merge this.\r\n> \r\n> Could you set this to `True`:\r\n> \r\n> https://github.com/huggingface/transformers/blob/07b8f249cdb07a5e6697b379cc6db705a9eb15f1/tests/models/t5/test_modeling_t5.py#L521\r\n> \r\n> and then see if the tests all pass?\r\n\r\nSure, I have modified it manually in my repo", "Hi @patrickvonplaten , it seems some of the test failed, because I did not modify the `T5stack` but `T5forconditionalgeneration`", "> Hi @patrickvonplaten , it seems some of the test failed, because I did not modify the `T5stack` but `T5forconditionalgeneration`\r\n\r\nCould you try to make the tests work - we need those to pass before we're able to merge this PR :-) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
# What does this PR do? Give `t5` the `prune_heads` #19975 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue #19625) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @patrickvonplaten @patil-suraj Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
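For reference, the generic `PreTrainedModel.prune_heads` contract this PR aims to support for T5 takes a `{layer_index: [head_indices]}` mapping; a sketch of the intended call (not guaranteed to work on releases without this change):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.prune_heads({0: [0, 1]})  # prune heads 0 and 1 of the first layer
```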
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20106/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20106", "html_url": "https://github.com/huggingface/transformers/pull/20106", "diff_url": "https://github.com/huggingface/transformers/pull/20106.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20106.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20105
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20105/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20105/comments
https://api.github.com/repos/huggingface/transformers/issues/20105/events
https://github.com/huggingface/transformers/issues/20105
1,438,519,287
I_kwDOCUB6oc5Vvg_3
20,105
default value for default_label_names incorrectly causes has_label in trainer.py to be true
{ "login": "seirasto", "id": 4257308, "node_id": "MDQ6VXNlcjQyNTczMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/4257308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seirasto", "html_url": "https://github.com/seirasto", "followers_url": "https://api.github.com/users/seirasto/followers", "following_url": "https://api.github.com/users/seirasto/following{/other_user}", "gists_url": "https://api.github.com/users/seirasto/gists{/gist_id}", "starred_url": "https://api.github.com/users/seirasto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seirasto/subscriptions", "organizations_url": "https://api.github.com/users/seirasto/orgs", "repos_url": "https://api.github.com/users/seirasto/repos", "events_url": "https://api.github.com/users/seirasto/events{/privacy}", "received_events_url": "https://api.github.com/users/seirasto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for opening this issue. Could you please explain which code sample stopped working for you with this change?", "This is using `do_eval` in our own code `run_mrc.py` in our PrimeQA code base: https://github.com/primeqa/primeqa/tree/main/primeqa/mrc However, I believe this issue will happen in any instance where labels are not provided through `self.arg.label_names` and `label` is not found in `self.model.__class__`", "This is too vague for us to act upon. Do you have a reproducer of the problem?", "I'm not sure what the best way is to reproduce it for you. It occurs because we have our own model wrapper. I see that in the signature parameters there is a `labels` parameter, but this does not exist in our signature. Perhaps it is expected to always be there? \r\n\r\nUsing AutoModelForSequenceClassification:\r\n```\r\nmappingproxy(OrderedDict([('self', <Parameter \"self\">), ('input_ids', <Parameter \"input_ids: Union[torch.LongTensor, NoneType] = None\">), ('attention_mask', <Parameter \"attention_mask: Union[torch.FloatTensor, NoneType] = None\">), ('token_type_ids', <Parameter \"token_type_ids: Union[torch.LongTensor, NoneType] = None\">), ('position_ids', <Parameter \"position_ids: Union[torch.LongTensor, NoneType] = None\">), ('head_mask', <Parameter \"head_mask: Union[torch.FloatTensor, NoneType] = None\">), ('inputs_embeds', <Parameter \"inputs_embeds: Union[torch.FloatTensor, NoneType] = None\">), ('labels', <Parameter \"labels: Union[torch.LongTensor, NoneType] = None\">), ('output_attentions', <Parameter \"output_attentions: Union[bool, NoneType] = None\">), ('output_hidden_states', <Parameter \"output_hidden_states: Union[bool, NoneType] = None\">), ('return_dict', <Parameter \"return_dict: Union[bool, NoneType] = None\">)]))\r\n```\r\n\r\nOur model wrapper: \r\n\r\n```\r\nmappingproxy(OrderedDict([('self', <Parameter \"self\">), ('input_ids', <Parameter \"input_ids=None\">), ('attention_mask', <Parameter \"attention_mask=None\">), ('token_type_ids', <Parameter \"token_type_ids=None\">), ('position_ids', <Parameter \"position_ids=None\">), ('head_mask', <Parameter \"head_mask=None\">), ('inputs_embeds', <Parameter \"inputs_embeds=None\">), ('output_attentions', <Parameter \"output_attentions=None\">), ('output_hidden_states', <Parameter \"output_hidden_states=None\">), ('return_dict', <Parameter \"return_dict=None\">), ('kwargs', <Parameter \"**kwargs\">)]))\r\n```", "Can you try if the PR above would solve your problem?", "That did not work. Our model inherits from PreTrainedModel so it is still an instance of it.", "Ah in this case, it will need to have the label names in the signature like the models in the library. Or you can always pass along the `label_names` you want in the training arguments to override the defaults.", "Thanks! I am using the training arg to override the default right now. I'm just wondering whether when `self.label_names=[]` causes `has_labels` to be `True` is the expected behavior.\r\n\r\n```\r\nhas_labels = all(inputs.get(k) is not None for k in self.label_names)\r\n``` ", "Ah yes, I understand your concern better, thanks! Will adapt the PR.", "Can you try again the PR above?", "This worked! Thank you!!", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Fixed by #20113" ]
1,667
1,670
1,670
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-4.18.0-372.26.1.el8_6.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @sgugger This is a bug in `trainer.py` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I've upgraded transformers from 4.17.0 to 4.24.0. In the earlier version(s), the default value for `default_label_names` was set to `["labels"]`. In the current version the default is set to an empty array `[]` 4.17.0: ```python default_label_names = ( ["start_positions", "end_positions"] if type(self.model).__name__ in MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES.values() else ["labels"] ) ``` 4.24.0: ```python default_label_names = find_labels(self.model.__class__) ``` `default_label_names` is used to set `self.label_names` if no labels are supplied: ```python self.label_names = default_label_names if self.args.label_names is None else self.args.label_names ``` It will then be used to decide the value of `has_labels` as `True` or `False`: ```python has_labels = all(inputs.get(k) is not None for k in self.label_names) ``` In 4.17.0 with the default value of `["labels"]` has_labels was `False`. In 4.24.0 with the default value of `[]` has_labels is `True` causing the program to fail in `compute_loss` during eval: ``` E ValueError: The model did not return a loss from the inputs, only the following keys: start_logits,end_logits,target_type_logits. For reference, the inputs it received are input_ids,attention_mask. ``` ### Expected behavior The default behavior when no labels are provided should cause `has_labels` to be `False`
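The flip in `has_labels` follows directly from Python's vacuous-truth semantics for `all()` over an empty iterable, which is why the empty default behaves differently from `["labels"]`:

```python
inputs = {"input_ids": [0], "attention_mask": [1]}  # no labels provided

label_names = ["labels"]  # old default
print(all(inputs.get(k) is not None for k in label_names))  # False

label_names = []  # new default when find_labels() finds nothing
print(all(inputs.get(k) is not None for k in label_names))  # True (vacuously)
```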
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20105/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20104
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20104/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20104/comments
https://api.github.com/repos/huggingface/transformers/issues/20104/events
https://github.com/huggingface/transformers/pull/20104
1,438,503,196
PR_kwDOCUB6oc5CVXD9
20,104
Adding chunking for whisper (all seq2seq actually). Very crude matching algorithm.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.", "> Thanks for working on this. Not sure if the PR is ready for (at least core maintainer) review yet?\r\n\r\nYup sorry it was slightly early for you.\r\nThe core idea is still there.\r\n\r\nWe chunk with stride. and we make a hopeful stitch to find the longest sequence from all the subsequences.\r\n\r\nPROs:\r\n- It's extremely generic.\r\n- It should work in a lot of scenarios including repeating tokens\r\n\r\n\r\nCONs:\r\n- It's technically unsound. Meaning if the model infers widely varying tokens, there's no way to reconstruct what the model would actually predict on the whole file.\r\n- I expect it can fail spectacularly in well crafted examples where someone repeats the same word over and over, where the longest match will be MUCH longer than the original voices thing.\r\n\r\n\r\n", "As we discussed offline with @Narsil , will be implementing the `find_conmmon_sequence` in `O(N)` 😉 Will open a new PR! ", "> As we discussed offline with @Narsil , will be implementing the `find_conmmon_sequence` in `O(N)` wink Will open a new PR!\r\n\r\nSeems it's going to be complex because of fault tolerance which does seem to be important.\r\n\r\nYou can try doing something like\r\n```python\r\n#!wget https://www.archive.org/download/around_world_80_days_mfs_librivox/around_world_in_80_days_01_verne.mp3\r\nfrom transformers import pipeline\r\n\r\nspeech_recognizer = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model=\"openai/whisper-small\",\r\n framework=\"pt\",\r\n batch_size=2,\r\n device=0,\r\n chunk_length_s=30,\r\n generate_kwargs={\"max_new_tokens\": 1024},\r\n)\r\n\r\nout = speech_recognizer([\"around_world_in_80_days_01_verne.mp3\"])\r\nprint(out)\r\n```\r\n\r\n\r\nThis will required some suboptimal stitches to work.", "@sgugger it's now ready for review.\r\n\r\nThe TODO is left intentionnally. It might really become relevant on hour+ long files where the current naive algorithm might become too slow. However the code is likely to be orders of magnitude more complex (if a O(n) solution exists, I'm pretty sure we could find an expected O(n) algorithm, but not sure about worst case).\r\nThe current code works correctly, has the fault tolerance we need to be useful.\r\n\r\nI added a warning because the current code **Will** fail in some know circumstances. I updated the PR description to reflect those. If those tradeoffs are not good enough, I'm happy to not merge this PR in this state.\r\n\r\nThe only other option I see is whisper specific with timestamps and it would only alleviate *some* of the issues. ", "Before merging, would love to try a little bit, otherwise LGTM (looking for a solution to the faults) ", "@ArthurZucker What are your conclusions ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.", "I think that including timestamp tokens in the process could help with the error tolerance as they are consistently predicted at the end of pauses in the speech. If the stride is big enough not at least include pauses in speech, it boils down to matching these. \r\nMoreover, given that we know approximately the time between each tokens, we can use this information as some kind of guiding information. 
I am working on something, but we can merge for now and have an improved PR later on 😉 ", "@sgugger would like your opinion on this if possible.\r\n\r\nThe results are pretty decent imo on regular speech. I'm still mentioning the caveats because they are real.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20104). All of your documentation changes will be reflected on that endpoint.", "Has this been added to the current transformers version? I am getting the \"ValueError: `chunk_length_s` is only valid for CTC models, use other chunking options for other models\".", "There hasn't been a release yet, you must use `main` if you want to use it now.", "Using chunk_length_s=10 without stride_length_s=(4, 2)\r\nloses a rather large part of the transcription. It works pretty nicely with stride :) but I get a lot of repetitions despite setting condition_on_previous_text=0\r\n\r\nIs there an alternative way to transcribe large audio files when I am using a fine-tuned whisper model?", "> loses a rather large part of the transcription. It works pretty nicely with stride :) but I get a lot of repetitions despite setting condition_on_previous_text=0\r\n\r\n`condition_on_previous_text`? What is that?\r\n\r\nCould you provide an example with the repetitions? There might be some optimizations to be made on the tying of the chunks.\r\nAs I mentioned in this PR, the tying of inferred audio can definitely create repetitions, but with better examples, we might be able to figure out better heuristics.", "condition_on_previous_text: bool\r\n if True, the previous output of the model is provided as a prompt for the next window;\r\n disabling may make the text inconsistent across windows, but the model becomes less prone to\r\n getting stuck in a failure loop, such as repetition looping or timestamps going out of sync.\r\n\r\nI think it's true by default and it uses something like GPT to \"verify\" the transcription. When I simply use whisper with the medium or large model, I prefer to set it to False\r\n\r\nI get a lot of repetitions even with small samples, like the audio from this commercial\r\n\r\nhttps://www.youtube.com/watch?v=LkllgKVgz8o\r\n\r\nor this trailer\r\n\r\nhttps://www.youtube.com/watch?v=xncMdIGR2pk\r\n\r\n\r\nI have [fine-tuned whisper for Greek](https://huggingface.co/emilios/whisper-medium-el) and I am trying to use it with the following lines (after of course loading transformers and the model, etc)\r\n\r\n\r\nfrom transformers import pipeline\r\n\r\ntranscript = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model = model,\r\n feature_extractor = feature_extractor,\r\n tokenizer=tokenizer,\r\n framework=\"pt\",\r\n batch_size=16,\r\n device='cuda:0',\r\n #generate_kwargs={\"max_new_tokens\": 1024},\r\n #max_new_tokens = 1024,\r\n chunk_length_s=10, \r\n stride_length_s=(4, 2), # must have with chunk_length_s\r\n condition_on_previous_text=0,\r\n compression_ratio_threshold=2.5\r\n)", "hi, I think `condition_on_previous_text` (and `initial_prompt`) is a decoding option used in the original OpenAI version, not (yet?) implemented in HF's version. cc @ArthurZucker ", "We use a different decoding strategy here, because `openai/whisper` is not stateless, which is kind of a requirement of `pipeline`. 
(It means you can actually do batching, which is not possible with original whisper.)", "Did you try using `chunk_length_s=30`? By default it uses `1/6=5s` of chunking on each side, which should be plenty. \r\n\r\nI'm getting for the first example: \r\n```\r\n{'text': \" The dance is like life. You don't need to know the steps. You just need to hear the beat of your heart. You don't need rules to make the right move. Your consciousness is enough. Zagori. We have the good in us.\"}\r\n```\r\nWhich seems correct to me.", "```\r\n{'text': ' The test is ready. Rachel wrote Ross a letter and demanded he read it before they got back together. How many pages was that letter? 18 pages! 18 pages. Front and back! Front and back is correct! Wait, wait, go one more time! Oh my god. Here we go. Where\'s the tissue box? The cast of Friends. Wow. It\'s cool. her lines written on the table? We\'ve literally just slipped right back. We regret. We have such a bond from this show. Were Ross and Rachel on a break? Yes. Yes. Yes. Yes. Bullshit. table read, that\'s the first time I laid eyes on any of you. Everyone was so perfectly cast. Yeah. This is from the one where everyone finds out. I remember I went to the producer of the show I was on and he he said, \"That show\'s not gonna make you a star.\" [laughing] I remember one time I happened to have the news on, and on the TV was an aerial shot of each of our houses. - Oh, jeez. - And I remember looking at it, going, \"What the--?\" My roof is a mess. [laughing] It was an incredible time. We became best friends. Yeah, I\'m going to cry now. When I watch the episodes, I\'m laughing out loud, because you all make me laugh so hard. I know you know how big the show is. What you have given so many people is an experience of huge comfort. like we had these friends. I love you guys so much.'}\r\n```\r\n\r\nFor the second.", "Yes that was good\r\nWhat value should I use for stride_length_s with chunk 30?\r\n\r\nCan you please tell me if this is the only way to transcribe large audio files with pipeline?\r\n\r\nThank you all :) ", "> What value should I use for stride_length_s with chunk 30?\r\n\r\nThe stride defaults are `chunk_length_s / 6` on each side, so here, 5s, 5s. It's important to have something significant on both sides I think (more overlap will reduce the chances for the algorithm to get it wrong).\r\n", "when I ! pip install git+https://github.com/openai/whisper.git and import whisper, all is fine with the medium model\r\n\r\nI have tried pipeline with\r\n\r\n chunk_length_s=30,\r\n stride_length_s=(5, 5),\r\n\r\nand still I get repetitions with openai/whisper-medium, openai/whisper-large and emilios/whisper-medium-el \r\nI've tried other bigger videos (well, ok, audio) and it is not working as it's supposed to :(\r\n\r\n", "I've just noticed that the translation is ok, but the transcription in the original language has repetitions\r\n\r\nhttps://www.youtube.com/watch?v=e_eCryyPRus\r\n\r\nmodel.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language = \"el\", task = \"transcribe\")\r\n\r\n\r\ngreek transcript with repetitions, removed\r\n\r\nmodel.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language = \"el\", task = \"translate\")\r\n\r\ntranscript translated into english, removed", "Is it possible that the model is the cause, then?\r\n\r\nML generative models are known to be repetitive. 
And the kin dof repetition I'm seeing here really looks like bad model generation more than erroneous stitching.\r\n", "Nope, I think the translation engine fixes (or hides) the repetitive phrases ", "I confirm it is the model.\r\n\r\nTake the audio of the video you linked.\r\n\r\n```\r\nffmpeg -ss 140 -i out.mp3 -c copy -t 20 out_repete.mp3\r\n```\r\n\r\nThen do the inference:\r\n\r\n```python\r\nfrom transformers import pipeline, AutoProcessor\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"emilios/whisper-medium-el\")\r\npipe = pipeline(model=\"emilios/whisper-medium-el\")\r\npipe.model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language=\"el\", task=\"transcribe\")\r\n\r\nout = pipe(\"out_repete.mp3\")\r\nprint(out)\r\n```\r\n\r\nAnd you will see that the model goes looping all by itself. This is not the chunking's doing.\r\n", "But I get the same problem with openai model too\r\n\r\n\r\n[{'text': ' [μουσική] Εμείς σήμερα, εγώ του λαμβάνω,πικά, αλλά φαντάζομαι όσοι από μας προσπαθούν να σκεφτούν σοβαρά, μέσα σε όλο αυτό το χάος του ιστορικού υλικού που έχουμε μπροστά μας, επιλέγουμε μια παράδοση. Αυτό δεν σημαίνει ότι την επιλέγουμε για να σημαίνουμε δούλοι.. Επιλέγουμε ακριβώς την παράδοση εκείνη, δηλαδή αυτήν που ονομάζω Έλληνοδυτική, μέσα στην οποία η αμφισβήτηση της παράδοσης είναι ένα βασικό στοιχείο. Η αμφισβήτηση όχι για την ευχαρίστηση της αμφισβήτησης, Η αμφισβήτηση όταν υπάρχει λόγος, η δυνατότητα της αμφισβήτησης, η δυνατότητα του να σκεφτώ αλλιώς, του να μιλήσω αλλιώς από τη σκέφτετη. Η πλειοψηφία, η εκκλησία, το κράτος, το κόμμα κτλ. Δεν είναι έτσι; Ο δάσκαλος, οι γονείς ενδεχομένως. Και από εκεί και πέρα η δυνατότητα να βάλω σαν άτομο ή να βάλει μια κοινωνική ομάδα ή μια πολιτική κίνηση ερωτήματα σχετικά με το αν η σημερινή θέσμη της κοινωνίας είναι δίκαιη ή δεν είναι δίκαιη, εάν η ισότητα εντός εισαγωγικών, την οποία επαγγέλλεται το Σύνταγμα και ο νόμος για τους πολίτες, τα βασικά χαρακτηριστικά αυτής της παράδοσης, πιο όχι άλλο νόμο. Κάθε κοινωνία δημιουργεί τους θεσμούς της, αλλά η ιδέα ότι η θεσμία αυτή είναι η δική της δημιουργία ακριβώς δε είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρκετά. Είναι αρονομία της κοινωνίας δεν είναι μόνο και δεν είναι τόσο η εκμετάλλευση, η καταπίεση, η υπάρξη μιας εξουσίας χωρισμένης από την κοινωνία. Είναι η ιδέα ότι οι θεσμοί ήρθαν απαλού.ει και σε πρωτόγωνες κοινωνίες, στις οποίες δεν βλέπουμε αυτά τα συνόμενα. Η ετερονομία της κοινωνίας είναι το γεγονός ακριβώς ότι η κοινωνία αλοτριώνεται στους θεσμούς της οποίες η ίδια η δημιούργησε, διότι δεν ξέρει ότι η ίδια τους η δημιούργησε, Αν δεν υπήρχε Θεός, όλα θα ήσουν αυτοί που θα έρθουν.ημειωταίων δεν ανήκει στον Ντοστογεύσκη, αλλά μπορεί να το πάει κανείς πίσω, ως τουλάχιστον με έκακε τον Πλάτονα. 
Και το οποίον εγώ θεωρώ επιχείρημα υπαστηνό μου βήτα, δηλαδή ότι χρειάζεται ένας Θεός, διότι αλλιώς όλα αυτά τα ρεμάλια θα κάνουν, τους κατεύαιναν, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, εγώ, η αρχαία αθηνή, ναι. Τους κάνουμε τους νόμους μας και όσον δεν τους έχουμε αλλάξει, τους ευώμαστε. Αυτό είναι το πράγμα που πρέπει να δίνει. Από αυτή την άψη, αυτό το οποίο ενεργώ εγώ ως αυτώνομη κοινωνία, είναι μια κοινωνία, όχι οποία είναι διαφανής, αλλά είναι μια κοινωνία η οποία ξέρει ότι δεν υπάρχει υπερβατικότητα, ότι δεν υπάρχει υπερβατική πηγή των θεσμών και των νόμων, ότι δεν υπάρχει μεταθάνατον ζωή αυτό που ξέραν οι αρχαίοι Έλληνες, οι οποίοι δεν επίστευαν σε μεταθάνατο ζωή,υτό μας, στους εαυτούς μας, σαν κοινωνικό σύνολο, κανόνες και νόμιες, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να κάνουμε, να δούμε ότι όσο πρέπει να δούμε, να δούμε ότι όσο πρέπει να δούμε, να δούμε ότι όσο πρέπει να δούμε, να δούμε ότι όσο πρέπει να δούμε, να δούμες έχουμε να το κάνουμε και έχουμε να δώσουμε στον εαυτό μας, στους εαυτούς μας σαν κοινωνικό σύνολο, κανόνες και νόμους που να μας επιτρέπουν να υπάρχουμε σαν αυτώνομη κοινωνία και σαν αυτώνομα άτομα μέσα σε αυτή την κοινωνία.'}]\r\n\r\n\r\n\r\n\r\n\r\nPlease check it with the following code in a notepad when you can\r\n\r\n\r\n\r\n!pip install git+https://github.com/huggingface/transformers\r\n!pip install pytube\r\nfrom pytube import YouTube \r\n\r\nmymodel = \"openai/whisper-medium\"\r\n#mymodel = \"openai/whisper-large\"\r\n#mymodel = \"emilios/whisper-medium-el\"\r\n#lang=\"English\"\r\nlang=\"Greek\"\r\n\r\nfrom transformers import WhisperForConditionalGeneration\r\nmodel = WhisperForConditionalGeneration.from_pretrained( mymodel)\r\nfrom transformers import WhisperTokenizer\r\ntokenizer = WhisperTokenizer.from_pretrained( mymodel, language=lang, task=\"transcribe\")\r\nfrom transformers import WhisperProcessor\r\nprocessor = WhisperProcessor.from_pretrained( mymodel, language=lang, task=\"transcribe\")\r\nfrom transformers import WhisperFeatureExtractor\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained( mymodel, language=lang, task=\"transcribe\")\r\n\r\nlink = 'https://www.youtube.com/watch?v=e_eCryyPRus' \r\n\r\ntry: \r\n yt = YouTube(link) \r\nexcept: \r\n print(\"Connection 
Error\")\r\nyt.streams.filter(file_extension='mp4')\r\nstream = yt.streams.get_by_itag(139)\r\nstream.download('',\"YouTube.mp4\")\r\n\r\nmodel.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language = \"el\", task = \"transcribe\")\r\nmodel.config.suppress_tokens = []\r\n#model.config.max_new_tokens = 1024\r\n\r\nfrom transformers import pipeline\r\n\r\ntranscript = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model = model,\r\n feature_extractor = feature_extractor,\r\n tokenizer=tokenizer,\r\n framework=\"pt\",\r\n batch_size=16,\r\n device='cuda:0',\r\n #generate_kwargs={\"max_new_tokens\": 1024},\r\n #max_new_tokens = 1024,\r\n chunk_length_s=30, # 12\r\n stride_length_s=(5, 5), # must have with chunk_length_s\r\n condition_on_previous_text=0,\r\n compression_ratio_threshold=2.4\r\n)\r\n\r\nout = transcript([\"YouTube.mp4\"])\r\nprint(out)\r\n", "Yes, this is what I'm saying. The model is repeating itself, there's not much we can do about it.\r\n\r\nIf you could fine tune it even more, or on more data, or more diverse data, that could probably help.\r\n\r\nFor faster solutions, you could try and reduce amount of repetition, with `repetition_penalty` (there's actually several options for it) https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate\r\n\r\nThat should help you get started. But please bear in mind it's only a temporary solution, the real solution is fixing the model itself I'm afraid. (But all models end up doing repetition when out of domain)." ]
1,667
1,669
1,668
CONTRIBUTOR
null
# What does this PR do? This adds `chunk_length_s` to `seq2seq` algorithms. ## Approach Since we have no way of finding a matching between output and input with `seq2seq`, this is an alternative route. This runs the pipeline on the various chunks and collects all generated output. Then it tries to find the longest sequence of non-special ids that could correspond to the subsequences within the batch. ## Pros - It should work on *any* seq2seq model - It should work decently when the stride is long enough to have good overlapping of tokens so that the stitching can work correctly - It should be slightly robust to a few token errors - It should perform best on mostly continuous talk (so that there is model output that can overlap) ## Cons - This method is **unsound** and will fail under some circumstances - It will fail when there is silence in the overlap. If there is silence then there are no overlapping tokens, and the stitching might get lost during the stitching process. By default it will concatenate, but it might be put off by boundaries in the stride. - It will fail spectacularly when something repeats a single word over and over. Then, we will have overlap that might be TOO large. This is impossible to distinguish without getting access to the timestamps (which only `whisper` can currently do, and that does come with caveats). The current algorithm will favor long chains of matching tokens. - It will have issues with capitalization and out-of-domain areas. For instance "Yes, sir.", "Sir Thomas" might be 2 chunks which have different capitalization. Since the current algorithm works at the token level, the 2 tokens `"sir"` and `"Sir"` are different and will fail to match, leading to a `"Yes, sir. Sir Thomas"` stitching instead of the intended "Yes, Sir Thomas.". ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
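To illustrate the stitching step described above in isolation, here is a naive quadratic sketch of the exact-overlap case (the library's actual `find_common_sequence`-style matching also tolerates small token mismatches, which this toy version does not):

```python
def stitch_chunks(chunks):
    """Greedily merge token chunks on the longest exact overlap between the
    tail of the running sequence and the head of the next chunk."""
    tokens = list(chunks[0])
    for nxt in chunks[1:]:
        best = 0
        for overlap in range(1, min(len(tokens), len(nxt)) + 1):
            if tokens[-overlap:] == list(nxt[:overlap]):
                best = overlap
        tokens.extend(nxt[best:])
    return tokens

# Overlapping strides share tokens, so the join point can usually be recovered:
print(stitch_chunks([[5, 9, 4, 7], [4, 7, 8, 2], [8, 2, 6]]))  # [5, 9, 4, 7, 8, 2, 6]
```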
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20104/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20104", "html_url": "https://github.com/huggingface/transformers/pull/20104", "diff_url": "https://github.com/huggingface/transformers/pull/20104.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20104.patch", "merged_at": 1668461571000 }
https://api.github.com/repos/huggingface/transformers/issues/20103
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20103/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20103/comments
https://api.github.com/repos/huggingface/transformers/issues/20103/events
https://github.com/huggingface/transformers/pull/20103
1,438,455,044
PR_kwDOCUB6oc5CVM65
20,103
Fix generate_dummy_inputs for ImageGPTOnnxConfig
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,667
1,667
COLLABORATOR
null
# What does this PR do? `ImageGPT` ONNX tests fail with ```bash TypeError: __call__() takes 2 positional arguments but 3 were given ``` This is because calling `preprocessor(input_image, framework)` positionally no longer works with the changes introduced in the image processing PR #19796. This PR updates the call to use keyword arguments.
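For illustration only, a minimal sketch of the two calling styles (the checkpoint name and argument values are placeholders; `images` and `return_tensors` are the usual image-processor keywords, assumed here to match the merged patch):

```python
import numpy as np
from PIL import Image
from transformers import ImageGPTImageProcessor

preprocessor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
input_image = Image.fromarray(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8))

# Old positional style, which now raises TypeError:
# dummy_inputs = preprocessor(input_image, "pt")

# Keyword-argument style:
dummy_inputs = preprocessor(images=input_image, return_tensors="pt")
```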
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20103/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20103", "html_url": "https://github.com/huggingface/transformers/pull/20103", "diff_url": "https://github.com/huggingface/transformers/pull/20103.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20103.patch", "merged_at": 1667835087000 }
https://api.github.com/repos/huggingface/transformers/issues/20102
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20102/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20102/comments
https://api.github.com/repos/huggingface/transformers/issues/20102/events
https://github.com/huggingface/transformers/issues/20102
1,438,437,305
I_kwDOCUB6oc5VvM-5
20,102
Can't load FSMT model after resizing token embedding
{ "login": "alex96k", "id": 117656804, "node_id": "U_kgDOBwNM5A", "avatar_url": "https://avatars.githubusercontent.com/u/117656804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex96k", "html_url": "https://github.com/alex96k", "followers_url": "https://api.github.com/users/alex96k/followers", "following_url": "https://api.github.com/users/alex96k/following{/other_user}", "gists_url": "https://api.github.com/users/alex96k/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex96k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex96k/subscriptions", "organizations_url": "https://api.github.com/users/alex96k/orgs", "repos_url": "https://api.github.com/users/alex96k/repos", "events_url": "https://api.github.com/users/alex96k/events{/privacy}", "received_events_url": "https://api.github.com/users/alex96k/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the clear reproducer. Looking at the code, it looks like FSMT in general does not properly support the `resize_token_embeddings` API: it's not using the same config names for the vocab size (easily fixable) but also the method resizes both the encoder and decoder embeddings and in this case, it should only resize the encoder embedding probably.\r\n\r\nIn any case, I don't know the model as well as @stas00 so let's wait for him to chime in and advise on the best fix!", "@alex96k, would you by chance would like to tackle that?\r\n\r\nThe main difficulty with FSMT is that it has 2 unique dictionaries for many models, so some generic functionality is either not possible out of the box or requires some very careful thinking in order not to break other things. I think it's the only model of this kind in HF models.\r\n\r\nThere is an outstanding PR that was trying to bring FSMT in sync with the rest of the models:\r\nhttps://github.com/huggingface/transformers/pull/11218\r\nbut it proved to cause a speed regression so it was never merged, but perhaps it had this resolved already?\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
### System Info **Environment info:** - transformers: 4.19.2 - Platform: Linux elementary OS 6.1 Jólnir - Python version: 3.8.10 - PyTorch version: 1.12.1+cu113 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @stas00 ### Who can help? @stas00 ### Expected behavior / Issue I am having issues reloading a saved FSMT model when the token embedding has been resized. This error doesn't appear with other models such as T5 or MT5. A similar error occurred previously for other models as well but has been fixed (-> #9055 or #8706). However, it doesn't seem to be fixed for the FSMT model. Currently I receive the following error: ``` RuntimeError: Error(s) in loading state_dict for FSMTForConditionalGeneration: size mismatch for model.encoder.embed_tokens.weight: copying a param with shape torch.Size([42026, 1024]) from checkpoint, the shape in current model is torch.Size([42024, 1024]). size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([42026, 1024]) from checkpoint, the shape in current model is torch.Size([42024, 1024]). ``` Any idea how to solve this? Thanks a lot and all the best! ### Reproduction ``` from transformers import FSMTForConditionalGeneration, FSMTTokenizer SAVING_PATH = "/tmp/test_model_fsmt" model_class = FSMTForConditionalGeneration model_path = "facebook/wmt19-de-en" model = model_class.from_pretrained(model_path) tokenizer = FSMTTokenizer.from_pretrained(model_path) tokenizer.add_tokens(['test1', 'test2']) model.resize_token_embeddings(len(tokenizer)) model.save_pretrained(SAVING_PATH) tokenizer.save_pretrained(SAVING_PATH) new_model = model_class.from_pretrained(SAVING_PATH) ```
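As noted in the comments above, FSMT keeps separate `src_vocab_size` and `tgt_vocab_size` entries in its config instead of a single `vocab_size`, so `resize_token_embeddings` can leave the saved config out of sync with the resized weights. Purely as an untested sketch of a possible stopgap (it assumes both embeddings really were resized to `len(tokenizer)`, which the error message suggests, and that the model uses a shared dictionary):

```python
# Hypothetical workaround: sync FSMT's split vocab sizes before saving,
# so from_pretrained rebuilds embeddings with the checkpoint's shapes.
model.resize_token_embeddings(len(tokenizer))
model.config.src_vocab_size = len(tokenizer)
model.config.tgt_vocab_size = len(tokenizer)
model.save_pretrained(SAVING_PATH)
```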
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20102/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20101
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20101/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20101/comments
https://api.github.com/repos/huggingface/transformers/issues/20101/events
https://github.com/huggingface/transformers/issues/20101
1,438,414,861
I_kwDOCUB6oc5VvHgN
20,101
Replace scatter operations in TAPAS by native PyTorch
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
null
[]
[ "@NielsRogge I would love to pick up this issue from where you left if that is okay for you", "Awesome, feel free to take over my branch and see whether you can make all tests pass", "@Bearnardd Thank you!\r\n\r\nYou can start from this (updated) branch/PR (which is updated with the `main` branch with a tiny fix).\r\nIf you want to start from @NielsRogge branch, you have to rebase on (updated) `main` first - there will be a few conflicts to resolve.\r\n\r\nhttps://github.com/huggingface/transformers/pull/20107\r\nhttps://github.com/huggingface/transformers/tree/fix_tapas_scatter\r\n", "@ydshieh Thanks for the guidance! I will start working on this problem after the work :)", "Hi @NielsRogge - I have done a bit of debugging today and I have found the following roots of failing tests:\r\n\r\n1. `tests/models/tapas/test_modeling_tapas.py::TapasUtilitiesTest::test_reduce_sum_vectorized` fails because in pytorch version of `scatter_reduce` the `src` and `index` tensors are required to have the same number of dimensions which is not the case in the above test.\r\n2. `tests/models/tapas/test_modeling_tapas.py::TapasModelTest::test_training` fails in backward pass with the following error: `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` which is happening because inside `segment_reduce` function we are making a view of the `segment_means` variable (which does not copy it) which results in modification in place. Simply cloning segment_means before viewing it seems to be working but I do not know what is your policy about cloning tensors in regard to memory management", "Hi @Bearnardd - that's awesome! Thanks a lot for looking into it.\r\n\r\nRegarding point 1 - you can update that test accordingly to account for the PyTorch version of `scatter_reduce`.\r\n\r\nRegarding point 2 - you can clone the tensor and open a PR, we can discuss it there ;)", "Hi @NielsRogge - I guess this issue should be closed?", "Yes, thanks for reminding us :-) @Bearnardd \r\n\r\nClosed by #20149 " ]
1,667
1,668
1,668
CONTRIBUTOR
null
### Feature request TAPAS (my first 🤗 contribution 😄 ) still relies on the [torch_scatter](https://github.com/rusty1s/pytorch_scatter) library, as the model uses some scatter operations on tensors. Back then PyTorch didn't have these operations available. Now it does: https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html#torch.Tensor.scatter_. So we should replace [this line](https://github.com/huggingface/transformers/blob/b8112eddecfd524038e3c10970c06a444a32aa9d/src/transformers/models/tapas/modeling_tapas.py#L1800) with native PyTorch. To confirm everything is working fine, one should run the following tests and make sure they pass (to be run from the root of this repository): ``` RUN_SLOW=yes pytest tests/models/tapas/test_modeling_tapas.py ``` Subsequently, all `is_scatter_available` mentions can be removed from the code base. ### Motivation By replacing this, our TAPAS implementation doesn't rely on a third-party library anymore. ### Your contribution I can look into this, but marking it as a good first/second issue. Update: I have an attempt here: https://github.com/huggingface/transformers/compare/main...NielsRogge:transformers:fix_tapas_scatter?expand=1. However, training tests didn't pass due to an issue in the backward pass.
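To make the replacement concrete, here is a minimal sketch (the shapes, `include_self=False`, and the zero-initialized output are my assumptions, not the actual TAPAS patch) of expressing a torch_scatter-style segment sum with native `Tensor.scatter_reduce_`, which requires `index` to have the same number of dimensions as `src`:

```python
import torch

# Reduce the rows of `src` into `num_segments` buckets given per-row segment ids.
src = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
segment_ids = torch.tensor([0, 0, 1])
num_segments = 2

# scatter_reduce_ needs `index` with the same ndim as `src`,
# so broadcast the segment ids across the trailing dimension.
index = segment_ids.unsqueeze(-1).expand_as(src)
out = torch.zeros(num_segments, src.shape[-1]).scatter_reduce_(
    0, index, src, reduce="sum", include_self=False
)
print(out)  # tensor([[4., 6.], [5., 6.]])
```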
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20101/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20101/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20100
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20100/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20100/comments
https://api.github.com/repos/huggingface/transformers/issues/20100/events
https://github.com/huggingface/transformers/pull/20100
1,438,331,226
PR_kwDOCUB6oc5CUydx
20,100
Fix MaskformerFeatureExtractor
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20100). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20100). All of your documentation changes will be reflected on that endpoint.", "@sgugger could you approve this PR, so that I can merge this critical fix to MaskFormerFeatureExtractor?\r\n\r\nI'll open a separate PR for improving the docs around image segmentation." ]
1,667
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR fixes MaskFormer's feature extractor. PR #18997 introduced a bug which made the feature extractor create the same binary mask for all segments/instances in an image, hence making it impossible to fine-tune the model. This PR fixes it and makes sure the model can be properly fine-tuned on instance, semantic and panoptic segmentation datasets. To do: - [x] add tests
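For intuition only, a toy sketch (not the feature extractor's actual code; the array values are made up) of what "one distinct binary mask per segment" means here - the regression made all of these masks identical:

```python
import numpy as np

# Hypothetical 2x3 segmentation map containing two instances (ids 1 and 2).
segmentation_map = np.array([[1, 1, 2], [0, 2, 2]])
instance_ids = [1, 2]

# Each instance id should produce its own binary mask.
binary_masks = [(segmentation_map == idx).astype(np.uint8) for idx in instance_ids]
# binary_masks[0] -> [[1, 1, 0], [0, 0, 0]]
# binary_masks[1] -> [[0, 0, 1], [0, 1, 1]]
```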
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20100/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20100", "html_url": "https://github.com/huggingface/transformers/pull/20100", "diff_url": "https://github.com/huggingface/transformers/pull/20100.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20100.patch", "merged_at": 1668524437000 }
https://api.github.com/repos/huggingface/transformers/issues/20099
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20099/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20099/comments
https://api.github.com/repos/huggingface/transformers/issues/20099/events
https://github.com/huggingface/transformers/pull/20099
1,438,305,244
PR_kwDOCUB6oc5CUs64
20,099
Add RetroPrompt in research_projects as example
{ "login": "OE-Heart", "id": 60204373, "node_id": "MDQ6VXNlcjYwMjA0Mzcz", "avatar_url": "https://avatars.githubusercontent.com/u/60204373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OE-Heart", "html_url": "https://github.com/OE-Heart", "followers_url": "https://api.github.com/users/OE-Heart/followers", "following_url": "https://api.github.com/users/OE-Heart/following{/other_user}", "gists_url": "https://api.github.com/users/OE-Heart/gists{/gist_id}", "starred_url": "https://api.github.com/users/OE-Heart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OE-Heart/subscriptions", "organizations_url": "https://api.github.com/users/OE-Heart/orgs", "repos_url": "https://api.github.com/users/OE-Heart/repos", "events_url": "https://api.github.com/users/OE-Heart/events{/privacy}", "received_events_url": "https://api.github.com/users/OE-Heart/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20099). All of your documentation changes will be reflected on that endpoint.", "Hi there! Thanks for using Transformers in your research!\r\nIt looks like your proposed example contains too many modifications of the library with 47 new files, so it's probably best to leave it in your repo? We're happy to link to it from our community page.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,671
1,671
NONE
null
# What does this PR do? - This PR adds example code for the NeurIPS 2022 paper "[Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning](https://arxiv.org/pdf/2205.14704.pdf)" using Hugging Face's `transformers` library. - The folder `GLUE_task` includes three single-sentence tasks (SST-2, MR, CR), three sentence-pair classification tasks (MNLI, QNLI, QQP) and one information extraction task (Few-NERD), and the folder `RE_task` includes two information extraction tasks (SemEval, TACRED). - Our original implementation can be viewed at [https://github.com/zjunlp/PromptKG/tree/main/research/RetroPrompt](https://github.com/zjunlp/PromptKG/tree/main/research/RetroPrompt). ## Overview RetroPrompt constructs an open-book knowledge store from training instances and implements a retrieval mechanism during input, training and inference, thus equipping the model with the ability to retrieve related contexts from the training corpus as cues for enhancement. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20099/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20099", "html_url": "https://github.com/huggingface/transformers/pull/20099", "diff_url": "https://github.com/huggingface/transformers/pull/20099.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20099.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20098
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20098/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20098/comments
https://api.github.com/repos/huggingface/transformers/issues/20098/events
https://github.com/huggingface/transformers/pull/20098
1,438,297,883
PR_kwDOCUB6oc5CUrVM
20,098
Skip 2 tests in `VisionTextDualEncoderProcessorTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,667
1,667
COLLABORATOR
null
# What does this PR do? These 2 tests in `VisionTextDualEncoderProcessorTest` will be fixed when we add the new `AutoImageProcessor`. The current error is ```bash AttributeError: 'NoneType' object has no attribute 'from_dict' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20098/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20098", "html_url": "https://github.com/huggingface/transformers/pull/20098", "diff_url": "https://github.com/huggingface/transformers/pull/20098.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20098.patch", "merged_at": 1667829065000 }
https://api.github.com/repos/huggingface/transformers/issues/20097
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20097/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20097/comments
https://api.github.com/repos/huggingface/transformers/issues/20097/events
https://github.com/huggingface/transformers/pull/20097
1,438,250,652
PR_kwDOCUB6oc5CUg7X
20,097
README in Hindi 🇮🇳
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hello @sgugger, we are good to go?\r\n ", "Done." ]
1,667
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? Fixes PR #19903
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20097/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20097", "html_url": "https://github.com/huggingface/transformers/pull/20097", "diff_url": "https://github.com/huggingface/transformers/pull/20097.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20097.patch", "merged_at": 1670268881000 }
https://api.github.com/repos/huggingface/transformers/issues/20096
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20096/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20096/comments
https://api.github.com/repos/huggingface/transformers/issues/20096/events
https://github.com/huggingface/transformers/pull/20096
1,438,190,270
PR_kwDOCUB6oc5CUT4b
20,096
Generate: move generation_*.py src files into generation/*.py
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger now with `~/generation/__init__.py` populated with lazy references (as in most `__init__.py` files). \r\n\r\nI've updated all references to objects outside `/generation` from `generation.file_name.ObjectName` to `generation.ObjectName`, including in the docs. That does make tracking external references easier -- if an object is in `~/generation/__init__.py` it means that it is likely used somewhere else, and should be treated with extra care.", "Could you please update `optimum` to reflect these changes?\r\n```\r\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\r\nsite-packages/transformers/generation_utils.py:27: FutureWarning: Importing `GenerationMixin` from `src/transformers/generation_utils.py` is deprecated and will be removed in Transformers v5. Import as `from transformers import GenerationMixin` instead.\r\n FutureWarning,\r\n```", "> Could you please update `optimum` to reflect these changes?\r\n> \r\n> ```\r\n> from optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n> site-packages/transformers/generation_utils.py:27: FutureWarning: Importing `GenerationMixin` from `src/transformers/generation_utils.py` is deprecated and will be removed in Transformers v5. Import as `from transformers import GenerationMixin` instead.\r\n> FutureWarning,\r\n> ```\r\n\r\nThe PR https://github.com/huggingface/optimum/pull/536 has just been merged and solves this issue." ]
1,667
1,669
1,668
MEMBER
null
# What does this PR do? Moves `generation_*.py` source files into `generation/*.py`. I tried a few slow tests locally, no problems were raised. ⚠️ the link to the docs seems broken, can't validate their correctness 🤔
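For downstream code, the migration is just the import path (taken from the deprecation warning quoted in the comments above):

```python
# Deprecated path, scheduled for removal in Transformers v5:
# from transformers.generation_utils import GenerationMixin

# Preferred import after this move:
from transformers import GenerationMixin
```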
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20096/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20096", "html_url": "https://github.com/huggingface/transformers/pull/20096", "diff_url": "https://github.com/huggingface/transformers/pull/20096.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20096.patch", "merged_at": 1668008049000 }
https://api.github.com/repos/huggingface/transformers/issues/20095
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20095/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20095/comments
https://api.github.com/repos/huggingface/transformers/issues/20095/events
https://github.com/huggingface/transformers/issues/20095
1,438,180,264
I_kwDOCUB6oc5VuOOo
20,095
Transformers documentation translation to Chinese (Simplified)
{ "login": "bfss", "id": 31245245, "node_id": "MDQ6VXNlcjMxMjQ1MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/31245245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bfss", "html_url": "https://github.com/bfss", "followers_url": "https://api.github.com/users/bfss/followers", "following_url": "https://api.github.com/users/bfss/following{/other_user}", "gists_url": "https://api.github.com/users/bfss/gists{/gist_id}", "starred_url": "https://api.github.com/users/bfss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bfss/subscriptions", "organizations_url": "https://api.github.com/users/bfss/orgs", "repos_url": "https://api.github.com/users/bfss/repos", "events_url": "https://api.github.com/users/bfss/events{/privacy}", "received_events_url": "https://api.github.com/users/bfss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,674
1,671
CONTRIBUTOR
null
Hi! Let's bring the documentation to all the Chinese-speaking community :) Who would want to translate? Please follow our [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list. Some notes: - Add your translations to the folder [source/zh/](https://github.com/huggingface/transformers/blob/main/docs/source/zh/) - Register your translation in [zh/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/zh/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. - 🙋 If you'd like others to help you with the translation, you can also post in our [forums](https://discuss.huggingface.co/) or tag [@espejelomar](https://twitter.com/espejelomar) on Twitter to gain some visibility. ## Get Started section - - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx). @bfss - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.mdx). @bfss
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20095/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20094
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20094/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20094/comments
https://api.github.com/repos/huggingface/transformers/issues/20094/events
https://github.com/huggingface/transformers/pull/20094
1,438,119,345
PR_kwDOCUB6oc5CUEjr
20,094
[Don't merge] Check CircleCI against PyTorch 1.13
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20094). All of your documentation changes will be reflected on that endpoint." ]
1,667
1,668
1,668
COLLABORATOR
null
# What does this PR do? **[Don't merge]** Check CircleCI against PyTorch 1.13
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20094/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20094", "html_url": "https://github.com/huggingface/transformers/pull/20094", "diff_url": "https://github.com/huggingface/transformers/pull/20094.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20094.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20093
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20093/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20093/comments
https://api.github.com/repos/huggingface/transformers/issues/20093/events
https://github.com/huggingface/transformers/pull/20093
1,438,041,243
PR_kwDOCUB6oc5CTz6i
20,093
docs: Replace unsupported `facebookresearch/bitsandbytes`
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,667
1,667
MEMBER
null
# What does this PR do? * Replace unsupported https://github.com/facebookresearch/bitsandbytes with https://github.com/TimDettmers/bitsandbytes, which is by the same author and still being maintained and updated. For reference, the latter repository is the one mentioned in your blog post https://huggingface.co/blog/hf-bitsandbytes-integration, which was co-written by Tim Dettmers. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger @TimDettmers --- - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20093/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20093/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20093", "html_url": "https://github.com/huggingface/transformers/pull/20093", "diff_url": "https://github.com/huggingface/transformers/pull/20093.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20093.patch", "merged_at": 1667829123000 }
https://api.github.com/repos/huggingface/transformers/issues/20092
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20092/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20092/comments
https://api.github.com/repos/huggingface/transformers/issues/20092/events
https://github.com/huggingface/transformers/pull/20092
1,437,978,496
PR_kwDOCUB6oc5CTm4k
20,092
[wip doc builder test]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20092). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,700
1,671
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20092/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20092", "html_url": "https://github.com/huggingface/transformers/pull/20092", "diff_url": "https://github.com/huggingface/transformers/pull/20092.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20092.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20091
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20091/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20091/comments
https://api.github.com/repos/huggingface/transformers/issues/20091/events
https://github.com/huggingface/transformers/issues/20091
1,437,969,777
I_kwDOCUB6oc5Vta1x
20,091
OWL-ViT training / fine-tuning code
{ "login": "ekazakos", "id": 20310086, "node_id": "MDQ6VXNlcjIwMzEwMDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20310086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekazakos", "html_url": "https://github.com/ekazakos", "followers_url": "https://api.github.com/users/ekazakos/followers", "following_url": "https://api.github.com/users/ekazakos/following{/other_user}", "gists_url": "https://api.github.com/users/ekazakos/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekazakos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekazakos/subscriptions", "organizations_url": "https://api.github.com/users/ekazakos/orgs", "repos_url": "https://api.github.com/users/ekazakos/repos", "events_url": "https://api.github.com/users/ekazakos/events{/privacy}", "received_events_url": "https://api.github.com/users/ekazakos/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 5769473378, "node_id": "LA_kwDOCUB6oc8AAAABV-MtYg", "url": "https://api.github.com/repos/huggingface/transformers/labels/Vision", "name": "Vision", "color": "C079EF", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @alaradirik ", "Hi @ekazakos, thanks for the suggestion! And yes, we are planning to integrate it to transformers shortly.\r\n\r\n@sgugger @NielsRogge in the paper, authors first train a base CLIP model using an image size of 224 × 224 and then resize the image position embeddings with linear interpolation to 768 x 768 before fine-tuning the whole model on the object detection task. Is it a good idea to have separate sections in the configuration file for the inference and training modes?", "Thank you so much!! ", "> in the paper, authors first train a base CLIP model using an image size of 224 × 224 and then resize the image position embeddings with linear interpolation to 768 x 768 before fine-tuning the whole model on the object detection task. Is it a good idea to have separate sections in the configuration file for the inference and training modes?\r\n\r\nNo I wouldn't do that, I'd just refer to [this script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) if people are interested in training a CLIP model themselves. Afterwards, they can load the weights into the `OwlViTForObjectDetection` model using a conversion script. This conversion script should include the interpolation of the position embeddings.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "May I ask for about how long will you release the training and/or finetuning code of OWL-ViT? I personally think this feature will boost the usage of the model very much.", "@alaradirik yeah, we need the fine-tuning code :)", "I also would like the fine-tuning code example.", "If anyone is interested I have a repo here: https://github.com/stevebottos/owl-vit-object-detection which is based on the huggingface implementation. It's still a WIP though and more of an experiment, but it works. If @alaradirik is interested I can help get this in.", "cc @rafaelpadilla ", "Hey there,\r\n\r\nI've been following this issue, and it seems like there haven't been any updates for a while. Just wondering, what's the current status on this? If there's been any progress, could you share the latest code link? It'd really help me out.\r\n\r\nThanks!", "@Itto1992 I haven't worked on my code for a while, I've been busy with life stuff, but feel free to poke around with what's there and/or follow the repo - I may pick it up again one day." ]
1,667
1,703
null
NONE
null
### Feature request Hi, I've noticed that recently Google Research added the training and fine-tuning code for OWL-ViT in Scenic. Are you planning to integrate it in HuggingFace Transformers? Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20091/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20091/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/20090
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20090/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20090/comments
https://api.github.com/repos/huggingface/transformers/issues/20090/events
https://github.com/huggingface/transformers/pull/20090
1,437,833,348
PR_kwDOCUB6oc5CTI2Q
20,090
Fix overflow images in layoutxlm and layoutlmv2
{ "login": "rogerdehe", "id": 6434311, "node_id": "MDQ6VXNlcjY0MzQzMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6434311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rogerdehe", "html_url": "https://github.com/rogerdehe", "followers_url": "https://api.github.com/users/rogerdehe/followers", "following_url": "https://api.github.com/users/rogerdehe/following{/other_user}", "gists_url": "https://api.github.com/users/rogerdehe/gists{/gist_id}", "starred_url": "https://api.github.com/users/rogerdehe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rogerdehe/subscriptions", "organizations_url": "https://api.github.com/users/rogerdehe/orgs", "repos_url": "https://api.github.com/users/rogerdehe/repos", "events_url": "https://api.github.com/users/rogerdehe/events{/privacy}", "received_events_url": "https://api.github.com/users/rogerdehe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20090). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Thanks for adding! Can you update LayoutLMv3's processor as well?\r\n\r\nof course", "> Thanks for adding! Can you update LayoutLMv3's processor as well?\r\n\r\n@NielsRogge I have fix LayoutLMv3's processor", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,667
1,675
1,675
NONE
null
# What does this PR do? `LayoutXLMProcessor` and `LayoutLMv2Processor` accept `return_tensors` as a parameter, so they should return a type that depends on `return_tensors` instead of always returning a list; at the moment, however, `get_overflowing_images` always returns a `list`. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
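As a rough illustration of the behavior the PR description asks for, the sketch below converts the overflow-image list into the type `return_tensors` requests. `gather_overflowing_images` is a hypothetical stand-in, not the actual processor method, and it assumes all images share one shape:

```python
import numpy as np
import torch

def gather_overflowing_images(images, overflow_to_sample_mapping, return_tensors=None):
    # One image per overflowed encoding, indexed by the sample it came from
    overflow_images = [images[idx] for idx in overflow_to_sample_mapping]
    if return_tensors == "pt":
        return torch.stack([torch.as_tensor(img) for img in overflow_images])
    if return_tensors == "np":
        return np.stack(overflow_images)
    return overflow_images  # plain Python list, the current behavior in all cases

batch = gather_overflowing_images([np.zeros((3, 224, 224))], [0, 0], return_tensors="pt")
print(batch.shape)  # torch.Size([2, 3, 224, 224])
```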
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20090/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20090", "html_url": "https://github.com/huggingface/transformers/pull/20090", "diff_url": "https://github.com/huggingface/transformers/pull/20090.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20090.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20089
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20089/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20089/comments
https://api.github.com/repos/huggingface/transformers/issues/20089/events
https://github.com/huggingface/transformers/issues/20089
1,437,763,643
I_kwDOCUB6oc5Vsog7
20,089
Regression: TorchIterableDataset doesn't have __len__
{ "login": "maxkrieger", "id": 2660634, "node_id": "MDQ6VXNlcjI2NjA2MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2660634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxkrieger", "html_url": "https://github.com/maxkrieger", "followers_url": "https://api.github.com/users/maxkrieger/followers", "following_url": "https://api.github.com/users/maxkrieger/following{/other_user}", "gists_url": "https://api.github.com/users/maxkrieger/gists{/gist_id}", "starred_url": "https://api.github.com/users/maxkrieger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxkrieger/subscriptions", "organizations_url": "https://api.github.com/users/maxkrieger/orgs", "repos_url": "https://api.github.com/users/maxkrieger/repos", "events_url": "https://api.github.com/users/maxkrieger/events{/privacy}", "received_events_url": "https://api.github.com/users/maxkrieger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for opening the issue. What exactly is the regression here? On which version of Transformers did it work and when did it stop working?\r\n\r\nAs the error clearly states (copying the full error message would be helpful by the way), you need to use `max_steps` in your training arguments instead of `num_train_epochs` since your dataset doesn't have a length.", "Hey @sgugger, sorry for the missing information. According to [this forum post](https://discuss.huggingface.co/t/using-iterabledataset-with-trainer-iterabledataset-has-no-len/15790/2), the fix for this exact error is to cast the dataset to a torch dataset as described above. However, the error persists. I have not bisected to figure out if it really is a regression, but it seems so from the code online.", "Hi @maxkrieger - in the forum post that you have provided you can see that the `training_args` already contains argument `max_steps=1e6`. In order for your sample to work correctly, you need to both set the `max_steps` argument as well as format your dataset for Pytorch. ", "Additionally if I am not mistaken after specifying `max_steps` argument you can drop `num_train_epochs` since it will be override anyways .", "Aahh 🤦 somehow missed that parameter while reading the snippets @Bearnardd. Apologies for the lack of diligence, this is resolved." ]
1,667
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no (n/a) - Using distributed or parallel set-up in script?: no ### Who can help? * Git blame suggests @sgugger, @anton-l * Would be great if @patil-suraj tries the whole pipeline linked because there are some other issues downstream ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the `Trainer` with a dataset with `streaming=True`, making it iterable. To make the `Trainer` `train_dataset` work with streaming, use `.with_format("torch")` (as suggested in https://github.com/huggingface/datasets/issues/2583#issuecomment-874078780 and [here](https://discuss.huggingface.co/t/using-iterabledataset-with-trainer-iterabledataset-has-no-len/15790/2?u=maxkriegers)). A simple repro is below and in [**this colab**](https://colab.research.google.com/drive/1V5F5ut410hiJgS6c5us5iVz9Fk6JwOF1?usp=sharing) ``` model_name = "gpt2" output_dir = "." from transformers import GPT2Tokenizer, GPT2Model from transformers import Trainer, TrainingArguments from datasets import load_dataset from transformers.data.data_collator import DataCollatorForLanguageModeling dataset = load_dataset("rotten_tomatoes", split="train", streaming=True).shuffle(seed=42) tokenizer = GPT2Tokenizer.from_pretrained(model_name) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) model = GPT2Model.from_pretrained(model_name) training_args = TrainingArguments( output_dir=output_dir, num_train_epochs=5, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset.with_format("torch") ) trainer.train() trainer.save_model() ``` which yields ``` ValueError Traceback (most recent call last) <ipython-input-8-de14de894c00> in <module> 8 args=training_args, 9 data_collator=data_collator, ---> 10 train_dataset=dataset.with_format("torch") 11 ) /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics) 504 505 if train_dataset is not None and not has_length(train_dataset) and args.max_steps <= 0: --> 506 raise ValueError("train_dataset does not implement __len__, max_steps has to be specified") 507 508 if ( ValueError: train_dataset does not implement __len__, max_steps has to be specified ``` ### Expected behavior `TorchIterableDataset` should implement `__len__` but doesn't. It instead has a `.dataset_size` method.
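Per the maintainers' comments above, the repro runs once `max_steps` is supplied; a minimal sketch of the corrected arguments follows (the step count is illustrative, and `num_train_epochs` can be dropped since it would be overridden):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=".",
    max_steps=10_000,  # required when the train dataset has no __len__
    per_device_train_batch_size=8,
)
```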
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20089/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20088
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20088/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20088/comments
https://api.github.com/repos/huggingface/transformers/issues/20088/events
https://github.com/huggingface/transformers/pull/20088
1,437,525,227
PR_kwDOCUB6oc5CSJax
20,088
docs: Resolve many typos in the English docs
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,667
1,667
1,667
MEMBER
null
# What does this PR do? * Fixes typo in `python -m transformers.onnx --help`. * Fixes many typos in the English documentation via `codespell`. - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20088/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20088", "html_url": "https://github.com/huggingface/transformers/pull/20088", "diff_url": "https://github.com/huggingface/transformers/pull/20088.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20088.patch", "merged_at": 1667830744000 }
https://api.github.com/repos/huggingface/transformers/issues/20087
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20087/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20087/comments
https://api.github.com/repos/huggingface/transformers/issues/20087/events
https://github.com/huggingface/transformers/pull/20087
1,437,516,386
PR_kwDOCUB6oc5CSHtx
20,087
docs: Fixed variables in f-strings
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Done & done!", "Thanks again for your contribution!" ]
1,667
1,667
1,667
MEMBER
null
# What does this PR do? Fixes unknown variables in some documentation code blocks' f-strings. - [x] This PR fixes a typo or improves the docs ## In addition... The following code block, which can be found at https://huggingface.co/docs/transformers/custom_models#writing-a-custom-model & [custom_models.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/custom_models.mdx), uses `torch` without `import torch` being performed in any prior docstring. ```python class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` If preferred, I can add `import torch` in this code block in this PR, or make a new one for it. ## Who can review? Documentation: @sgugger - Tom Aarsen
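A self-contained sketch of the import fix under discussion, with a toy stand-in for the docs' model. Note that `torch.nn` exposes no plain `cross_entropy` callable, so the functional variant is assumed here to be the intended call:

```python
import torch

class TinyClassifier:
    """Toy stand-in for the docs' ResnetModelForImageClassification; illustrative only."""

    def __init__(self):
        self.model = torch.nn.Linear(4, 2)

    def forward(self, tensor, labels=None):
        logits = self.model(tensor)
        if labels is not None:
            # torch.nn has no `cross_entropy`; torch.nn.functional does
            loss = torch.nn.functional.cross_entropy(logits, labels)
            return {"loss": loss, "logits": logits}
        return {"logits": logits}

out = TinyClassifier().forward(torch.randn(3, 4), labels=torch.tensor([0, 1, 0]))
print(out["loss"])  # works once `import torch` is in place
```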
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20087/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20087", "html_url": "https://github.com/huggingface/transformers/pull/20087", "diff_url": "https://github.com/huggingface/transformers/pull/20087.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20087.patch", "merged_at": 1667845090000 }
https://api.github.com/repos/huggingface/transformers/issues/20086
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20086/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20086/comments
https://api.github.com/repos/huggingface/transformers/issues/20086/events
https://github.com/huggingface/transformers/pull/20086
1,437,376,637
PR_kwDOCUB6oc5CRsqo
20,086
autogenerated files based on vit model
{ "login": "caleb-vicente", "id": 71934065, "node_id": "MDQ6VXNlcjcxOTM0MDY1", "avatar_url": "https://avatars.githubusercontent.com/u/71934065?v=4", "gravatar_id": "", "url": "https://api.github.com/users/caleb-vicente", "html_url": "https://github.com/caleb-vicente", "followers_url": "https://api.github.com/users/caleb-vicente/followers", "following_url": "https://api.github.com/users/caleb-vicente/following{/other_user}", "gists_url": "https://api.github.com/users/caleb-vicente/gists{/gist_id}", "starred_url": "https://api.github.com/users/caleb-vicente/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/caleb-vicente/subscriptions", "organizations_url": "https://api.github.com/users/caleb-vicente/orgs", "repos_url": "https://api.github.com/users/caleb-vicente/repos", "events_url": "https://api.github.com/users/caleb-vicente/events{/privacy}", "received_events_url": "https://api.github.com/users/caleb-vicente/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I opened a pull request in the library when I was supposed to do it directly in the fork. Closing this one and doing it in my fork" ]
1,667
1,667
1,667
NONE
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20086/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20086", "html_url": "https://github.com/huggingface/transformers/pull/20086", "diff_url": "https://github.com/huggingface/transformers/pull/20086.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20086.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20085
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20085/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20085/comments
https://api.github.com/repos/huggingface/transformers/issues/20085/events
https://github.com/huggingface/transformers/issues/20085
1,437,348,372
I_kwDOCUB6oc5VrDIU
20,085
ZeroDivisionError: integer division or modulo by zero
{ "login": "mingmin95", "id": 23714171, "node_id": "MDQ6VXNlcjIzNzE0MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/23714171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mingmin95", "html_url": "https://github.com/mingmin95", "followers_url": "https://api.github.com/users/mingmin95/followers", "following_url": "https://api.github.com/users/mingmin95/following{/other_user}", "gists_url": "https://api.github.com/users/mingmin95/gists{/gist_id}", "starred_url": "https://api.github.com/users/mingmin95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mingmin95/subscriptions", "organizations_url": "https://api.github.com/users/mingmin95/orgs", "repos_url": "https://api.github.com/users/mingmin95/repos", "events_url": "https://api.github.com/users/mingmin95/events{/privacy}", "received_events_url": "https://api.github.com/users/mingmin95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Full error log\r\n```\r\n File \"/data/min.ming/project_user_gender_optimization/main/model_further_pretrain.py\", line 437, in <module>\r\n main()\r\n File \"/data/min.ming/project_user_gender_optimization/main/model_further_pretrain.py\", line 411, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/transformers/trainer.py\", line 1498, in train\r\n return inner_training_loop(\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/transformers/trainer.py\", line 1714, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/dataloader.py\", line 681, in __next__\r\n data = self._next_data()\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/dataloader.py\", line 721, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2165, in __getitem__\r\n return self._getitem(\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2149, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 491, in query_table\r\n pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 57, in _query_table_with_indices_mapping\r\n return _query_table(table, key)\r\n File \"/data/min.ming/.conda/envs/mm_py39/lib/python3.9/site-packages/datasets/formatting/formatting.py\", line 81, in _query_table\r\n return table.fast_slice(key % table.num_rows, 1)\r\nZeroDivisionError: integer division or modulo by zero\r\n```", "@mm1352363 how did you solved the problem? I'm facing the same issue.", "I got the exact same error when I wrapped the model in `model = torch.nn.DataParallel(model)`, passed the wrapped model to `Trainer` and called `trainer.evaluate()` or `trainer.train()`. Turned out the `Trainer` handles multiple GPUs automatically and wrapping the model in `DataParallel` is not necessary. Removing `model = torch.nn.DataParallel(model)` solved the issue." ]
1,667
1,678
1,667
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-42-shopee-generic-x86_64-with-glibc2.23 - Python version: 3.9.7 - Huggingface_hub version: 0.2.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I used the script (https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim.py) to further pretrain a Swin Transformer on my dataset. I only modified the way the data is read, as shown below ``` import os from datasets import Dataset, Image ds = {} if 'train' in data_args.data_files.keys(): train_images = os.listdir(data_args.data_files['train']) train_images_files = [os.path.join(data_args.data_files['train'], image) for image in train_images] ds['train'] = Dataset.from_dict({'image': train_images_files}).cast_column("image", Image()) if 'validation' in data_args.data_files.keys(): val_images = os.listdir(data_args.data_files['validation']) val_images_files = [os.path.join(data_args.data_files['validation'], image) for image in val_images] ds['validation'] = Dataset.from_dict({'image': val_images_files}).cast_column("image", Image()) ``` and the launch command is ``` python /data/min.ming/project_user_gender_optimization/main/model_further_pretrain.py \ --model_name_or_path /data/min.ming/project_user_gender_optimization/res/swin-tiny-patch4-window7-224 \ --train_dir /data/min.ming/project_user_gender_optimization/data/avatar/train/ID_TRAIN_100000 \ --do_train \ --per_device_train_batch_size 8 \ --num_train_epochs 5 \ --output_dir /data/min.ming/project_user_gender_optimization/tmp/swintransformer_test \ --overwrite_output_dir \ --report_to none ``` but I get `ZeroDivisionError: integer division or modulo by zero`. ### Expected behavior Do you know where the problem is?
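A sketch of the workaround reported in the last comment: pass the unwrapped model to `Trainer` and let it manage multi-GPU placement itself. The checkpoint and arguments below are placeholders, not taken from the original script:

```python
import torch
from transformers import AutoModelForMaskedImageModeling, Trainer, TrainingArguments

model = AutoModelForMaskedImageModeling.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
# model = torch.nn.DataParallel(model)  # <- do NOT wrap; Trainer handles multiple GPUs itself
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir=".", per_device_train_batch_size=8, report_to="none"),
)
```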
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20085/timeline
completed
null
null