Dataset schema (field: type, value range or class count):

url: stringlengths, 62 to 66
repository_url: stringclasses, 1 value
labels_url: stringlengths, 76 to 80
comments_url: stringlengths, 71 to 75
events_url: stringlengths, 69 to 73
html_url: stringlengths, 50 to 56
id: int64, 377M to 2.15B
node_id: stringlengths, 18 to 32
number: int64, 1 to 29.2k
title: stringlengths, 1 to 487
user: dict
labels: list
state: stringclasses, 2 values
locked: bool, 2 classes
assignee: dict
assignees: list
comments: list
created_at: int64, 1.54k to 1.71k
updated_at: int64, 1.54k to 1.71k
closed_at: int64, 1.54k to 1.71k
author_association: stringclasses, 4 values
active_lock_reason: stringclasses, 2 values
body: stringlengths, 0 to 234k
reactions: dict
timeline_url: stringlengths, 71 to 75
state_reason: stringclasses, 3 values
draft: bool, 2 classes
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/20284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20284/comments
https://api.github.com/repos/huggingface/transformers/issues/20284/events
https://github.com/huggingface/transformers/issues/20284
1,452,089,737
I_kwDOCUB6oc5WjSGJ
20,284
Transformer cannot tokenize Chinese words
{ "login": "fivehills", "id": 40301946, "node_id": "MDQ6VXNlcjQwMzAxOTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/40301946?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fivehills", "html_url": "https://github.com/fivehills", "followers_url": "https://api.github.com/users/fivehills/followers", "following_url": "https://api.github.com/users/fivehills/following{/other_user}", "gists_url": "https://api.github.com/users/fivehills/gists{/gist_id}", "starred_url": "https://api.github.com/users/fivehills/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fivehills/subscriptions", "organizations_url": "https://api.github.com/users/fivehills/orgs", "repos_url": "https://api.github.com/users/fivehills/repos", "events_url": "https://api.github.com/users/fivehills/events{/privacy}", "received_events_url": "https://api.github.com/users/fivehills/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "There is nothing we can do without a code reproducer and, in particular, knowing which model you're using.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,674
1,674
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20283/comments
https://api.github.com/repos/huggingface/transformers/issues/20283/events
https://github.com/huggingface/transformers/pull/20283
1,452,080,841
PR_kwDOCUB6oc5DDMJ_
20,283
Optimizes DonutProcessor token2json method for speed
{ "login": "michaelnation26", "id": 14008434, "node_id": "MDQ6VXNlcjE0MDA4NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/14008434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelnation26", "html_url": "https://github.com/michaelnation26", "followers_url": "https://api.github.com/users/michaelnation26/followers", "following_url": "https://api.github.com/users/michaelnation26/following{/other_user}", "gists_url": "https://api.github.com/users/michaelnation26/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelnation26/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelnation26/subscriptions", "organizations_url": "https://api.github.com/users/michaelnation26/orgs", "repos_url": "https://api.github.com/users/michaelnation26/repos", "events_url": "https://api.github.com/users/michaelnation26/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelnation26/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20283). All of your documentation changes will be reflected on that endpoint.", "Hi @NielsRogge . Do I need to make any changes to this PR? If not, can I please get an approval so we can merge the PR.", "Hi @sgugger and @NielsRogge , after updating the `DONUT_PRETRAINED_MODEL_NAME` value and committing, the `check_repository_consistency` was failing on CircleCI so I decided to run `git fetch upstream` and `git rebase upstream/main` by following these [instructions](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request). I may have botched the next steps. The git message said my branch diverged. I did a `git pull` and `git config pull.rebase true`, then another `git pull`. Finally, a `git push`. I did not use the `--force` flag. I'm thinking now that I should have since my PR is already open. Now, my PR includes everyone else's changes from the rebase. Do I leave these new changes in my PR or remove them? I only know the basics of Git so I'm not sure what to do next. Also, a couple of tests are now failing that are not related to my changes. Not sure how to resolve those.", "Can you fix the last conflict and force-push this time? It might work and remove the huge diff. Otherwise you'll need to close this PR and open a fresh one." ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Speeds up the `token2json` method in `DonutProcessor` by calling `self.tokenizer.get_added_vocab()` once and reusing the `added_vocab` results in subsequent recursive calls. Fixes # (issue) [Issue #20238](https://github.com/huggingface/transformers/issues/20238) The `self.tokenizer.get_added_vocab()` call is somewhat expensive. It takes 50 - 70ms to complete. In the initial implementation of the `token2json` method, `self.tokenizer.get_added_vocab()` is called for every XML tag. If there are 10 XML tags, it would take 500 - 700ms in total. This PR makes the `token2json` method run time constant at 50 - 70ms, regardless of the number of XML tags. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [Issue #20238](https://github.com/huggingface/transformers/issues/20238) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20283", "html_url": "https://github.com/huggingface/transformers/pull/20283", "diff_url": "https://github.com/huggingface/transformers/pull/20283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20283.patch", "merged_at": 1669131660000 }
https://api.github.com/repos/huggingface/transformers/issues/20282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20282/comments
https://api.github.com/repos/huggingface/transformers/issues/20282/events
https://github.com/huggingface/transformers/pull/20282
1,451,948,676
PR_kwDOCUB6oc5DCwEL
20,282
[bnb] Let's warn users when saving 8-bit models
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.", "Ah ah made a typo in one of my suggestions, sorry. `getattr` has less `t`s ;-)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20282). All of your documentation changes will be reflected on that endpoint.", "Thanks for implementing this. This fix would absolutely have prevented the confusion explained in: [#20247](https://github.com/huggingface/transformers/issues/20247).", "Thank you very much @peregilk ! 🙏 " ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR warns users that try to save 8-bit loaded models closes #20247 closes #19480 cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20282", "html_url": "https://github.com/huggingface/transformers/pull/20282", "diff_url": "https://github.com/huggingface/transformers/pull/20282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20282.patch", "merged_at": 1668669397000 }
https://api.github.com/repos/huggingface/transformers/issues/20281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20281/comments
https://api.github.com/repos/huggingface/transformers/issues/20281/events
https://github.com/huggingface/transformers/pull/20281
1,451,917,081
PR_kwDOCUB6oc5DCpcd
20,281
[bnb] We should be able to run 8-bit models on CPU & GPU
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20281). All of your documentation changes will be reflected on that endpoint.", "Yes, this is true, maybe I can add a warning telling the user about the underlying behaviour? (weights offloaded on CPU will remain in their native precision)", "Or we could just leave the error?", "Closing this PR as it will bring confusion to users, we should probably wait until `bitsandbytes` supports weights offloading in 8-bit to add this feature\r\nThanks!", "Currently we can pass `load_in_8bit_skip_modules` into `model_kwargs`. This will allow to not convert certain layers/modules/weights into 8-bit.\r\n\r\nHowever, there's a problem here:\r\nhttps://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/utils/bitsandbytes.py#L113-L117\r\n\r\nThe `name` is not the full path to the module, e.g. for `transformer.h.0.ln1` it can be `0` or `ln1`, etc, depending on the recursion level. So currently it's impossible to ignore a specific layer or a group of layers. For example, if `transformer.h.0` is on CPU, then I don't want it (and any of its sub-layers) to be converted to 8-bit, but I can't specify this layer in `load_in_8bit_skip_modules`. Furthermore, even specifying `0` won't help, because child sub-layers (e.g. `*.0.ln1`) are processed first.\r\n\r\nNow the thing is - this PR actually almost solves this problem. It modifies `replace_8bit_linear` so that it can handle ignoring modules by the full path, not just by the immediate name.\r\n\r\nSo, what I propose: convert this PR into a different one that will allow specifying a full path in `load_in_8bit_skip_modules`. 
This will allow to manually ignore non-GPU layers when needed and it will not confuse the users.", "So, if anyone is still interested, this is how I implemented the above solution in my project (via monkey-patching):\r\n\r\nhttps://github.com/alkatrazstudio/neodim-server/blob/93e4819d3633841ca4f42246f51e28f355ed6cf5/src/bnb_override.py#L9-L37\r\n\r\n```python\r\n# This is a modified version of replace_8bit_linear in transformers/utils/bitsandbytes.py\r\n# The following changes were made:\r\n# 1. modules_to_not_convert can contain full module paths instead of just immediate names\r\n# 2. the default value for modules_to_not_convert is effectively a list instead of a string\r\n# 3. \"model\" is renamed to \"parent_module\" to not confuse it with the actual model\r\n# 4. removed redundant check for len(modules)\r\ndef replace_8bit_linear(parent_module, threshold=6.0, modules_to_not_convert=None, parent_layer_path=\"\"):\r\n modules_to_not_convert = [\"lm_head\"] if modules_to_not_convert is None else modules_to_not_convert\r\n\r\n parent_layer_prefix = \"\" if parent_layer_path == \"\" else parent_layer_path + \".\"\r\n for name, module in parent_module.named_children():\r\n layer_path = parent_layer_prefix + name\r\n\r\n if layer_path in modules_to_not_convert:\r\n continue\r\n\r\n replace_8bit_linear(module, threshold, modules_to_not_convert, layer_path)\r\n\r\n if isinstance(module, nn.Linear) and name not in modules_to_not_convert:\r\n with bitsandbytes.init_empty_weights():\r\n parent_module._modules[name] = bnb.nn.Linear8bitLt(\r\n module.in_features,\r\n module.out_features,\r\n module.bias is not None,\r\n has_fp16_weights=False,\r\n threshold=threshold,\r\n )\r\n\r\n return parent_module\r\n```\r\n\r\nI implemented it a little bit differently than @younesbelkada did, though, and also applied some other small modifications.\r\n\r\nThen here's the code that gets the layers that need to be 
ignored:\r\n\r\nhttps://github.com/alkatrazstudio/neodim-server/blob/93e4819d3633841ca4f42246f51e28f355ed6cf5/src/dev_map.py#L142-L147\r\n\r\n```python\r\ndef get_modules_to_skip_for_int8(device_map: DeviceMap) -> Optional[list[str]]:\r\n layer_paths = [path for path, device in device_map.items() if device == DEVICE_CPU]\r\n\r\n # adding lm_head based on comment from get_keys_to_not_convert in transformers/utils/bitsandbytes.py\r\n # which says \"for CausalLM modules we may want to keep the lm_head in full precision\"\r\n return layer_paths + [\"lm_head\"]\r\n```\r\n\r\nIn my case I only offload to CPU, not disk.\r\n\r\nThese layers will then be passed as ` load_in_8bit_skip_modules` to the `from_pretrained` method.\r\n\r\nI tested everything and it works well. The only thing I'm not sure about is if the new `replace_8bit_linear` actually backwards-compatible with the old version. It's compatible when `modules_to_not_convert=[\"lm_head\"]`, but I'm not sure about the generic use-case.", "@younesbelkada good job!!! I used your PR + @z80maniac tips and code samples and I managed to load a big model and run 8bit inference using some gpus and big amount of cpu RAM. I think your PR must be merged or at least mainatained in a separate branch into HF transfomers because I don't believe `bitsandbytes` will ever implement CPU offloading in their project. I read such opinions among the issues list in their project....\r\n\r\nEverything works great although the inference was kinda slow which is expected when using both GPUs and CPU RAM?\r\n\r\nI have 2 ideas on how to speed things up a little:\r\n1. It looks like the fp16 weights (offloaded to the CPU RAM) get copied back and forth on every pass into the VRAM of the first? GPU to do the calcultions? If true, then we may as well store those weights into 8bit on the CPU RAM from the start in order to avoid converting from fp16 into int8 and to also decrease the CPU RAM requirements by half?\r\n2. 
Another approach would be to perform the calculations on those fp16 weights right using the available CPU cores and thus avoid copying back and forth all of the weights into GPU VRAM on every pass?\r\nDoes any of the above make any sense?", "I tried @z80maniac 's suggestions and while I didn't run into any runtime errors, weights that were supposed to be in fp32 ended up in fp16 (Flan T5 automatically keeps `wo` layers in fp32, which didn't happen after applying the monkey patches). Is this expected?", "> I tried @z80maniac 's suggestions and while I didn't run into any runtime errors, weights that were supposed to be in fp32 ended up in fp16 (Flan T5 automatically keeps `wo` layers in fp32, which didn't happen after applying the monkey patches). Is this expected?\r\n\r\nHow do you know that they're in fp16 and not in fp32? What dtype did you specify if any? \r\nBTW fp16 should be OK too. there will barely be any performance degradation especially given that most of the weights are stored in int8 anyway", "I manually checked the dtypes with\r\n\r\n```model.encoder.block[1].layer[1].DenseReluDense.wo.weight.dtype, model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype```\r\n\r\nIt stays in fp16 whether I pass in `torch_dtype=torch.float32` or leave the kwarg untouched.\r\n\r\nAlso unfortunately I don't think Flan T5 XXL can handle fp16 or 8-bit precision (see https://github.com/huggingface/transformers/pull/20683). 
`T5ForConditionalGeneration` has a `_keep_in_fp32_modules` attribute that's supposed to help the `wo` layers stay in fp32, but I noticed that the suggested monkey patches might be interfering.\r\n\r\nI'll follow up a bit later because I noticed there was a bug in the code I was testing, though I don't think it should have affected whether or not the weights stayed in fp32 (in fact the bug, if tripped, would have just caused an OOM error instead).", "Ah I wasn't aware of that issue with Flan T5 though I am sure that i have loaded it with torch_dtype=torch.float16 in the past and have not noticed any serios performance degradation though the difference maybe was too subtle to notice....\r\nWhat is your plan when loading it? `wo` layers in fp32 into CPU RAM and any other weights in int8 into VRAM? What does ur device map look like?\r\nBTW i am only using the code changes by @younesbelkada from this pr\r\n@z80maniac are a bit different though still helpful", "Yeah iirc the issue is only with with XXL variant, I think the other variants should run with 16-bit/8-bit quantization just fine.\r\n\r\nMy plan is very close to what you mentioned: I was going to offload `wo` layers onto CPU RAM/disk (though I've only been experimenting with disk so far) and keep the others in int8 on GPU. I'll share my device map soon but I put myself into a bit of a situation rn. After some code changes I'm currently unable to reproduce the whole \"`wo`-layers-are-offloaded-onto-disk-in-fp32 \" thing, so I need to fix that first. 
I'll follow up once I figure that out.\r\n\r\nI should point out though that 1) even when I did get them offloaded in fp32, I got `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, meta and cuda:0!` anyway and 2) there's a chance the problem is on my end and not with any of the monkey patches, so I really should fix a few things first.", "from my experience some of those errors can be turned into just a warning (by monkey-patching the python source code) and everything will still work properly", "BTW the Flan T5 XXL HF page also has examples of using fp16 and int8. It's possible that the Google team hasn't really tested its performance using quantization though...\r\nIf you give me some example prompts, I will try to reproduce the results locally?", "I just tried `device_map=\"auto\", torch_dtype=torch.float16` on multiple gpus:\r\n`translate English to German: How old are you?`\r\n\r\n`<pad> Wie alt sind Sie?</s>`\r\n", "> I just tried `device_map=\"auto\", torch_dtype=torch.float16` on multiple gpus: `translate English to German: How old are you?`\r\n> \r\n> `<pad> Wie alt sind Sie?</s>`\r\n\r\nOh that's great! Personally haven't tried fp16 myself; I can attest to poor results on int8, but I was just going off of other issues/discussions regarding the performance in fp16 (e.g. https://github.com/huggingface/transformers/issues/20287#issuecomment-1317691635) (EDIT: this issue is from before the relevant patches were merged/when all the weights were in fp16).\r\n\r\nJust curious though, could you check what dtype the fp16 Flan T5 XXL has its `wo` layers in? If I'm not mistaken, unless you manually disable it (i.e. `T5ForConditionalGeneration.__keep_fp32_modules = None`) it should set the `wo` layers to fp32. 
But if they're really all in fp16 now that's great!\r\n\r\nOn my end I'll still going to take a look into my issues with offloading into fp32 for completeness' sake.", "I now loaded T5 XXL using int8.\r\n`print(model.encoder.block[1].layer[1].DenseReluDense.wo.weight.dtype)\r\ntorch.float32\r\nprint(model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype)\r\ntorch.float32`\r\n\r\nAnd again got a satisfying response\r\n`<pad> Wie alt sind Sie?</s>`\r\n", "Yup, this is actually expected. There's no problem loading the other weights in 8-bit, as long as the `wo` layers are in fp32 as shown above.\r\n\r\nMy personal problem is that `device_map=\"auto\"` is acting strangely for me (perhaps it's calculating on the assumption that the `wo` layers are in 8-bit when in fact they'll be loaded in 32-bit, which causes the OOM error) so I've been making custom device maps in the meantime. The farthest I've gotten is the runtime error I mentioned previously regarding the different devices (one of them being a meta device), but I've yet to recreate that because I changed my code at some point and need to figure out how to get it back to how it used to be. I thought the monkey patches here might help, but I'm starting to think the breaking changes I made were done before I tried the monkey patches, resulting the persistent fp16 offloaded weights that come up even without the monkey patches.", "After running a few simple prompts, I can't see any difference in int8 output when compared to fp32 or fp16. If you have a more sophisticated prompt you wish to try, lmk", "> Yup, this is ...\r\n\r\n\r\nAre you sure you're using the latest versions of `transformers, accelerate and bnb`? Perhaps, first uninstall everything you currently have, then install the above python packages and reapply the monkey patches on top of the latest versions.\r\nAlso, what is your hardware setup like? How much GPU VRAM in total, CPU RAM? 
You may want to try also setting the max_memory map per GPU device (but that requires some tweaking and is card / model dependent). Also, even if you get it running, keep in mind that offloading to SSD makes things reaaaaly slow. Offloading to just CPU RAM is a bit better\r\n", "I've been installing `transformers` and `accelerate` from source, but yeah I haven't tried installing `bnb` from source, I'll try that.\r\n\r\nI'm working with around 12.7 GB CPU RAM and 15 GB VRAM (standard Colab GPU runtime, Tesla T4). I'll play around with the kwargs to `infer_auto_device_map` but I still have a hunch that `infer_auto_device_map` has no way of knowing that the `wo` modules will end up being larger than what it currently is accounting for. And yeah you're right I definitely should offload to CPU, I've just been spending the past few days trying to get the weights to be stored in fp32 first (even before the monkey patches, I previously kept on having the `wo` weights in fp16. I fixed it earlier but then ended up breaking it again).", "Ah sorry after rereading my comments I think I've been unclear with what I meant by performance degradation in int8.\r\n\r\nWhen we say that Flan T5 XXL can't handle 8-bit, we mean that we can't quantize every single parameter to 8-bit precision the way we traditionally would with a standard T5 (a little misleading to say that since I think `lm_head` layers also can't be in 8-bit); doing so leads to poor performance. The solution `transformers` implemented was to do standard 8-bit quantization everywhere except the `wo` layers, since those were the only layers that needed to be in 32-bit. If you do that, you get the expected full performance, as you've demonstrated with your examples.\r\n\r\nMy personal problem is that my VRAM isn't large enough to host the 8-bit quantized non-`wo` modules alongside the 32-bit `wo` modules (if the entire model was 8-bit quantized, I could though). Because of that I need to offload some weights. 
As you probably already know, the main branches of `bitsandbytes` and `transformers` are currently a little weird when it comes to using 8-bit quantization alongside offloading. You can offload but the offloaded weights won't be in 8-bit. That's why I decided to just offload the `wo` layers only, since they shouldn't be in 8-bit anyway.\r\n\r\nTwo further issues arise from this:\r\n\r\n1. `auto_device_map=True` doesn't work well because when `infer_auto_device_map` receives `dtype=torch.int8`, it calculates device allocation as if everything will be in int8, consequently underestimating how much space a `wo` module will take up, which ultimately leads to memory errors.\r\n2. We can avoid the above problem if we define our own device map. The furthest I've gotten with this however is the runtime error about the different devices.\r\n\r\nOn a side note, while writing this I tried loading the model with a device map that put the `wo` layers on CPU instead of the disk. Surprisingly it fit, and unsurprisingly the session crashed (out of RAM) when I tried to use the model. On a somewhat more positive note though, before it crashed I noticed that it was back in fp32, which was nice. Still unsure what's causing the jump back-and-forth between fp32 and fp16, but I'm still looking into it.", "got it. thanks. \r\nIn this PR there is a piece of code which specifies which modules to skip. Just specify `lm_head` and the `wo` layers there (+ any others as needed) or use a custom device_map. Then set `max_memory` for your single(?) GPU to the GPU VRAM - ~1.3-2.2GB (it will take a few attempts to get this amount to an optimal point).\r\nFrom what I gather, you should be able to host around 13.4 GB of int8 weights on the Tesla GPU and the rest (in fp32) onto CPU RAM + SSD. \r\nFlan XXL may turn out to be too big for your setup though - meaning that at least 4-5 GB will likely end up on the SSD", "Thanks for this! I'll take a look into it. 
I appreciate your help with all of this.", "Actually I was trying to use mT0 XXL for a different research project a while back, but I had difficulties just trying to load it into memory. But thanks for prompting me to take another look; I was reviewing my notebook to try to refresh my memory on what I tried and I'm only now seeing I never set `load_in_8bit` to `True`, so I'll try again soon. Thanks again!", "FYI here is what the model card of `mt0-xxl` states:\r\n\r\n```\r\nPrompt Engineering: The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. \r\nFor example, the prompt \"Translate to English: Je t'aime\" without the full stop (.) at the end, may result in the model trying to continue the French sentence.\r\n Better prompts are e.g. \"Translate to English: Je t'aime.\", \"Translate to English: Je t'aime. Translation:\" \"What is \"Je t'aime.\" in English?\", where it is clear for the model when it should answer. \r\nFurther, we recommend providing the model as much context as possible. \r\nFor example, if you want it to answer in Telugu, then tell the model, e.g. \"Explain in a sentence in Telugu what is backpropagation in neural networks.\".\r\n```", "cc @Muennighoff if you have any idea what might be wrong here 🙏 ", "@younesbelkada BTW did you read my comment above with some questions / suggestions? 
What do you think?\r\nhttps://github.com/huggingface/transformers/pull/20281#issuecomment-1409894311", "@alexconstant9108 Do you also get the same pad output for mT0 in FP32?", "> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n> \r\n> checkpoint = \"bigscience/mt0-xxl\"\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype=\"auto\", device_map=\"auto\")\r\n> \r\n> inputs = tokenizer.encode(\"Translate to English: Je t’aime.\", return_tensors=\"pt\").to(\"cuda\") outputs = model.generate(inputs)\r\n\r\nThat's very odd - Do you get the same with mt0-small? Here's what I get:\r\n\r\n```python\r\n!pip install -q transformers accelerate\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\ncheckpoint = \"bigscience/mt0-small\"\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype=\"auto\", device_map=\"auto\", offload_folder=\"./\")\r\ninputs = tokenizer.encode(\"Translate to English: Je t’aime.\", return_tensors=\"pt\").to(\"cuda\")\r\noutputs = model.generate(inputs)\r\nprint(tokenizer.decode(outputs[0]))\r\n\r\n<pad> I love you.</s>\r\n```" ]
1,668
1,677
1,668
CONTRIBUTOR
null
# What does this PR do? This PR adds the possibility of using a custom device map containing CPU and GPU devices when loading and running 8-bit models. This is useful in the context of large models, if someone wants to offload part of the model to `cpu` or to `disk`. Also added slow tests for this feature; let me know if you think I am missing any corner case. cc @sgugger closes #19090
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20281", "html_url": "https://github.com/huggingface/transformers/pull/20281", "diff_url": "https://github.com/huggingface/transformers/pull/20281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20281.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20280/comments
https://api.github.com/repos/huggingface/transformers/issues/20280/events
https://github.com/huggingface/transformers/pull/20280
1,451,812,665
PR_kwDOCUB6oc5DCS-3
20,280
[Proposal] Breaking change `zero-shot-object-detection` for improved consistency.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sahamrit Happy to hear your thoughts on this since you are the original implementor.\r\n\r\nI'm proposing many breaking changes here, which are mostly due to me not enforcing them enough during the PR (well, I think it was better to get the pipeline out sooner with some flaws rather than too late).\r\nSince I started rewriting some docs I found the inconsistencies quite harmful in how to describe pipelines to users imo.", "_The documentation is not available anymore as the PR was closed or merged._", "I will wait a few days to give @sahamrit a chance to give his opinion before merging." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This is a proposal to modify the output of `zero-shot-object-detection` to provide better alignment with other pipelines. The output is now strictly the same as `object-detection`, whereas before it would output lists of lists. The name `candidate_labels` is used throughout for consistency with other `zero-shot` pipelines. The pipeline is changed to `ChunkPipeline` to support batching cleanly. This removes all the lists and list-of-lists shenanigans; it's now a matter of the base pipeline handling all this, not this specific one. ### **Breaking change** It removes potential complex calls `pipe(images = [image1, image2], text_queries=[candidates1, candidates2])` to support only `pipe([{"image": image1, "candidate_labels": candidates1}, {"image": image2, "candidate_labels": candidates2}])` when dealing with lists and/or datasets. We could keep them, but it would add a lot of complexity to the code base; since the pipeline is rather young, I'd rather break to keep the code simpler, but we can revert this. ### **Breaking change** The name of the argument is now `image` instead of `images` since it expects by default only 1 image. This is revertible like the previous one. ### **Breaking change** The types are now simplified and flattened: `pipe(inputs) == [{**object1}, {**object2}]` instead of the previous `pipe(inputs) == [[{**object1}, {**object1}], [{**object2}]]`, where the different instances would be grouped by candidate labels within lists. IMHO this is not really desirable, since it would output empty lists and only adds superfluous indirection compared to `object-detection`. It is relatively change-free, meaning the results are the same for large models. It does change computation, however, since now the batching is handled by the pipeline itself. It **did** change the results for the small models, so there seems to be a real difference in how the models handle this. Since it didn't affect any of the large tests I think it's acceptable.
## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20280", "html_url": "https://github.com/huggingface/transformers/pull/20280", "diff_url": "https://github.com/huggingface/transformers/pull/20280.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20280.patch", "merged_at": 1668783448000 }
https://api.github.com/repos/huggingface/transformers/issues/20279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20279/comments
https://api.github.com/repos/huggingface/transformers/issues/20279/events
https://github.com/huggingface/transformers/pull/20279
1,451,796,701
PR_kwDOCUB6oc5DCPe8
20,279
[SegFormer] Add support for segmentation masks with one label
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "How soon will this PR be done? And why using BCEWithLogitsLoss not Dice loss?", "when is this going to be closed. Our team would like to use it", "when this becomes active, how would one use it.\r\n\r\nthank you nielsrogge for sending for review.", "hello @NielsRogge \r\n\r\nhope I didnt disturb\r\n\r\nI tried to peel off the classifier in pytorch and change the output channels to one, then manually compute the loss instead of getting it from the segformer huggingface object, so basically I just got the output and then did a dice loss myself.\r\n\r\nSo i tried to write bin seg myself kinda, and I started to get a bunch of negative values.\r\n\r\nAny idea why that happened, and how I could fix it? I mean, I replaced the last conv2d layer.", "Is this now automatically using dice loss when we set `num_labels = 1`? Maybe I missed it but it seems the documentation doesn't explain it.", "Hi @aegonwolf,\r\n\r\nWhen config.num_labels = 1, the binary cross-entropy loss is used, as can be seen here: https://github.com/huggingface/transformers/blob/1689aea73346816b936b84932e12b774974e61a6/src/transformers/models/segformer/modeling_segformer.py#L813-L817.", "What about a sigmoid function at the end or any way of scaling outputs to [0, 1] range?", "@nikolaJovisic the BCE loss uses sigmoid.", "@NielsRogge Thank you for your amazing work. However I am still struggling with this task, can you make an example notebook on how to finetune Segformer for 1 class only please? Thank you a lot! ", "@NielsRogge modeling_segformer.py script is changed, but modeling_tf_segformer.py seems to be unchanged, I get an error there when I try to train model for 1 class only." ]
1,668
1,692
1,671
CONTRIBUTOR
null
# What does this PR do? This PR makes it possible to fine-tune SegFormer in case you have a mask containing only a single label, i.e. your mask could look like [[255, 0], [0, 255]]. In this case, config.num_labels = 1 and the ignore index is 255. If this works fine, then we can add it to any other model supported by `AutoModelForSemanticSegmentation`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20279", "html_url": "https://github.com/huggingface/transformers/pull/20279", "diff_url": "https://github.com/huggingface/transformers/pull/20279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20279.patch", "merged_at": 1671551211000 }
https://api.github.com/repos/huggingface/transformers/issues/20278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20278/comments
https://api.github.com/repos/huggingface/transformers/issues/20278/events
https://github.com/huggingface/transformers/pull/20278
1,451,766,796
PR_kwDOCUB6oc5DCJCh
20,278
Image transforms functionality used instead
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20278). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? * Removes reimplementations of `center_to_corners` format * Removes `# Copied from` statements and imports directly instead ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20278", "html_url": "https://github.com/huggingface/transformers/pull/20278", "diff_url": "https://github.com/huggingface/transformers/pull/20278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20278.patch", "merged_at": 1668683774000 }
https://api.github.com/repos/huggingface/transformers/issues/20277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20277/comments
https://api.github.com/repos/huggingface/transformers/issues/20277/events
https://github.com/huggingface/transformers/pull/20277
1,451,735,584
PR_kwDOCUB6oc5DCCOt
20,277
Generate: general TF XLA contrastive search tests are now slow tests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
MEMBER
null
# What does this PR do? TF's XLA contrastive search tests were time-consuming because of the conversion to XLA, so this PR moves them to the slow test suite. Making these tests faster would imply creating smaller model configs for each model, which seems like overkill.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20277", "html_url": "https://github.com/huggingface/transformers/pull/20277", "diff_url": "https://github.com/huggingface/transformers/pull/20277.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20277.patch", "merged_at": 1668688487000 }
https://api.github.com/repos/huggingface/transformers/issues/20276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20276/comments
https://api.github.com/repos/huggingface/transformers/issues/20276/events
https://github.com/huggingface/transformers/pull/20276
1,451,711,854
PR_kwDOCUB6oc5DB9Dn
20,276
Fix result saving errors of pytorch examples
{ "login": "li-plus", "id": 39846316, "node_id": "MDQ6VXNlcjM5ODQ2MzE2", "avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/li-plus", "html_url": "https://github.com/li-plus", "followers_url": "https://api.github.com/users/li-plus/followers", "following_url": "https://api.github.com/users/li-plus/following{/other_user}", "gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}", "starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li-plus/subscriptions", "organizations_url": "https://api.github.com/users/li-plus/orgs", "repos_url": "https://api.github.com/users/li-plus/repos", "events_url": "https://api.github.com/users/li-plus/events{/privacy}", "received_events_url": "https://api.github.com/users/li-plus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20276). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20079. This PR fixed potential `KeyError`s on saving results in most PyTorch examples, by prefixing metric keys instead of directly accessing metrics by key. In addition, this PR replaced the argument `--max_length` with `--max_seq_length` in `run_swag_no_trainer.py`, to make the no-trainer version consistent with the trainer version and the README instructions. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/20079 - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20276", "html_url": "https://github.com/huggingface/transformers/pull/20276", "diff_url": "https://github.com/huggingface/transformers/pull/20276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20276.patch", "merged_at": 1668610265000 }
https://api.github.com/repos/huggingface/transformers/issues/20275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20275/comments
https://api.github.com/repos/huggingface/transformers/issues/20275/events
https://github.com/huggingface/transformers/issues/20275
1,451,672,988
I_kwDOCUB6oc5WhsWc
20,275
Convert LongT5 to ONNX
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @lewtun ", "Same silent issue on longformer when `global_attention_mask` is zero everywhere and running with ONNX Runtime. When at least one value is non-zero, it's fine.", "Anything I can do on my side to help fixing this issue? :)", "@jplu Do you have any warning during the onnx conversion? Things like `TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!`? In my case there were control flows in the model, leading to the issue.", "@fxmarty Yes I do have several of these `TracerWarning`:\r\n```\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:180: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n _global_block_ids_lower_bound = torch.tensor(-1.0, dtype=global_block_ids.dtype, device=global_block_ids.device)\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:188: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n num_globals = seq_len // global_block_size\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:190: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if num_globals > 0:\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:217: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n block_ids >= 0, torch.tensor(global_seq_len, dtype=block_ids.dtype, device=block_ids.device)\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:217: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n block_ids >= 0, torch.tensor(global_seq_len, dtype=block_ids.dtype, device=block_ids.device)\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:84: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if x.shape[dim] % block_len != 0:\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:67: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if not all(x.shape):\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:86: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n num_blocks = x.shape[dim] // block_len\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/models/longt5/modeling_longt5.py:89: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if 0 in output_shape:\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/modeling_utils.py:769: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.\r\n warnings.warn(\r\n/data/conda/beir/lib/python3.8/site-packages/transformers/modeling_utils.py:781: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_mask.shape[1] < attention_mask.shape[1]:\r\nIn-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode\r\nIn-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode\r\nIn-place op on output of tensor.shape. 
See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode\r\nWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.\r\nWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.\r\nWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.\r\nWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.\r\nWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied.\r\n```", "Could be a duplicate https://github.com/huggingface/transformers/issues/19297\r\n\r\nIn my opinion, you are falling into the issue that the example input provided during the conversion for `torch.onnx.convert` takes a path different than the one you wish to use during inference with the exported ONNX.\r\n\r\nA good reference is https://pytorch.org/docs/stable/generated/torch.jit.trace.html .\r\n\r\nWe should export along the `model.onnx` probably an `onnx_config.json` detailing which cases are supported by the exported ONNX.", "OK I see. Should-I test to change the input in the export code in order to see if it goes better with a more appropriate input? How did you manage to convert and validate this model as it appears in the list of available models to be exported? You have tested on the official Google one?", "Pinging @echarlaix , is longt5 the model you were dealing with? 
To me given the trace warnings, especially `num_globals = seq_len // global_block_size` that is a constant,if you want a quick and dirty fix you can indeed change the input length in the export code to match your input length for your use case using the .onnx.\r\n\r\nThe validation for onnx conversion is currently lacking, as it tests only on a very similar sequence length (typically 9 while the export is done with a sequence length = 8, see https://github.com/huggingface/transformers/blob/v4.24-release/src/transformers/onnx/convert.py#L382-L397 ). So if later on there is controlflow dependent on the sequence length, a single path is recorded during the export and you are screwed. We are looking for a clean solution for this.\r\n\r\nedit: although here `global_block_size = 16`, and `seq_len = 8` then `seq_len = 9` so I would expect the path to be the same in the two warnings concerning `num_globals`. The issue could come from an other of the following warnings.", "> Pinging @echarlaix , is longt5 the model you were dealing with? \r\n\r\nYes we had the same [issue](https://github.com/huggingface/optimum/issues/285#issuecomment-1191409005) for LongT5 model with transient-global attention (models with local attention were not causing problem). Now transferred to huggingface/transformers#18243.\r\n", "I have tried multiple `seq_len` for the input. As long as the value is not too lower than the \"original\" value it seems working, but if it goes too lower, I start to get a big \"max absolute difference\" and the validation doesn't pass. So indeed, it is not really usable and seems too unstable as you said @fxmarty. Thanks a lot anyway for your lights on this, I let the issue open, don't hesitate to ping me here if I can do something to help fixing on my side.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "not stale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "not stale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "still not", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,678
1,678
CONTRIBUTOR
null
### System Info transformers-cli env - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) - onnxruntime-gpu: 1.13.1 - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? ONNX model conversion: @morgan ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This command line: ``` python -m transformers.onnx --model pszemraj/long-t5-tglobal-base-16384-book-summary --feature seq2seq-lm-with-past --preprocessor tokenizer --framework pt . ``` Gives me the following error during export validation: ``` Validating ONNX model... Floating point exception (core dumped) ``` ### Expected behavior Having a usable and validated ONNX model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20275/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20274/comments
https://api.github.com/repos/huggingface/transformers/issues/20274/events
https://github.com/huggingface/transformers/pull/20274
1,451,617,590
PR_kwDOCUB6oc5DBoWW
20,274
Adding `zero-shot-object-detection` pipeline doctest.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20274", "html_url": "https://github.com/huggingface/transformers/pull/20274", "diff_url": "https://github.com/huggingface/transformers/pull/20274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20274.patch", "merged_at": 1668678956000 }
https://api.github.com/repos/huggingface/transformers/issues/20273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20273/comments
https://api.github.com/repos/huggingface/transformers/issues/20273/events
https://github.com/huggingface/transformers/pull/20273
1,451,604,939
PR_kwDOCUB6oc5DBlp-
20,273
[Doctest] Add configuration_deformable_detr.py
{ "login": "Saad135", "id": 22683922, "node_id": "MDQ6VXNlcjIyNjgzOTIy", "avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saad135", "html_url": "https://github.com/Saad135", "followers_url": "https://api.github.com/users/Saad135/followers", "following_url": "https://api.github.com/users/Saad135/following{/other_user}", "gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saad135/subscriptions", "organizations_url": "https://api.github.com/users/Saad135/orgs", "repos_url": "https://api.github.com/users/Saad135/repos", "events_url": "https://api.github.com/users/Saad135/events{/privacy}", "received_events_url": "https://api.github.com/users/Saad135/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Adds configuration_deformable_detr.py to utils/documentation_tests.txt Based on https://github.com/huggingface/transformers/issues/19487 @ydshieh can you please have a look? thanks :D <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20273/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20273", "html_url": "https://github.com/huggingface/transformers/pull/20273", "diff_url": "https://github.com/huggingface/transformers/pull/20273.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20273.patch", "merged_at": 1668619206000 }
https://api.github.com/repos/huggingface/transformers/issues/20272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20272/comments
https://api.github.com/repos/huggingface/transformers/issues/20272/events
https://github.com/huggingface/transformers/pull/20272
1,451,586,237
PR_kwDOCUB6oc5DBhgX
20,272
Adding doctest for `zero-shot-image-classification` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20272). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20272/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20272", "html_url": "https://github.com/huggingface/transformers/pull/20272", "diff_url": "https://github.com/huggingface/transformers/pull/20272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20272.patch", "merged_at": 1668615289000 }
https://api.github.com/repos/huggingface/transformers/issues/20271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20271/comments
https://api.github.com/repos/huggingface/transformers/issues/20271/events
https://github.com/huggingface/transformers/pull/20271
1,451,555,508
PR_kwDOCUB6oc5DBasw
20,271
Add TF protein notebook to notebooks doc
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,668
1,668
1,668
MEMBER
null
Add a link to the new TF protein LM notebook
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20271/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20271", "html_url": "https://github.com/huggingface/transformers/pull/20271", "diff_url": "https://github.com/huggingface/transformers/pull/20271.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20271.patch", "merged_at": 1668614932000 }
https://api.github.com/repos/huggingface/transformers/issues/20270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20270/comments
https://api.github.com/repos/huggingface/transformers/issues/20270/events
https://github.com/huggingface/transformers/pull/20270
1,451,447,017
PR_kwDOCUB6oc5DBCc4
20,270
Add StdScaler for time series Transformer model
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "thanks! will add... moving this to draft for now as i have to check this before adding", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20270). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20270). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? - [x] Add `loc` and `scale` outputs from current scalers and use both as static real-valued covariates - [ ] double check training/inference works as usual - [ ] add the StdScaler ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20270", "html_url": "https://github.com/huggingface/transformers/pull/20270", "diff_url": "https://github.com/huggingface/transformers/pull/20270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20270.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20269/comments
https://api.github.com/repos/huggingface/transformers/issues/20269/events
https://github.com/huggingface/transformers/issues/20269
1,451,415,935
I_kwDOCUB6oc5Wgtl_
20,269
cannot load bart encoder
{ "login": "icedpanda", "id": 45261170, "node_id": "MDQ6VXNlcjQ1MjYxMTcw", "avatar_url": "https://avatars.githubusercontent.com/u/45261170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/icedpanda", "html_url": "https://github.com/icedpanda", "followers_url": "https://api.github.com/users/icedpanda/followers", "following_url": "https://api.github.com/users/icedpanda/following{/other_user}", "gists_url": "https://api.github.com/users/icedpanda/gists{/gist_id}", "starred_url": "https://api.github.com/users/icedpanda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/icedpanda/subscriptions", "organizations_url": "https://api.github.com/users/icedpanda/orgs", "repos_url": "https://api.github.com/users/icedpanda/repos", "events_url": "https://api.github.com/users/icedpanda/events{/privacy}", "received_events_url": "https://api.github.com/users/icedpanda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@icedpanda I am not able to reproduce this error, Can you try to reproduce this in a colab notebook and share the notebook here ? ", "was working on the terminal but not in the notebook. Works now after reinstalling the notebook. " ]
1,668
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.0 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Failed to load BartEncoder, BartDecoder ```python from transformers.models.bart.modeling_bart import BartEncoder, BartDecoder ``` Error ``` ImportError: cannot import name 'SAFE_WEIGHTS_INDEX_NAME' from 'transformers.utils' (/home/usr/miniconda3/envs/mkg/lib/python3.10/site-packages/transformers/utils/__init__.py) ``` ### Expected behavior Able to load BartEncoder and BartDecoder
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20269/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20268/comments
https://api.github.com/repos/huggingface/transformers/issues/20268/events
https://github.com/huggingface/transformers/pull/20268
1,451,384,963
PR_kwDOCUB6oc5DA05m
20,268
Adding doctest for `zero-shot-classification` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20268). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20268", "html_url": "https://github.com/huggingface/transformers/pull/20268", "diff_url": "https://github.com/huggingface/transformers/pull/20268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20268.patch", "merged_at": 1668615302000 }
https://api.github.com/repos/huggingface/transformers/issues/20267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20267/comments
https://api.github.com/repos/huggingface/transformers/issues/20267/events
https://github.com/huggingface/transformers/pull/20267
1,451,370,508
PR_kwDOCUB6oc5DAxwW
20,267
Complete doc migration
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20267). All of your documentation changes will be reflected on that endpoint.", "LGTM, and thanks for the awesome work. I bag my courage to approve.\r\n", "well, I forgot to approve after leaving a comment, sorry." ]
1,668
1,668
1,668
CONTRIBUTOR
null
Reverts https://github.com/huggingface/transformers/pull/20125 Everything is handled on the doc-builder side now 😊
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20267", "html_url": "https://github.com/huggingface/transformers/pull/20267", "diff_url": "https://github.com/huggingface/transformers/pull/20267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20267.patch", "merged_at": 1668606217000 }
https://api.github.com/repos/huggingface/transformers/issues/20266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20266/comments
https://api.github.com/repos/huggingface/transformers/issues/20266/events
https://github.com/huggingface/transformers/pull/20266
1,451,364,516
PR_kwDOCUB6oc5DAwZb
20,266
Adding doctest for `visual-question-answering` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20266). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20266", "html_url": "https://github.com/huggingface/transformers/pull/20266", "diff_url": "https://github.com/huggingface/transformers/pull/20266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20266.patch", "merged_at": 1668615326000 }
https://api.github.com/repos/huggingface/transformers/issues/20265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20265/comments
https://api.github.com/repos/huggingface/transformers/issues/20265/events
https://github.com/huggingface/transformers/pull/20265
1,451,346,607
PR_kwDOCUB6oc5DAsgd
20,265
Adding doctest for `token-classification` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20265). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20265/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20265", "html_url": "https://github.com/huggingface/transformers/pull/20265", "diff_url": "https://github.com/huggingface/transformers/pull/20265.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20265.patch", "merged_at": 1668615720000 }
https://api.github.com/repos/huggingface/transformers/issues/20264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20264/comments
https://api.github.com/repos/huggingface/transformers/issues/20264/events
https://github.com/huggingface/transformers/pull/20264
1,451,318,342
PR_kwDOCUB6oc5DAmRY
20,264
Adding doctest for `text-generation` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20264). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20264/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20264", "html_url": "https://github.com/huggingface/transformers/pull/20264", "diff_url": "https://github.com/huggingface/transformers/pull/20264.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20264.patch", "merged_at": 1668614267000 }
https://api.github.com/repos/huggingface/transformers/issues/20263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20263/comments
https://api.github.com/repos/huggingface/transformers/issues/20263/events
https://github.com/huggingface/transformers/pull/20263
1,451,317,639
PR_kwDOCUB6oc5DAmHw
20,263
Improve pipeline testing
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20263). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,669
1,669
COLLABORATOR
null
# What does this PR do? Improve pipeline testing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20263/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20263", "html_url": "https://github.com/huggingface/transformers/pull/20263", "diff_url": "https://github.com/huggingface/transformers/pull/20263.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20263.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20262/comments
https://api.github.com/repos/huggingface/transformers/issues/20262/events
https://github.com/huggingface/transformers/pull/20262
1,451,297,996
PR_kwDOCUB6oc5DAhxJ
20,262
Adding doctest for `text-classification` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20262). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20262). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20262", "html_url": "https://github.com/huggingface/transformers/pull/20262", "diff_url": "https://github.com/huggingface/transformers/pull/20262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20262.patch", "merged_at": 1668615334000 }
https://api.github.com/repos/huggingface/transformers/issues/20261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20261/comments
https://api.github.com/repos/huggingface/transformers/issues/20261/events
https://github.com/huggingface/transformers/pull/20261
1,451,287,639
PR_kwDOCUB6oc5DAfgh
20,261
Adding doctest for `text2text-generation` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20261). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20261/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20261", "html_url": "https://github.com/huggingface/transformers/pull/20261", "diff_url": "https://github.com/huggingface/transformers/pull/20261.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20261.patch", "merged_at": 1668614229000 }
https://api.github.com/repos/huggingface/transformers/issues/20260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20260/comments
https://api.github.com/repos/huggingface/transformers/issues/20260/events
https://github.com/huggingface/transformers/pull/20260
1,451,258,586
PR_kwDOCUB6oc5DAZKW
20,260
Adding a doctest for `table-question-answering` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20260). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20260/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20260", "html_url": "https://github.com/huggingface/transformers/pull/20260", "diff_url": "https://github.com/huggingface/transformers/pull/20260.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20260.patch", "merged_at": 1668613542000 }
https://api.github.com/repos/huggingface/transformers/issues/20259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20259/comments
https://api.github.com/repos/huggingface/transformers/issues/20259/events
https://github.com/huggingface/transformers/pull/20259
1,451,238,895
PR_kwDOCUB6oc5DAU-l
20,259
Adding doctest for `question-answering` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20259). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20259). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20259", "html_url": "https://github.com/huggingface/transformers/pull/20259", "diff_url": "https://github.com/huggingface/transformers/pull/20259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20259.patch", "merged_at": 1668615380000 }
https://api.github.com/repos/huggingface/transformers/issues/20258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20258/comments
https://api.github.com/repos/huggingface/transformers/issues/20258/events
https://github.com/huggingface/transformers/pull/20258
1,451,229,833
PR_kwDOCUB6oc5DATAr
20,258
Adding doctest for `object-detection` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> I still think the `candidate_labels` argument/naming is not very aligned with zero-shot object detection but the PR looks good to me otherwise.\r\n\r\nHere it's just an alias.\r\nI'm really not sure I understand how it could be a bad name.\r\n\r\nThe content end up being within `label` section of the output, so saying they are `labels` doesn't seem all that shocking to me.\r\nThe fact that they are `candidate` is because there is no guarantee that you will be able to detect them.\r\n\r\nOverall I find `candidate_labels` not that bad.\r\n\r\nWhy I think `text_queries` is not the best choice:\r\n- It is not aligned with other `zero-shot` pipelines\r\n- `text_queries` implies `text` but you're not looking for text within an image but readers might understand this:\r\n ```pipe(image, text_query=\"Restaurant\")``` Are we looking for places where the text restaurant is written ?\r\n- `queries` implies asking for information so I'm somehow expecting to always receive something, I feel it conveys less well the idea of \"potentiality\" than `canddiate`.\r\n\r\nThe main counterargument in the original PR was that `candidate_labels` was using `hypothesis_template`. I feel like it's not really a strong stance, since `hypothesis_template` isn't really meant to be used so much (it's kind of a power user tool). Even then, I don't think just because there is no `hypothesis_template` should prevent us from using the same name, if the name is readable, and serves the same purpose, then it's a good name, the rest is implementation detail imo.\r\n\r\nHappy to hear your thoughts on why you think it fits better.", "\r\n> Here it's just an alias. \r\nI'm really not sure I understand how it could be a bad name.\r\n> \r\n> Happy to hear your thoughts on why you think it fits better.\r\n\r\n@Narsil my main concern is that zero-shot-classification pipeline selects one of the `candidate_labels` to assign a final class / label to the image, whereas zero-shot-object-detection queries the image for each query and might find instances of all query objects. It'd be less confusing to users if we add a better docstring example -> using an image where both candidate_labels=[\"head\", \"bird\"] are detected instead of just \"bird\".\r\n", "> @Narsil my main concern is that zero-shot-classification pipeline selects one of the candidate_labels to assign a final class / label to the image, whereas zero-shot-object-detection queries the image for each query and might find instances of all query objects. It'd be less confusing to users if we add a better docstring example -> using an image where both candidate_labels=[\"head\", \"bird\"] are detected instead of just \"bird\".\r\n\r\nOh, but no, `zero-shot-classification` does multi label classification actually. \r\nThe default is to normalize scores across labels, but it's not mandatory, by default the models can handle multiple label just fine (results tend to be more noisy though).\r\n\r\nhttps://huggingface.co/facebook/bart-large-mnli?candidateLabels=space+%26+cosmos%2C+scientific+discovery%2C+microbiology%2C+robots%2C+archeology&multiClass=true&text=A+new+model+offers+an+explanation+for+how+the+Galilean+satellites+formed+around+the+solar+system%E2%80%99s+largest+world.+Konstantin+Batygin+did+not+set+out+to+solve+one+of+the+solar+system%E2%80%99s+most+puzzling+mysteries+when+he+went+for+a+run+up+a+hill+in+Nice%2C+France.+Dr.+Batygin%2C+a+Caltech+researcher%2C+best+known+for+his+contributions+to+the+search+for+the+solar+system%E2%80%99s+missing+%E2%80%9CPlanet+Nine%2C%E2%80%9D+spotted+a+beer+bottle.+At+a+steep%2C+20+degree+grade%2C+he+wondered+why+it+wasn%E2%80%99t+rolling+down+the+hill.+He+realized+there+was+a+breeze+at+his+back+holding+the+bottle+in+place.+Then+he+had+a+thought+that+would+only+pop+into+the+mind+of+a+theoretical+astrophysicist%3A+%E2%80%9COh%21+This+is+how+Europa+formed.%E2%80%9D+Europa+is+one+of+Jupiter%E2%80%99s+four+large+Galilean+moons.+And+in+a+paper+published+Monday+in+the+Astrophysical+Journal%2C+Dr.+Batygin+and+a+co-author%2C+Alessandro+Morbidelli%2C+a+planetary+scientist+at+the+C%C3%B4te+d%E2%80%99Azur+Observatory+in+France%2C+present+a+theory+explaining+how+some+moons+form+around+gas+giants+like+Jupiter+and+Saturn%2C+suggesting+that+millimeter-sized+grains+of+hail+produced+during+the+solar+system%E2%80%99s+formation+became+trapped+around+these+massive+worlds%2C+taking+shape+one+at+a+time+into+the+potentially+habitable+moons+we+know+today." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20258/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20258", "html_url": "https://github.com/huggingface/transformers/pull/20258", "diff_url": "https://github.com/huggingface/transformers/pull/20258.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20258.patch", "merged_at": 1668682799000 }
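The comment thread in the record above notes that `zero-shot-classification` supports multi-label scoring, with normalization across labels only as the default. The following standalone sketch illustrates that distinction with plain Python and hypothetical NLI logits (no `transformers` dependency; the exact numbers and label names are invented for illustration, and the mapping to the pipeline's `multi_label` behavior is a simplification):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical entailment/contradiction logits an NLI model might produce
# for hypotheses like "This example is {label}." -- invented values.
labels = ["robots", "archeology", "microbiology"]
entailment = [2.0, 1.5, -1.0]
contradiction = [-1.0, -0.5, 2.5]

# Single-label mode (the default being discussed): entailment logits are
# softmaxed ACROSS labels, so scores compete and sum to 1.
single_label_scores = softmax(entailment)

# Multi-label mode: each label is scored independently from its own
# [contradiction, entailment] pair, so scores need not sum to 1 and
# several labels can score high at once.
multi_label_scores = [
    softmax([c, e])[1] for c, e in zip(contradiction, entailment)
]

for name, s1, s2 in zip(labels, single_label_scores, multi_label_scores):
    print(f"{name}: single-label={s1:.3f} multi-label={s2:.3f}")
```

With these invented logits, "robots" and "archeology" both score high in multi-label mode while the single-label scores are forced to share probability mass, which is the behavior the thread describes.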
https://api.github.com/repos/huggingface/transformers/issues/20257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20257/comments
https://api.github.com/repos/huggingface/transformers/issues/20257/events
https://github.com/huggingface/transformers/pull/20257
1,451,219,309
PR_kwDOCUB6oc5DAQry
20,257
Adding doctest for `image-to-text` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20257). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20257", "html_url": "https://github.com/huggingface/transformers/pull/20257", "diff_url": "https://github.com/huggingface/transformers/pull/20257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20257.patch", "merged_at": 1668615461000 }
https://api.github.com/repos/huggingface/transformers/issues/20256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20256/comments
https://api.github.com/repos/huggingface/transformers/issues/20256/events
https://github.com/huggingface/transformers/pull/20256
1,451,195,035
PR_kwDOCUB6oc5DALSE
20,256
Adding doctest for `image-segmentation` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20256). All of your documentation changes will be reflected on that endpoint.", "Would it be possible here to add additional examples for SegFormer (semantic segmentation) and MaskFormer (panoptic segmentation)?", "> Would it be possible here to add additional examples for SegFormer (semantic segmentation) and MaskFormer (panoptic segmentation)?\r\n\r\nWould it make a difference in what the example looks like ? I don't think so... no ?" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20256/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20256", "html_url": "https://github.com/huggingface/transformers/pull/20256", "diff_url": "https://github.com/huggingface/transformers/pull/20256.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20256.patch", "merged_at": 1668614215000 }
https://api.github.com/repos/huggingface/transformers/issues/20255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20255/comments
https://api.github.com/repos/huggingface/transformers/issues/20255/events
https://github.com/huggingface/transformers/issues/20255
1,451,186,593
I_kwDOCUB6oc5Wf1mh
20,255
Bug in MobileViTForSemanticSegmentation output shape
{ "login": "kevinpl07", "id": 18429675, "node_id": "MDQ6VXNlcjE4NDI5Njc1", "avatar_url": "https://avatars.githubusercontent.com/u/18429675?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevinpl07", "html_url": "https://github.com/kevinpl07", "followers_url": "https://api.github.com/users/kevinpl07/followers", "following_url": "https://api.github.com/users/kevinpl07/following{/other_user}", "gists_url": "https://api.github.com/users/kevinpl07/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevinpl07/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinpl07/subscriptions", "organizations_url": "https://api.github.com/users/kevinpl07/orgs", "repos_url": "https://api.github.com/users/kevinpl07/repos", "events_url": "https://api.github.com/users/kevinpl07/events{/privacy}", "received_events_url": "https://api.github.com/users/kevinpl07/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nMobileViT's semantic segmentation is indeed very low-resolution, as seen in this Space: https://huggingface.co/spaces/Matthijs/mobilevit-deeplab-demo. cc @hollance \r\n\r\nIf you want to look into models that output more fine-grained segmentation results, I'd recommend checking out [SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer) and [MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer).", "The original model will do a bilinear upsampling to 513x513 which makes the mask larger but also more blurry. In the Transformers library we don't do this upsampling in the model as we don't know ahead of time how you want to use the results, and we want to avoid doing multiple upsampling operations. That's why the Transformers model outputs the low resolution version.", "Closing this issue as it has been resolved, feel free to reopen." ]
1,668
1,669
1,669
CONTRIBUTOR
null
### System Info Python 3.9.6 transformers==4.24.0 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-small") model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_mask = logits.argmax(1).squeeze(0) ``` ### Expected behavior When running inference on pretrained MobileViTForSemanticSegmentation as in [here](https://huggingface.co/apple/deeplabv3-mobilevit-small), I get very bad segmentation mask in the huggingface online inference. When running locally I see that the output mask has a shape of (1, 21, 32, 32). I don't think outputting a 32x32 mask is intended or am I wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20255/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20254/comments
https://api.github.com/repos/huggingface/transformers/issues/20254/events
https://github.com/huggingface/transformers/pull/20254
1,451,176,648
PR_kwDOCUB6oc5DAHUM
20,254
Adding doctest example for `image-classification` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20254). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20254). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20254/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20254", "html_url": "https://github.com/huggingface/transformers/pull/20254", "diff_url": "https://github.com/huggingface/transformers/pull/20254.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20254.patch", "merged_at": 1668614998000 }
https://api.github.com/repos/huggingface/transformers/issues/20253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20253/comments
https://api.github.com/repos/huggingface/transformers/issues/20253/events
https://github.com/huggingface/transformers/pull/20253
1,451,164,323
PR_kwDOCUB6oc5DAEl3
20,253
Rephrasing the link.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20253). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20253). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/pull/20226#discussion_r1023296368 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20253/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20253", "html_url": "https://github.com/huggingface/transformers/pull/20253", "diff_url": "https://github.com/huggingface/transformers/pull/20253.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20253.patch", "merged_at": 1668614985000 }
https://api.github.com/repos/huggingface/transformers/issues/20252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20252/comments
https://api.github.com/repos/huggingface/transformers/issues/20252/events
https://github.com/huggingface/transformers/issues/20252
1,451,130,122
I_kwDOCUB6oc5Wfn0K
20,252
Running the run_mlm_flax on TPU v4 pods
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Also cc @sanchit-gandhi ", "Hey @peregilk! Cool to see that you're using the Flax training scripts! Nice that you have TPU v4 pods as well 🚀\r\n\r\nThe scripts are only tested on single TPU devices (i.e. TPU v2-8, v3-8 and v4-8), however they can be made to work in a multi-host set-up.\r\n\r\nHow are you launching the script on a TPU v4-16/32? Are you SSH'd into worker 0? You'll need to launch the same command on all 2/4 TPU workers for a v4-16/32 respectively.", "Hi @sanchit-gandhi. I am running a slightly altered version of the scripts, based on the run_mlm_stream.py. I am both installing the software and starting the training simultaneously on all the TPU VMs. I am using a script Ive made ([ttconnect](https://github.com/peregilk/ttconnect)) for experiments like this. \r\n\r\nThe script runs also without any issues. Both on individual TPUs and on any sized pods. However, the result from training on a TPU v4-8 and on a TPU Pod v4-32 is **exactly** the same. Meaning the loss is the same, the training time is the same, etc. I really want the batches to scale across the pods. I am doing additional training of XLM-RoBERTa here, and it is trained with batch sizes around 3k. Then you need multiple TPUs. I want to increase batch size, not speed. My theory is that currently the batches do not span across the TPUs. \r\n\r\nI made an attempt to simply multiplying the batch size in the script with jax.process_count(). That did not work. ", "Hey @peregilk,\r\n\r\nThanks for sharing those details. Your set-up looks good - the script you've made with `ttconnect` is super nice! The important thing is to run the same command across devices, which you are doing with that set-up.\r\n\r\nThe behaviour you have described seems to suggest that you're replicating exactly the same training across all four of your TPU devices. 
The batch size **should** scale with number of TPU devices to give you appropriate data parallelism:\r\nhttps://github.com/huggingface/transformers/blob/4bb07647504a277398856e828fa48ddbec97678e/examples/flax/language-modeling/run_mlm_flax.py#L654\r\n\r\nCould you verify that the number of devices is indeed 32?\r\n```python\r\nimport jax\r\nprint(jax.device_count())\r\n```", "This is returning 16 on the v4-32. This is correct according to the [user guide](https://cloud.google.com/tpu/docs/v4-users-guide) since the v4 have 4 double chips. Could that be the cause of any problems?\r\n\r\nMultiplying by jax.device_count() as I suggested is then definitively wrong.\r\n\r\nFYI: I did run this code both with v3-8 and with v4-8. I then did double my **per_device_batch_size** before getting OOM errors.", "Okay, this could well be part of the problem! Could you try printing out all the different calls from this sub-section of the guides on `pmap` (except `pmap`) https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap:\r\n\r\n```python\r\nimport jax\r\n\r\nprint(jax.devices())\r\nprint(jax.local_devices())\r\n...\r\nprint(jax.process_count())\r\n```\r\nJust to see what the right one is!", "The TPU v4-32 returns the following\r\n```\r\njax.devices():\r\n[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=2, process_index=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=3, process_index=0, coords=(1,1,0), core_on_chip=0), TpuDevice(id=4, process_index=1, coords=(0,0,1), core_on_chip=0), TpuDe\r\nvice(id=5, process_index=1, coords=(1,0,1), core_on_chip=0), TpuDevice(id=6, process_index=1, coords=(0,1,1), core_on_chip=0), TpuDevice(id=7, process_index=1, coords=(1,1,1), core_on_chip=0), TpuDevice(id=8, process_index=2, coords=(0,0,2), core_on_chip=0), TpuDevice(id=9, process_index=2, coords=(1,0,2), core_on_chip=0), TpuDevice(i\r\nd=10, process_index=2, coords=(0,1,2), 
core_on_chip=0), TpuDevice(id=11, process_index=2, coords=(1,1,2), core_on_chip=0), TpuDevice(id=12, process_index=3, coords=(0,0,3), core_on_chip=0), TpuDevice(id=13, process_index=3, coords=(1,0,3), core_on_chip=0), TpuDevice(id=14, process_index=3, coords=(0,1,3), core_on_chip=0), TpuDevice(id=15, process_index=3, coords=(1,1,3), core_on_chip=0)]\r\n\r\njax.local_devices():\r\n[TpuDevice(id=12, process_index=3, coords=(0,0,3), core_on_chip=0), TpuDevice(id=13, process_index=3, coords=(1,0,3), core_on_chip=0), TpuDevice(id=14, process_index=3, coords=(0,1,3), core_on_chip=0), TpuDevice(id=15, process_index=3, coords=(1,1,3), core_on_chip=0)]\r\n\r\njax.process_index():\r\nworker-0: 1\r\nworker-1: 2\r\nworker-2: 3\r\nworker-3: 4\r\n\r\njax.device_count():\r\n16\r\n\r\njax.local_device_count():\r\n4\r\n\r\njax.process_count():\r\n4\r\n```\r\n\r\nThe TPU v4-8 returns the following:\r\n```\r\njax.devices():\r\n[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=2, process_index=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=3, process_index=0, coords=(1,1,0), core_on_chip=0)]\r\n\r\njax.local_devices():\r\n[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=2, process_index=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=3, process_index=0, coords=(1,1,0), core_on_chip=0)]\r\n\r\njax.process_index():\r\n0\r\n\r\njax.device_count():\r\n4\r\n\r\njax.local_device_count():\r\n4\r\n\r\njax_process_count():\r\n1\r\n```\r\n\r\nMore info:\r\n```\r\njax.print_environment_info() \r\njax: 0.3.23 \r\njaxlib: 0.3.22 \r\nnumpy: 1.22.4 \r\npython: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] \r\njax.devices (16 total, 4 local): [TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0) TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0) ... 
TpuDevice(id=14, process_index=3, coords=(0,1,3), core_on_chip=0) TpuDevice(id=15, process_index=3, coords=(1,1,3), core_on_chip=0)] \r\nprocess_count: 4 \r\n``` \r\n\r\n \r\n\r\n\r\n\r\n", "@sanchit-gandhi I found something very interesting that might be the source of most of my confusion here. When inserting a breakpoint into my code here: [breakpoint](https://huggingface.co/pere/roberta-debug-32/blob/main/run_mlm_flax_stream.py#L454)\r\n\r\nI notice that the value of jax.device_count() actually is 4(!!), and the jax_process_count() returns 1. Starting python from the command line, importing jax, and then printing the same \"jax.device_count()\", the value is 16.\r\n\r\nI do not have time to dig more into this right now. Just thought that I should mention this in case you decide to look more into this.\r\n", "@sanchit-gandhi I think I have been able to isolate the problem. This can be run directly from the command line on a v4-32:\r\n```python\r\n>>> import jax \r\n>>> jax.device_count()\r\n16 \r\n``` \r\n \r\nHowever, importing `TrainingArguments` seem to change the number of visible devices:\r\n```python\r\n>>> import jax \r\n>>> from transformers import TrainingArguments \r\n>>> jax.device_count() \r\n4 \r\n``` \r\nI can not see why this should happen. I also see the following error that might give a hint about what is going on:\r\n```python\r\n>>> import jax \r\n>>> jax.device_count() \r\n16 \r\n>>> from transformers import TrainingArguments \r\n[percpu.cc : 557] RAW: rseq syscall failed with errno 22 \r\n>>> from transformers import TrainingArguments \r\n>>> jax.device_count() \r\n16 \r\n``` \r\n ", "Great job at tracing it to a JAX-Transformers interaction! That's super weird - does this happen with just `TrainingArguments`, or other Transformers modules too (i.e. `AutoConfig`)? Does swapping the order of your imports change the behaviour? 
\r\n```python\r\n>>> from transformers import TrainingArguments \r\n>>> import jax \r\n>>> jax.device_count()\r\n```\r\n(we need to get a TPU v4 to test these issues!)", "Seem to be a bit of a Schrödinger's cat-problem. Whether you look at it determines if it is dead...;) \"Looking\" at jax.device_count() (that probably activates the device) seems to let you import TrainingArguments without breaking the pods.\r\n\r\nSwitching transformers and jax imports does not help. It still reports 4. \r\n\r\nI think I have tried all the other transformer modules, and I have not been able to reproduce this with any of them.\r\n\r\n\r\n\r\n", "I'm not able to reproduce this. Running on a v4-16:\r\n```python\r\nIn [1]: import jax\r\n\r\nIn [2]: from transformers import TrainingArguments\r\n\r\nIn [3]: jax.device_count()\r\nOut[3]: 8\r\n```\r\n(v4-16 = 8 chips = 8 jax devices)\r\n\r\n@peregilk can you share your jax, jaxlib, libtpu-nightly, and transformers versions? Also make sure you're creating the TPUv4 with `--version=tpu-vm-v4-base`\r\n", "Thanks @skye. \r\nFor reference, in the reported error I was using `--runtime-version=v2-alpha-tpuv4-pod` with the following libraries.\r\n```\r\njax: 0.3.23 \r\njaxlib: 0.3.22\r\nlibtpu-nightly: 0.1.dev20221109 \r\ntransformers: 4.24.0\r\n```\r\n\r\nNot reported above, when debugging I actually also tried using `--runtime-version=tpu-vm-v4-base` but did get:\r\n```python\r\nIn [1]: import jax\r\nIn [2]: jax.device_count()\r\nOut[2]: 4\r\n```\r\n\r\nI might have done a mistake when creating this pod. I will try from scratch again using `--runtime-version=tpu-vm-v4-base`. \r\n\r\nThanks.\r\n", "Thank you @skye! 🙌", "Ah yeah, `v2-alpha-tpuv4-pod` confusingly was only for running TF on a pod slice, and would prevent jax from running across the slice. So that explains it. 
You should always use `tpu-vm-v4-base` with jax now (or `tpu-vm-base` for v2 and v3).\r\n\r\nYou can always check the Cloud TPU docs for the latest gcloud commands (I like https://cloud.google.com/tpu/docs/run-calculation-jax and https://cloud.google.com/tpu/docs/jax-pods). I understand it's hard to know when things change; hopefully they won't change very frequently moving forward :)", "Thanks a lot @skye! I can now see the devices after loading Transformers. I have also verified that is calculates the batch size correctly:\r\n`train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count()` \r\n\r\nWith `per_device_train_batch_size=62` on a v4-8, this means `batch_size=248`. This runs on the single TPU.\r\n\r\nOn a v4-32 this becomes `batch_size=992`. Here I am still getting OOM-errors. I also reduced the batch size but I still get OOM errors. \r\n\r\nAre there any other changes that needs to be done here?\r\n", "I'm not very familiar with using `transformers`, but you may need to use `jax.local_device_count()` instead of `jax.device_count()` somewhere? See https://jax.readthedocs.io/en/latest/multi_process.html. Let me know if you still have questions, this can be tricky!", "Thanks a lot @skye and @sanchit-gandhi for assisting in this. Really useful comments. It seems like splitting between the nodes simply isnt implemented in the code I am using. @agemagician actually implemented this in [pull #16527](https://github.com/huggingface/transformers/pull/16527) but it is only added to [run_mlm_flax_t5.py](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py). 
It is not implemented for the other [run_mlm-scripts](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) and not in [run_mlm_flax_streaming.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py) that is the one I am using.\r\n\r\nI can make a pull request to the other scripts, basically doing [this change](https://github.com/huggingface/transformers/pull/16527/commits/32d450552dd51729de1af336ab9705d9ca4df9e5). However, there is one remaining issue that needs to be resolved first.\r\n\r\nFor me (at least when I am using the streaming script), this turns out being extremely slow on the pods. Here is a speed comparison. All running `seq_length=512` and `per_device_train_batch_size=56`.\r\n\r\n| device | batch_size | seconds per iteration |\r\n|--------|------------|-----------------------|\r\n| v4-8 | 224 | 1 |\r\n| v4-64 | 1792 | 32 |\r\n| v4-128 | 3584 | 220 |\r\n\r\nCurrently this is way too slow to do real training. I have not been able to test this on the non-streaming scripts, and have not done any attempts at trying to understand where the slowdown is. Maybe any of you have theories about what could be wrong here? It is also worth noting that starting up training (initialising from a pretrained checkpoint) typically takes 4-5 hours (same time for both single TPUs and pods). This is however not a showstopper for doing pretraining.\r\n\r\n\r\n\r\n\r\n\r\n", "@sanchit-gandhi I have not been able to fix this yet, but I think that I at least have been able to pin down the bottleneck here.\r\n\r\n[This iteration](https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py#L292-L294) is **extremely slow**. The entire iteration takes a couple of minutes per training step. 
Not sure why it is so slow though, and I do not see why \"id\" and \"text\" are excluded here. The grouping is [done differently](https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/examples/flax/language-modeling/run_mlm_flax.py#L580-L593) in the non-streaming dataset and these scripts seem to run a lot faster.\r\n\r\nThis actually also turns out to be the reason for the long startup time. The entire evaluation set is pre-tokenized and grouped, and then iterated over. With 50k steps in the evaluation set, this takes several hours. When reducing the eval set to just a few samples, the startup is almost instant. \r\n", "@sanchit-gandhi: I now have a working version that runs decently fast on the pods! I am down from 220 sec/it to around 10s/it on a v4-128.\r\n\r\nI made the following change to the streaming code:\r\n```python\r\n# samples = {\r\n# k: samples[k] + tokenized_samples[k] for k in [\"input_ids\", \"attention_mask\", \"special_tokens_mask\"]\r\n# }\r\nsamples[\"input_ids\"] += tokenized_samples[\"input_ids\"]\r\nsamples[\"attention_mask\"] += tokenized_samples[\"attention_mask\"]\r\nsamples[\"special_tokens_mask\"] += tokenized_samples[\"special_tokens_mask\"]\r\n```\r\nFor some reason this is a lot faster, and fast enough to be \"useful\". I still do not think this is optimal though. Tokenising and grouping is still slowing down the training considerably when you are using a streaming dataset.\r\n\r\n", "I'm guessing using `+=` is a lot faster because Python is smart enough to extend the `samples` lists in-place, whereas the original implementation will end up completely rewriting each list. If that's right, I think using `+=` is the best you can do short of multi-threading (I'm not a Python performance expert though).", "There are a few other things in the script that seem suboptimal. For instance are the tokenization not split across the VMs. 
\r\n\r\n@skye: Do you have an estimate of what performance should ideally be expected here? Let's say one training step takes 1 second on a v4-8. How long should it take to run it on a v4-128? I guess there is some overhead in dividing the job across the TPUs, right? Just looking for an estimate on how much the current performance depends on the CPUs.\r\n\r\n\r\n", "Hey @peregilk! Sorry for the delayed response. \r\n\r\nWe can't use multiple processes with Datasets' map method when using a streaming dataset. This is because we read the raw dataset's tar file and iterate over the bytes incrementally, iterating over the dataset samples and loading them into memory under a single file at a time. This is why we don't pass the `num_proc` arg to `.map` when tokenising the dataset.\r\n\r\nIf your dataset is small, it might be worth downloading the dataset, pre-processing it and saving it to cache (all done under the hood by Datasets for a non-streaming dataset)? Otherwise this is part of the trade-off for using streaming datasets! We have no disk space constraints but have to load data on the fly.", "Thanks @sanchit-gandhi. In my case, storing the dataset locally is not an option. I would then have to attach a disk to each of the pods, and for the large pods that is not an option.\r\n\r\nI understand the samples need to be tokenized before it is possible to shard them across the TPUs, and I also understand that this in reality needs to be done on a single TPU VM. However, I still see more than 10 seconds per step here - it just seems to be a lot. \r\n\r\nDo you know if it is possible to pre-tokenize (or even pre-shard) a dataset and keep it streaming? Is it worth looking into that, or do you think it is better to look more closely at what is taking time here?\r\n\r\nEach TPU VM is quite a capable machine (200 CPU cores). 
Even if it is hard to split this over multiple VMs, are there better ways of using the VM that needs to do the processing?\r\n\r\n\r\n\r\n\r\n\r\n", "> Do you have an estimate of what performance should ideally be expected here? Let's say one training step takes 1 second on a v4-8. How long should it take to run it on a v4-128?\r\n\r\nSorry, I missed this earlier. Not sure it's still useful, but for batch parallelism, you should expect near linear scaling if you keep the per-device batch size the same. I.e. if you increase the global batch size 16-fold going from v4-8 -> v4-128, the step time should remain constant. If you keep the global batch size the same (i.e. decrease the per-device batch size as you increase devices), the speedup should be roughly linear until you reach a certain minimum per-device batch size.", "Thanks a lot @skye. Great to get this confirmed. Basically the script today runs 10X slower than it potentially should. Or, put another way: 90% of the time is used for preparing the dataset and 10% is used efficiently for training.\r\n\r\nIf I understand correctly, @sanchit-gandhi, there will soon be a flax implementation for Whisper with the streaming dataset. I will test this as well, and see if I get the same issues here.\r\n\r\nI have a few ideas about how to figure out what is really going on here, and I will start looking into this more thoroughly early next year. \r\n\r\nHope it is OK that I am also tagging @lhoestq .\r\n\r\n", "You can use something like a torch DataLoader with num_workers > 0 with your streaming dataset. This way you load and collate the data in parallel to your forward and backward passes.", "Thanks a lot @lhoestq. If I understand correctly, the way this works on streaming datasets is that the DataLoader is starting a worker for each of the dataset shards. 
So if you have the compute capacity, the optimal setting is `num_workers=dataset.n_shards` (with my test dataset this is 85).\r\n\r\nI tried implementing this like:\r\n```python\r\n# Replace\r\n# training_iter = iter(tokenized_datasets)\r\ntraining_iter = iter(torch.utils.data.DataLoader(tokenized_datasets.with_format(\"torch\"), batch_size=1, shuffle=False, num_workers=dataset.n_shards, collate_fn=lambda x: x))\r\n```\r\n\r\nMy reference is **1 sec/iteration on a v4-8**. According to @skye this should continue to be **1 sec/iteration on a v4-128** with my setup. As shown above, I started at **220 sec/iteration on a v4-128**. Before the suggestion from @lhoestq, I was down to **11 sec/iteration**. After adding the Torch DataLoader this is reduced to **5 sec/iteration**.\r\n\r\nEven if things are looking way better, I still think this can be improved further. I took a look at the load of the VM's CPUs, and the load is still very low: approx 10% with some very short peaks. All cores are used. \r\n\r\nI am willing to share what I have so far here. @patrickvonplaten: Are you interested in merging the support for the TPU v4-pods into [run_mlm_flax_stream.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py)? Maybe others can contribute and improve on this as well? \r\n\r\n", "Sorry for dropping the ball here @peregilk! I understand that your MLM experiments are working quite well currently?\r\n\r\n> Are you interested in merging the support for the TPU v4-pods into [run_mlm_flax_stream.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py)? Maybe others can contribute and improve on this as well?\r\n\r\nThis would be super! We could start with your working MLM streaming script? 
Feel free to open a PR on transformers if you're interested and tag me 🤗 happy to iterate with you here!", "Yes, @sanchit-gandhi, the training is running at acceptable speed. I am currently training some larger models. When I get the results from these, and am certain that everything really works, I'll open a PR. " ]
1,668
1,681
null
CONTRIBUTOR
null
### System Info transformers 4.24.0 ### Who can help? @patil-suraj I am having problems scaling the run_mlm_flax scripts so that they run on TPU VM v4 Pods (ie the v4-16, v4-32 etc). When running "out of the box", the performance is exactly the same as when running on a v4-8. To me this indicates that I am feeding a lot of empty data. The max `per_device_train_batch_size` for 512 sequences in RoBERTa is 62 in both cases, but since the output is identical, it is obviously not scaling. From trying to understand the code, it seems to be logical to multiply the batch size here with the `jax.process_count()` ([src example](https://huggingface.co/pere/roberta-base-exp-32B/blob/main/run_mlm_flax_stream.py#L452)). However, this does not seem to be the way to approach it. Any ideas about how to approach this? Is the script tested on v4s? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction See explanation above. ### Expected behavior Expect the batch size to scale automatically.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20252/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/20251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20251/comments
https://api.github.com/repos/huggingface/transformers/issues/20251/events
https://github.com/huggingface/transformers/issues/20251
1,450,864,768
I_kwDOCUB6oc5WenCA
20,251
Consistently getting lower log probabilities for more probable sequences
{ "login": "hxiaoyang", "id": 98200137, "node_id": "U_kgDOBdpqSQ", "avatar_url": "https://avatars.githubusercontent.com/u/98200137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hxiaoyang", "html_url": "https://github.com/hxiaoyang", "followers_url": "https://api.github.com/users/hxiaoyang/followers", "following_url": "https://api.github.com/users/hxiaoyang/following{/other_user}", "gists_url": "https://api.github.com/users/hxiaoyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/hxiaoyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hxiaoyang/subscriptions", "organizations_url": "https://api.github.com/users/hxiaoyang/orgs", "repos_url": "https://api.github.com/users/hxiaoyang/repos", "events_url": "https://api.github.com/users/hxiaoyang/events{/privacy}", "received_events_url": "https://api.github.com/users/hxiaoyang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @xiaoyangnickhu 👋 \r\n\r\nI believe your snippet has an issue with indexing when appending to `input_logprobs`. `all_tokens_logprobs` contains the log probs for the NEXT token, not for the current token. i.e. accessing `all_tokens_logprobs[batch_idx, 0, token]` has the log probs for the token at position `1`, not for the token at position `0`.\r\n\r\nFactoring that in, you get the outcomes that you are expecting. For instance, in the first example, `professor` is the most likely token for the last slot 🤗 \r\n\r\n_______________________________________________\r\nSnippet with the indexing correction:\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom torch.nn.functional import log_softmax\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"facebook/opt-6.7b\",\r\n device_map='auto')\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/opt-6.7b\", use_fast=False)\r\n\r\ndef score(prompt):\r\n input_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\r\n input_tokens = [tokenizer.decode(id) for id in input_ids[0]]\r\n input_logprobs = []\r\n logits = model(input_ids).logits\r\n all_tokens_logprobs = log_softmax(logits.double(), dim=2)\r\n for k in range(1, input_ids.shape[1]):\r\n input_logprobs.append(all_tokens_logprobs[:, k-1, input_ids[0,k]])\r\n input_logprobs = [input_logprobs[k].detach().numpy()[0] for k in range(len(input_logprobs))]\r\n return input_tokens, input_logprobs\r\n\r\ndef display(prompt):\r\n input_tokens, input_logprobs = score(prompt)\r\n out_str = \"\"\r\n for i in range(len(input_logprobs)):\r\n out_str = out_str + str(input_tokens[i+1]) + \": \" + str(input_logprobs[i]) + \" \"\r\n print(out_str)\r\n```", "The fix works. Thanks!" ]
1,668
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten @gante @nars ### Information - [X] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have been using OPT models (125m~66b) to score sequences (see #20008) from a set of multiple choice problems (8 choices). I found that, on average, the wrong sequences are ~10% (slightly below 1/8) likely to have the maximum probability (among the 8 choices), but the correct sequence is ~1% likely to have the maximum probability. GPT-3 displays the exact opposite (expected) behaviour. My current hypotheses are 1. I calculated log probabilities incorrectly; see `score` function below. Although I think we figured it out here #20008, with the help of @gante. 2. `model(input).logits` might have returned the wrong thing. To reproduce the error (not on my specific task but in general): 1. 
Define ```python model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", device_map='auto') tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False) def score(prompt): input_ids = tokenizer(prompt, return_tensors="pt").input_ids input_tokens = [tokenizer.decode(id) for id in input_ids[0]] input_logprobs = [] logits = model(input_ids).logits all_tokens_logprobs = log_softmax(logits.double(), dim=2) for k in range(input_ids.shape[1]): input_logprobs.append(all_tokens_logprobs[:,k,input_ids[0,k]]) input_logprobs = [input_logprobs[k].detach().numpy()[0] for k in range(len(input_logprobs))] return input_tokens, input_logprobs def display(prompt): input_tokens, input_logprobs = score(prompt) out_str = "" for i in range(len(input_logprobs)): out_str = out_str + str(input_tokens[i]) + ": " + str(input_logprobs[i]) + " " print(out_str) ``` 2. Run `display` on pairs of prompts and compare probabilities. For example: - Run `display("he works at the university as a professor")` - Output: ` </s>: -9.622528113218403 he: -3.4190597141055896 works: -11.342809676330889 at: -7.323995369804066 the: -7.797857269329157 university: -8.355341028220346 as: -8.410616662546053 a: -7.980906483021017 professor: -10.740861657554035 ` - Run `display("he works at the university as a clown")` - Output: `</s>: -9.584896765616625 he: -3.4396850232969243 works: -11.341423123411294 at: -7.338279654205086 the: -7.808466620856404 university: -8.343803691603435 as: -8.401746292912188 a: -7.94869412338276 clown: -6.500715861445045` - Note that the `LogProb("professor" | "he works at the university as a") < LogProb("clown" | "he works at the university as a")`. 3. 
Additional examples: - `display("he could not play due to injury")` - `</s>: -9.609577798790037 he: -3.4115336573055703 could: -9.250618499206704 not: -6.846035151696857 play: -9.919994452618948 due: -9.421663548632559 to: -8.411191754931467 injury: -9.714442198832101 ` - `display("he could not play due to apple")` - `</s>: -9.595368031203268 he: -3.402465835779653 could: -9.260463627711584 not: -6.82763851395037 play: -9.899493418563107 due: -9.375417731887582 to: -8.429231628522128 apple: -7.371283146690385 ` - `display("Q: who makes the iphone? A: Apple")` - `</s>: -9.735390750532165 Q: -4.982232076966033 :: -11.777887545688749 who: -9.614314712501576 makes: -13.181075048715304 the: -7.544189630785205 : -4.4103913220035835 iph: -14.036189109433794 one: -12.96041481602171 ?: -12.274218276071288 A: -10.156031264618123 :: -12.576397941389803 Apple: -9.243583670812342 ` - `display("Q: who makes the iphone? A: Banana")` - `</s>: -9.749204391127897 Q: -4.9924473604492725 :: -11.771810712752153 who: -9.594715884838685 makes: -13.212738897899317 the: -7.47622315378366 : -4.443921992316447 iph: -14.110338198583378 one: -12.932230614752198 ?: -12.295626501904344 A: -10.15586013628168 :: -12.57128175779327 Banana: -7.862753006007329 ` ### Expected behavior Should be getting higher log probabilities for more probable sequences. For example, it should be the case that `LogProb("professor" | "he works at the university as a") > LogProb("clown" | "he works at the university as a")`. There also appears to be very few tokens with a high (close to 0) probability. It is also weird to me that the first token (after `</s>`) seems to have a very high probability. Basically the model displays the exact opposite of the expected behaviors. This has truly been puzzling. Any input is appreciated!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20251/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20250/comments
https://api.github.com/repos/huggingface/transformers/issues/20250/events
https://github.com/huggingface/transformers/issues/20250
1,450,750,436
I_kwDOCUB6oc5WeLHk
20,250
All Flan-T5 models configs use the incorrect activation function
{ "login": "michaelroyzen", "id": 45830328, "node_id": "MDQ6VXNlcjQ1ODMwMzI4", "avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelroyzen", "html_url": "https://github.com/michaelroyzen", "followers_url": "https://api.github.com/users/michaelroyzen/followers", "following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}", "gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions", "organizations_url": "https://api.github.com/users/michaelroyzen/orgs", "repos_url": "https://api.github.com/users/michaelroyzen/repos", "events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelroyzen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @michaelroyzen \r\nThanks for raising this.\r\nYou are right, one should use `gated-gelu` as it is done in t5 LM-adapt checkpoints. We have updated with @ArthurZucker the config files of flan-T5 models. \r\nNote that forcing `is_gated_act` to `True` leads to using gated activation function too. The only difference between these 2 approaches is that using `gated-gelu` forces the model to use `gelu-new` activation function. See [this line](https://github.com/huggingface/transformers/blob/a00b7e85ea4b3e3185440f1f82a6b58e3660b01d/src/transformers/models/t5/configuration_t5.py#L132). `gelu-new` gives slightly different results than `gelu` but does not affect the overall performance of the model. \r\nThis also is not a breaking change from what I can see, as it affects only inference of the model. If someone has fine-tuned flan-T5 with `gelu` should not be affected by this change.\r\nClosing this issue as the config files have been fixed. Thanks for your help!\r\n\r\ncc @sgugger", "Thank you @younesbelkada. May I ask how this only affects inference? If a flan-t5 model was fine-tuned with the old checkpoint, wouldn't the wrong activations be used?", "> Note that forcing `is_gated_act` to `True` leads to using gated activation function too.\r\n\r\n@younesbelkada From this line of code, it seems that is_gated_act will be set to false without gated-relu? 
https://github.com/huggingface/transformers/blob/a00b7e85ea4b3e3185440f1f82a6b58e3660b01d/src/transformers/models/t5/configuration_t5.py#L122", "@LiJunnan1992 \r\nNo, it gets overridden by the kwargs [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/configuration_t5.py#L135), check this snippet:\r\n```\r\nfrom transformers import T5Config\r\n\r\nconfig_gated = T5Config(is_gated_act=True, hidden_act=\"gelu\")\r\nprint(config_gated.is_gated_act)\r\n>>> True\r\n\r\nconfig_gated = T5Config(hidden_act=\"gelu\")\r\nprint(config_gated.is_gated_act)\r\n>>> False\r\n\r\nconfig_gated = T5Config(feed_forward_proj=\"gated-gelu\")\r\nprint(config_gated.is_gated_act)\r\n>>> True\r\n```", "@younesbelkada I see. Thanks so much for the explanation! " ]
1,668
1,668
1,668
NONE
null
### System Info The [configs](https://huggingface.co/google/flan-t5-xxl/blob/main/config.json) for all of the Flan-T5 models say that the activation function is 'gelu' and yet 'is_gated_act' is set to true. This is an inherent contradiction. Doing more digging, I realized that per [Google's original Flan-T5 checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#t5-11-lm-adapted-checkpoints), Flan-T5 is directly instantiated from T5v1.1 LM-adapt, which all use [gated-gelu](https://huggingface.co/google/t5-xxl-lm-adapt/blob/main/config.json). ### Who can help? @younesbelkada @arthur ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Compare the T5v1.1+LM configs to the Flan-T5 configs. ### Expected behavior "feed_forward_proj" should be "gated-gelu" and "dense_act_fn" is redundant and should be removed entirely from the config.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20250/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20249/comments
https://api.github.com/repos/huggingface/transformers/issues/20249/events
https://github.com/huggingface/transformers/issues/20249
1,450,590,575
I_kwDOCUB6oc5WdkFv
20,249
Support X | Y syntax on HfArgumentParser
{ "login": "josecannete", "id": 12201153, "node_id": "MDQ6VXNlcjEyMjAxMTUz", "avatar_url": "https://avatars.githubusercontent.com/u/12201153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josecannete", "html_url": "https://github.com/josecannete", "followers_url": "https://api.github.com/users/josecannete/followers", "following_url": "https://api.github.com/users/josecannete/following{/other_user}", "gists_url": "https://api.github.com/users/josecannete/gists{/gist_id}", "starred_url": "https://api.github.com/users/josecannete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josecannete/subscriptions", "organizations_url": "https://api.github.com/users/josecannete/orgs", "repos_url": "https://api.github.com/users/josecannete/repos", "events_url": "https://api.github.com/users/josecannete/events{/privacy}", "received_events_url": "https://api.github.com/users/josecannete/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like adding support while not breaking previous Python versions will be tricky, as `from types import UnionType` only works on Python 3.10 and above. We can look at a PR if you want to try a contribution, but I don't think we will add this ourselves until Python 3.10 is more widely supported (PyTorch and TensorFlow do not support Python 3.10 for instance).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Ran into the same issue today. Any plan to support union-type annotations (`X | Y`)?\r\n\r\nNow, Python 3.10 was released 1.5 years ago. It is widely used and has become the default Python version for `conda`. Also, if users have `from __future__ import annotations` in their scripts, some automation tools, such as `pyupgrade` / `ruff`, will automatically rewrite the type annotations (`Union[X, Y] -> X | Y`, `Optional[X] -> X | None`)." ]
1,668
1,683
1,671
NONE
null
### Feature request [PEP-604](https://peps.python.org/pep-0604/) created the X | Y syntax on python 3.10, which is equivalent to Union[X, Y]. The use of this syntax is not supported by HfArgumentParser. ### Motivation With this syntax I would like to use something like: ``` @dataclass class ModelArguments: some_argument: str | None = field( default=None, metadata={"help": "some argument"}, ) ``` Instead of: ``` @dataclass class ModelArguments: some_argument: Optional[str] = field( default=None, metadata={"help": "some argument"}, ) ``` When trying to use the first one, it throws an error: ``` Traceback (most recent call last): File "/home/jcanete/new-kd/kd/train.py", line 299, in <module> main() File "/home/jcanete/new-kd/kd/train.py", line 160, in main parser = HfArgumentParser( File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/site-packages/transformers/hf_argparser.py", line 73, in __init__ self._add_dataclass_arguments(dtype) File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/site-packages/transformers/hf_argparser.py", line 178, in _add_dataclass_arguments self._parse_dataclass_field(parser, field) File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/site-packages/transformers/hf_argparser.py", line 149, in _parse_dataclass_field parser.add_argument(field_name, **kwargs) File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/argparse.py", line 1427, in add_argument raise ValueError('%r is not callable' % (type_func,)) ValueError: str | None is not callable ``` ### Your contribution Not sure if the best solution but changing [line 88 of hf_argparser.py](https://github.com/huggingface/transformers/blob/main/src/transformers/hf_argparser.py#L88) from: `if origin_type is Union:` to `if origin_type is Union or type(origin_type) is UnionType:` Does the trick on my local installation. (it also requires to add the import of: `from types import UnionType`).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20249/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20248/comments
https://api.github.com/repos/huggingface/transformers/issues/20248/events
https://github.com/huggingface/transformers/issues/20248
1,450,589,408
I_kwDOCUB6oc5Wdjzg
20,248
Sharded T5X checkpoints can't be converted to pytorch ?
{ "login": "jmhessel", "id": 178075, "node_id": "MDQ6VXNlcjE3ODA3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/178075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmhessel", "html_url": "https://github.com/jmhessel", "followers_url": "https://api.github.com/users/jmhessel/followers", "following_url": "https://api.github.com/users/jmhessel/following{/other_user}", "gists_url": "https://api.github.com/users/jmhessel/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmhessel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmhessel/subscriptions", "organizations_url": "https://api.github.com/users/jmhessel/orgs", "repos_url": "https://api.github.com/users/jmhessel/repos", "events_url": "https://api.github.com/users/jmhessel/events{/privacy}", "received_events_url": "https://api.github.com/users/jmhessel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @jmhessel Flax is in maintenance mode only, so it's very unlikely we will add support to this ourselves. Feel free to open a PR if you want however!", "gotcha, thank you! I will take a look", "Actually this is relatively important I think since the main way to pretrain T5 is via Google's T5X repo so having a good Flax->PyTorch conversion in our side I think is not super unimportant. \r\n\r\nIt's likely that more T5X checkpoints will come out and it'd be nice to be able to directly convert them to PyTorch.\r\n\r\nMaybe gently pinging the author of the conversion script: @stefan-it (in case you have any ideas) and @younesbelkada and @ArthurZucker in case you can find some time to help @jmhessel :-) ", "thanks @sgugger and @patrickvonplaten ! :-)\r\n\r\nI am not sure I will get a chance to look at this this week, but --- I will mention one quick solution suggested by @peterwestuw --- if one simply increases the shard size here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py#L950-L952\r\n\r\nthen it might be possible to save all weights in a single file and use the generic `from_pretrained` once again. Not the cleanest solution b/c it would be ideal to support sharded loading, but wanted to mention.", "This would require a machine with lots of RAM to accommodate the checkpoint.", "update: Increasing the shard size worked for me! But yes, I needed to grab a large RAM instance to do the conversion. Not a bad stopgap. Given that this solved my issue, I might not be able to look at this in the next few weeks, but I'll leave the issue open for now in case I come back to it", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.4.0-92-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - Huggingface_hub version: 0.2.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Train using t5x to get a checkpoint that's bigger than 10GB. 2. use official conversion script, e.g., ``` python transformers/models/t5/convert_t5x_checkpoint_to_flax.py --t5x_checkpoint_path checkpoint_100000/ --config_name google/t5-v1_1-xxl --flax_dump_folder_path checkpoint_converted ``` this yields, e.g., ``` $ ls checkpoint_converted config.json flax_model-00001-of-00005.msgpack flax_model-00002-of-00005.msgpack flax_model-00003-of-00005.msgpack flax_model-00004-of-00005.msgpack flax_model-00005-of-00005.msgpack flax_model.msgpack.index.json ``` 3. you can load like this: ``` tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl") flax_model = FlaxT5ForConditionalGeneration.from_pretrained("checkpoint_converted/") ``` but you can't load like this: ``` model = T5ForConditionalGeneration.from_pretrained("checkpoint_converted/", from_flax=True) ``` because of an error: ``` OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory ``` ### Expected behavior I think it would be nice to be able to load sharded flax checkpoints using the more generic class: this would be useful, e.g., for converting big flax checkpoints to pytorch. 
The machinery for loading appears to be mostly implemented, but it isn't yet connected to the `T5ForConditionalGeneration.from_pretrained` method. I can work on a PR if all this checks out, but wanted to see if there was something I was missing. Relevant parts of the code: checking for the flax sharded file type: https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/modeling_utils.py#L1997-L2029 https://github.com/huggingface/transformers/blob/a44985b41cfa2de48a5e1de7f1f93b7483da25d1/src/transformers/modeling_flax_utils.py#L659-L685 support for sharded pytorch --> flax (but not vice-versa): https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py function that loads sharded flax checkpoints: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py#L424-L468 function that might need to be modified to detect a sharded checkpoint, and then call the above: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py#L224-L239
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20248/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20247/comments
https://api.github.com/repos/huggingface/transformers/issues/20247/events
https://github.com/huggingface/transformers/issues/20247
1,450,403,008
I_kwDOCUB6oc5Wc2TA
20,247
Saving a 8-bit T5 does not work.
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Those are all arguments for inference specifically. `save_pretrained` does not work if you use `device_map=\"auto\"` if you have weights offloaded on the CPU (not sure if it's the case or not here) and it's not been tested with `load_in_8bit` either.\r\n\r\nWe will probably fix this later down the road, but for now you should not save on disk the model this way (you can find the model in your cache if you need its weights anyway)", "Thanks for the feedback, @sgugger.\r\n\r\nSo, just to be sure that I understand this: There is not really any way of easily distributing only the 8-bit model. The correct way of doing it will be to distribute the 32-bit version, and then ask people to use **load_in_8bit=True, device_map=\"auto\"** ?\r\n\r\nWhat about loading the 8-bit-version in the HuggingFace widgets. Is that even possible?", "There is no way to save and reload an 8-nit model anyway (correct me if I'm wrong @younesbelkada ) so you need to use `load_in_8bit=True, device_map=\"auto\"` indeed.", "Yes this is correct, you can also save the model in fp16 to save memory and load it back in int8, but currently saving and loading the model in 8-bit is not supported" ]
1,668
1,668
1,668
CONTRIBUTOR
null
### System Info Latest. ### Who can help? @patrickvonplaten I am able to load a T5 model in 8-bit-format using this command: `model = T5ForConditionalGeneration.from_pretrained(".",load_in_8bit=True, device_map="auto")` The model works fine. Saving the model using ".save_pretrained()" also seems to work, and the file on the disk is a lot smaller. I am able to load the save model again without errors, but then the model no longer works. Here are two sample models: north/fine_North_large north/fine_North_large_8bit ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction See description above ### Expected behavior The model should not change when being saved.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20247/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20246/comments
https://api.github.com/repos/huggingface/transformers/issues/20246/events
https://github.com/huggingface/transformers/issues/20246
1,450,402,023
I_kwDOCUB6oc5Wc2Dn
20,246
Add Galactica model
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "If the architecture is the same, it's just a matter of converting the checkpoints to the HF format and uploading them, no need to add a new model in the library :-)", "Hi, Sylvain!\r\n\r\nYes, It is the same (Even I have run the forward pass with their wrapper and the HF `OPTForCausalLM` and the output is the same. I think the ckpts are already converted (They have a config file that makes them work with HF).\r\n\r\n2 questions:\r\n\r\n1) The tokenizer is read from a single file `tokenizer.json` with the `Tokenizer.from_file()` method. How do I split it into separated files? (config, merges, vocab, etc)\r\n2) Do I upload the models with my personal account? Like this: https://huggingface.co/mrm8488/galactica-125m ?\r\n\r\nThanks!", "Not working correctly right now.", "@sgugger Can you guide us better?", "@mrm8488 would probably be best to include under the `facebook` org, some folks at HF like @patrickvonplaten should be able to upload them there", "@patrickvonplaten We need your help on this one.", "Hey guys, stoked for the integration - just one model specific thing, if user enters prompt like:\r\n\r\n\"[START_AMINO]MVATE[END_AMINO]\" -> need to tokenize so the bit inside the special tokens is character-based tokenized -> i.e M, V, A, T, E. we have the logic in https://github.com/paperswithcode/galai/blob/main/galai/model.py, see \"escape_custom_split_sequence\".", "Any update?", "> Any update?\r\n\r\nIt is WIP! Thanks! :)", "Model is fully added see: https://huggingface.co/models?other=galactica - thanks a lot @mrm8488 for you work here! Closing this :-) " ]
1,668
1,668
1,668
CONTRIBUTOR
null
### Model description Galactica is a large language model for science. Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins. Web: galactica.org Code: https://github.com/paperswithcode/galai PS: It seems to use OPT architecture so maybe we can reuse the code used for adding these model. I would like to add this model :) ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20246/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20246/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20245/comments
https://api.github.com/repos/huggingface/transformers/issues/20245/events
https://github.com/huggingface/transformers/pull/20245
1,450,399,550
PR_kwDOCUB6oc5C9hBW
20,245
Add Spanish translation of serialization.mdx
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@omarespejel or @osanseviero, can you help me review this PR? Thanks!", "@osanseviero, I addressed your comments in my last commit. Thanks for your review. I'll consider your feedback for future translations.", "@sgugger, review done :) Please merge when possible. Thanks!" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Add the Spanish translation for `serialization.mdx` as part of the #15947 issue. Changes include the Spanish version of the original document and the updated `_toctree.yml` file. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://github.com/huggingface/transformers/issues/15947#issuecomment-1312741421)**. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20245/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20245", "html_url": "https://github.com/huggingface/transformers/pull/20245", "diff_url": "https://github.com/huggingface/transformers/pull/20245.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20245.patch", "merged_at": 1669038415000 }
https://api.github.com/repos/huggingface/transformers/issues/20244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20244/comments
https://api.github.com/repos/huggingface/transformers/issues/20244/events
https://github.com/huggingface/transformers/pull/20244
1,450,349,567
PR_kwDOCUB6oc5C9WvV
20,244
Data collator for token classification pads labels column when receives pytorch tensors
{ "login": "markovalexander", "id": 22663468, "node_id": "MDQ6VXNlcjIyNjYzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/markovalexander", "html_url": "https://github.com/markovalexander", "followers_url": "https://api.github.com/users/markovalexander/followers", "following_url": "https://api.github.com/users/markovalexander/following{/other_user}", "gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}", "starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions", "organizations_url": "https://api.github.com/users/markovalexander/orgs", "repos_url": "https://api.github.com/users/markovalexander/repos", "events_url": "https://api.github.com/users/markovalexander/events{/privacy}", "received_events_url": "https://api.github.com/users/markovalexander/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20244). All of your documentation changes will be reflected on that endpoint.", "@sgugger", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20244). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes DataCollatorForTokenClassification fails when trying to collate examples with "labels" column of pytorch tensors. I faced this issue yesterday and checked that it can be solved easily, so I did not open any Issue on github. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20244/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20244/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20244", "html_url": "https://github.com/huggingface/transformers/pull/20244", "diff_url": "https://github.com/huggingface/transformers/pull/20244.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20244.patch", "merged_at": 1668619126000 }
https://api.github.com/repos/huggingface/transformers/issues/20243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20243/comments
https://api.github.com/repos/huggingface/transformers/issues/20243/events
https://github.com/huggingface/transformers/pull/20243
1,450,303,406
PR_kwDOCUB6oc5C9M2o
20,243
DataCollatorForTokenClassification pads label column when working with torch tensors
{ "login": "markovalexander", "id": 22663468, "node_id": "MDQ6VXNlcjIyNjYzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/markovalexander", "html_url": "https://github.com/markovalexander", "followers_url": "https://api.github.com/users/markovalexander/followers", "following_url": "https://api.github.com/users/markovalexander/following{/other_user}", "gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}", "starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions", "organizations_url": "https://api.github.com/users/markovalexander/orgs", "repos_url": "https://api.github.com/users/markovalexander/repos", "events_url": "https://api.github.com/users/markovalexander/events{/privacy}", "received_events_url": "https://api.github.com/users/markovalexander/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20243). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20243", "html_url": "https://github.com/huggingface/transformers/pull/20243", "diff_url": "https://github.com/huggingface/transformers/pull/20243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20243.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20242/comments
https://api.github.com/repos/huggingface/transformers/issues/20242/events
https://github.com/huggingface/transformers/pull/20242
1,450,150,409
PR_kwDOCUB6oc5C8ssh
20,242
Update reqs to include min gather_for_metrics Accelerate version
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Update all the PyTorch examples using `accelerate` and `gather_for_metrics` to include a minimum accelerate version of 0.12.0 since this introduced `gather_for_metrics` Related to https://github.com/huggingface/accelerate/issues/854 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20242/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20242", "html_url": "https://github.com/huggingface/transformers/pull/20242", "diff_url": "https://github.com/huggingface/transformers/pull/20242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20242.patch", "merged_at": 1668536881000 }
https://api.github.com/repos/huggingface/transformers/issues/20241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20241/comments
https://api.github.com/repos/huggingface/transformers/issues/20241/events
https://github.com/huggingface/transformers/pull/20241
1,450,103,422
PR_kwDOCUB6oc5C8irh
20,241
Adding doctest for `fill-mask` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20241). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Follow up of https://github.com/huggingface/transformers/pull/20226 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20241/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20241", "html_url": "https://github.com/huggingface/transformers/pull/20241", "diff_url": "https://github.com/huggingface/transformers/pull/20241.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20241.patch", "merged_at": 1668588681000 }
https://api.github.com/repos/huggingface/transformers/issues/20240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20240/comments
https://api.github.com/repos/huggingface/transformers/issues/20240/events
https://github.com/huggingface/transformers/pull/20240
1,450,088,784
PR_kwDOCUB6oc5C8fmx
20,240
Adding doctest for `feature-extraction`.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20240). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20240). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20240", "html_url": "https://github.com/huggingface/transformers/pull/20240", "diff_url": "https://github.com/huggingface/transformers/pull/20240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20240.patch", "merged_at": 1668588692000 }
https://api.github.com/repos/huggingface/transformers/issues/20239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20239/comments
https://api.github.com/repos/huggingface/transformers/issues/20239/events
https://github.com/huggingface/transformers/pull/20239
1,450,076,664
PR_kwDOCUB6oc5C8dEG
20,239
Adding doctest for document-question-answering
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20239). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Follow up https://github.com/huggingface/transformers/pull/20226 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20239/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20239", "html_url": "https://github.com/huggingface/transformers/pull/20239", "diff_url": "https://github.com/huggingface/transformers/pull/20239.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20239.patch", "merged_at": 1668588755000 }
https://api.github.com/repos/huggingface/transformers/issues/20238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20238/comments
https://api.github.com/repos/huggingface/transformers/issues/20238/events
https://github.com/huggingface/transformers/issues/20238
1,450,070,116
I_kwDOCUB6oc5WblBk
20,238
DonutProcessor token2json too slow
{ "login": "michaelnation26", "id": 14008434, "node_id": "MDQ6VXNlcjE0MDA4NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/14008434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelnation26", "html_url": "https://github.com/michaelnation26", "followers_url": "https://api.github.com/users/michaelnation26/followers", "following_url": "https://api.github.com/users/michaelnation26/following{/other_user}", "gists_url": "https://api.github.com/users/michaelnation26/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelnation26/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelnation26/subscriptions", "organizations_url": "https://api.github.com/users/michaelnation26/orgs", "repos_url": "https://api.github.com/users/michaelnation26/repos", "events_url": "https://api.github.com/users/michaelnation26/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelnation26/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThanks for looking into this and finding the bottleneck. Would you be available to contribute this to the community by opening a PR? Thanks!", "> Hi,\r\n> \r\n> Thanks for looking into this and finding the bottleneck. Would you be available to contribute this to the community by opening a PR? Thanks!\r\n\r\nYes. I just opened the [PR](https://github.com/huggingface/transformers/pull/20283). " ]
1,668
1,669
1,669
CONTRIBUTOR
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.4.0-1094-azure-x86_64-with-glibc2.2.5 - Python version: 3.8.15 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` processor = DonutProcessor.from_pretrained("nielsr/donut-base") sequence = ( "<s_name>John Doe</s_name><s_age>99</s_age><s_city>Atlanta</s_city>" "<s_state>GA</s_state><s_zip>30301</s_zip><s_phone>123-4567</s_phone>" ) # If you're not using a jupyter notebook, delete the %timeit %timeit processor.token2json(sequence) ``` The `token2json` method does not scale well with the number of xml tags. It takes ~70ms for each tag in the sequence str. If there are 10 tags, it will take ~700ms, 20 tags ~1400ms, etc. The bottleneck in `token2json` is `self.tokenizer.get_added_vocab()`. It's called once for every tag. `get_added_vocab` takes ~70ms to run each time. ### Expected behavior Since `get_added_vocab` isn't changing during the `token2json` call, `get_added_vocab` should only be called once and the results be reused. Here's one potential solution: ``` def token2json(self, tokens, is_inner_value=False, added_vocab=None): """ Convert a (generated) token sequence into an ordered JSON format. 
""" if added_vocab is None: added_vocab = self.tokenizer.get_added_vocab() output = dict() while tokens: start_token = re.search(r"<s_(.*?)>", tokens, re.IGNORECASE) if start_token is None: break key = start_token.group(1) end_token = re.search(rf"</s_{key}>", tokens, re.IGNORECASE) start_token = start_token.group() if end_token is None: tokens = tokens.replace(start_token, "") else: end_token = end_token.group() start_token_escaped = re.escape(start_token) end_token_escaped = re.escape(end_token) content = re.search(f"{start_token_escaped}(.*?){end_token_escaped}", tokens, re.IGNORECASE) if content is not None: content = content.group(1).strip() if r"<s_" in content and r"</s_" in content: # non-leaf node value = self.token2json(content, is_inner_value=True, added_vocab=added_vocab) if value: if len(value) == 1: value = value[0] output[key] = value else: # leaf nodes output[key] = [] for leaf in content.split(r"<sep/>"): leaf = leaf.strip() if leaf in added_vocab and leaf[0] == "<" and leaf[-2:] == "/>": leaf = leaf[1:-2] # for categorical special tokens output[key].append(leaf) if len(output[key]) == 1: output[key] = output[key][0] tokens = tokens[tokens.find(end_token) + len(end_token):].strip() if tokens[:6] == r"<sep/>": # non-leaf nodes return [output] + self.token2json(tokens[6:], is_inner_value=True, added_vocab=added_vocab) if len(output): return [output] if is_inner_value else output else: return [] if is_inner_value else {"text_sequence": tokens} ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20238/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20237/comments
https://api.github.com/repos/huggingface/transformers/issues/20237/events
https://github.com/huggingface/transformers/pull/20237
1,450,038,165
PR_kwDOCUB6oc5C8VEq
20,237
Adding an example for `depth-estimation` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20237). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20237). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20237", "html_url": "https://github.com/huggingface/transformers/pull/20237", "diff_url": "https://github.com/huggingface/transformers/pull/20237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20237.patch", "merged_at": 1668588765000 }
https://api.github.com/repos/huggingface/transformers/issues/20236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20236/comments
https://api.github.com/repos/huggingface/transformers/issues/20236/events
https://github.com/huggingface/transformers/pull/20236
1,450,015,984
PR_kwDOCUB6oc5C8QYs
20,236
Updating the doctest for conversational.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20236). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20236). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Follow up https://github.com/huggingface/transformers/pull/20226 - Make it tested against - Add explicit output in the test. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20236", "html_url": "https://github.com/huggingface/transformers/pull/20236", "diff_url": "https://github.com/huggingface/transformers/pull/20236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20236.patch", "merged_at": 1668588673000 }
https://api.github.com/repos/huggingface/transformers/issues/20235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20235/comments
https://api.github.com/repos/huggingface/transformers/issues/20235/events
https://github.com/huggingface/transformers/pull/20235
1,449,879,115
PR_kwDOCUB6oc5C7zfo
20,235
Adding `audio-classification` example in the doc.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Follow up of https://github.com/huggingface/transformers/pull/20226 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20235", "html_url": "https://github.com/huggingface/transformers/pull/20235", "diff_url": "https://github.com/huggingface/transformers/pull/20235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20235.patch", "merged_at": 1668588664000 }
https://api.github.com/repos/huggingface/transformers/issues/20234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20234/comments
https://api.github.com/repos/huggingface/transformers/issues/20234/events
https://github.com/huggingface/transformers/pull/20234
1,449,861,458
PR_kwDOCUB6oc5C7vrT
20,234
Fix `run_clip.py`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? The type annotation for `Transform.forward` (in `run_clip.py`) never works with any of the torch/pillow versions I tried. This forward is compiled by `torch.jit`. With the current type annotation `x: Image`, we get the error ```bash RuntimeError: Unknown type name 'Image': ``` With `x: Image.Image`, the error is ```bash NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults: File "/home/huggingface/miniconda3/envs/py38/lib/python3.8/site-packages/PIL/Image.py", line 550 def __exit__(self, *args): ~~~~~ <--- HERE if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False): if getattr(self, "_fp", False): '__torch__.PIL.Image.Image' is being compiled since it was called from 'Transform.forward' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20234/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20234", "html_url": "https://github.com/huggingface/transformers/pull/20234", "diff_url": "https://github.com/huggingface/transformers/pull/20234.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20234.patch", "merged_at": 1668523521000 }
https://api.github.com/repos/huggingface/transformers/issues/20233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20233/comments
https://api.github.com/repos/huggingface/transformers/issues/20233/events
https://github.com/huggingface/transformers/pull/20233
1,449,813,908
PR_kwDOCUB6oc5C7lU3
20,233
Fix docstring of CLIPTokenizer(Fast)
{ "login": "TilmannR", "id": 11174702, "node_id": "MDQ6VXNlcjExMTc0NzAy", "avatar_url": "https://avatars.githubusercontent.com/u/11174702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TilmannR", "html_url": "https://github.com/TilmannR", "followers_url": "https://api.github.com/users/TilmannR/followers", "following_url": "https://api.github.com/users/TilmannR/following{/other_user}", "gists_url": "https://api.github.com/users/TilmannR/gists{/gist_id}", "starred_url": "https://api.github.com/users/TilmannR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TilmannR/subscriptions", "organizations_url": "https://api.github.com/users/TilmannR/orgs", "repos_url": "https://api.github.com/users/TilmannR/repos", "events_url": "https://api.github.com/users/TilmannR/events{/privacy}", "received_events_url": "https://api.github.com/users/TilmannR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20233). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? The docstrings of `CLIPTokenizer` and `CLIPTokenizerFast` had the wrong default value for `bos_token` (`<|endoftext|>` instead of `<|startoftext|>`). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? - Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20233/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20233", "html_url": "https://github.com/huggingface/transformers/pull/20233", "diff_url": "https://github.com/huggingface/transformers/pull/20233.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20233.patch", "merged_at": 1668524416000 }
https://api.github.com/repos/huggingface/transformers/issues/20232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20232/comments
https://api.github.com/repos/huggingface/transformers/issues/20232/events
https://github.com/huggingface/transformers/pull/20232
1,449,757,129
PR_kwDOCUB6oc5C7ZB5
20,232
Slightly alter Keras dummy loss
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@gante Can you take one more look at the expanded tests?\r\n\r\nAlso, the failing test looks like flakiness with a too-low threshold to me, unrelated to this PR" ]
1,668
1,668
1,668
MEMBER
null
The existing `dummy_loss` returns a scalar, but this is incorrect - Keras expects loss functions to return a vector of per-sample losses. This only causes issues when the user passes `sample_weight` to `fit()` - I'll probably adjust our tests to include that to make sure we don't regress on this in future.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20232/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20232", "html_url": "https://github.com/huggingface/transformers/pull/20232", "diff_url": "https://github.com/huggingface/transformers/pull/20232.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20232.patch", "merged_at": 1668531523000 }
https://api.github.com/repos/huggingface/transformers/issues/20231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20231/comments
https://api.github.com/repos/huggingface/transformers/issues/20231/events
https://github.com/huggingface/transformers/pull/20231
1,449,625,057
PR_kwDOCUB6oc5C68WN
20,231
TF: add test for `PushToHubCallback`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20231). All of your documentation changes will be reflected on that endpoint.", "Could you ensure it works with `huggingface_hub` version `v0.11.0rc1`? I think there was an issue with this callback.\r\n\r\ncc @Wauplin ", "To be more precise, it should already work with `v0.11.0rc1` (was not the case with `v0.11.0rc0`) BUT a warning message should be triggered \"Creating a repository through 'clone_from' is deprecated\". To prevent this warning, `PushToHubCallback` should create the repo before cloning it. Something like:\r\n```py\r\ncreate_repo(repo_id, exists_ok=True)\r\n\r\nrepo = Repository(clone_from=repo_id)\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20231). All of your documentation changes will be reflected on that endpoint.", "All tests are now passing with `HUGGINGFACE_CO_STAGING=1 py.test tests/test_modeling_tf_common.py::TFModelPushToHubTester -vv`, tested against version `0.10.1` of the hub.\r\n\r\n@Wauplin I've factored in the `create_repo()` before cloning the repo 👍 \r\n\r\n@Rocketknight1 it was indeed failing because of the lack of `compile()`. After the fix, I ran against a problem where the last epoch was not being stored (because of the process kill). Let me know if you agree with the changes in `PushToHubCallback`", "Nice, thanks for making the change @gante !" ]
1,668
1,668
1,668
MEMBER
null
# What does this PR do? Adds a test to TF's `PushToHubCallback`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20231/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20231", "html_url": "https://github.com/huggingface/transformers/pull/20231", "diff_url": "https://github.com/huggingface/transformers/pull/20231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20231.patch", "merged_at": 1668688424000 }
https://api.github.com/repos/huggingface/transformers/issues/20230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20230/comments
https://api.github.com/repos/huggingface/transformers/issues/20230/events
https://github.com/huggingface/transformers/pull/20230
1,449,603,946
PR_kwDOCUB6oc5C63zR
20,230
[ASR Examples] Update README for Whisper
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Summarises changes from https://github.com/huggingface/transformers/pull/19519 and the blog post at https://huggingface.co/blog/fine-tune-whisper, providing examples for fine-tuning Whisper using the example script `run_speech_recognition_seq2seq.py`. Required some re-jigging of the Speech-Encoder-Decoder Model examples in the section "[Automatic Speech Recognition with Sequence-to-Sequence](https://github.com/huggingface/transformers/pull/20230#sequence-to-sequence)".
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20230/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20230", "html_url": "https://github.com/huggingface/transformers/pull/20230", "diff_url": "https://github.com/huggingface/transformers/pull/20230.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20230.patch", "merged_at": 1668770666000 }
https://api.github.com/repos/huggingface/transformers/issues/20229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20229/comments
https://api.github.com/repos/huggingface/transformers/issues/20229/events
https://github.com/huggingface/transformers/pull/20229
1,449,577,437
PR_kwDOCUB6oc5C6x-E
20,229
Add AutoBackbone + ResNetBackbone
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "> My main question is about how the backbone is loaded and saved alongside our current models. For example, if the backbone has non-default settings - where is this information saved? Is it part of the model e.g. DETR config?\r\n\r\nA backbone can be loaded and saved just like any other model in the library (due to the inheritance of `PreTrainedModel`), either using a config to initialize the backbone with randomly initialized weights or using the `from_pretrained` method to load pre-trained weights.\r\n\r\n> Can we assume the backbone is always frozen, or are there cases when people might want to fine-tune it? In this case, how would the backbone weights be saved out?\r\n\r\nThe backbone is always meant to be fine-tuned together with the backbone, I've not seen a case where the backbone is kept frozen for now, but we could definitely add a `freeze` method in case people want to do that.", "@sgugger the remaining CI issue is about the fact that AutoBackbone and ResNetBackbone are not documented, however I'd like to actually keep them away from the docs for now. However I need to keep ResNetBackbone in the main init for it to work with the Auto API.", "@NielsRogge Thanks for your answers. The backbone can definitely be saved with `save_pretrained`, my question is really about how that's bundled together with a model. For example, if I wanted a new model, with a new-non-standard backbone - is this how I would create it? \r\n\r\n```\r\nfrom transformers import AutoConfig, AutoBackbone, AutoModelForXXX\r\n\r\n# Modify the default backbone configuration\r\nbackbone_config = AutoConfig(backbone_repo)\r\nbackbone_config.param = new_value_0\r\n\r\n# Modify the default model configuration\r\nmodel_config = AutoConfig(model_repo)\r\nmodel_config.param = new_value_1\r\n\r\nmodel = AutoModelForXXX(model_config, backbone_config)\r\n```\r\n\r\nOr is the backbone configuration part of the model configuration?\r\n\r\n```\r\nfrom transformers import AutoConfig\r\n\r\nmodel_config = AutoConfig(model_repo)\r\nmodel_config[\"backbone\"][\"param\"] = new_value_0\r\nmodel_config.param = new_value_1\r\n\r\nmodel = AutoModel(model_config)\r\n```\r\n\r\nor something else?", "@NielsRogge You can add them to [this list](https://github.com/huggingface/transformers/blob/0d0d77693f79c7f7d39bba6921cc9741f00de988/utils/check_repo.py#L665) for now.", "The way frameworks like DETR and MaskFormer can use a backbone can be as follows. In their configuration, e.g. `MaskFormerConfig`, they should have a `backbone_config` attribute, which can be set to ResNetConfig for instance, like so:\r\n\r\n```\r\nfrom transformers import ResNetConfig, MaskFormerConfig\r\n\r\nbackbone_config = ResNetConfig(hidden_sizes=...)\r\nconfig = MaskFormerConfig(backbone_config=backbone_config)\r\n```\r\nSo yes, the backbone configuration is part of the model configuration.\r\n\r\nThen, inside modeling_maskformer.py, one can instantiate the backbone like so:\r\n\r\n```\r\nfrom transformers import AutoBackbone\r\n\r\nself.backbone = AutoBackbone.from_config(config.backbone_config)\r\n```", "@NielsRogge Great - thanks for clarifying :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "@sgugger @amyeroberts feel free to approve :) ", "This has been fixed in https://github.com/NielsRogge/transformers/commit/1863e8ff5dcaf29333d5bd08f266a562d36a8ee6, have marked the conversation as resolved", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.", "Failing tests are unrelated (there seems to be an issue with the cache of MarkupLM and mBART50 on the CI), merging." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? As #20204 is a big PR, this PR adds a part of it as a standalone PR. This PR adds the AutoBackbone class, along with one example class that it supports, namely ResNetBackbone. ## Usage Usage is as follows: ``` from transformers import AutoImageProcessor, AutoBackbone import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50") model = AutoBackbone.from_pretrained("microsoft/resnet-50", out_features=["stage1", "stage2", "stage3", "stage4"]) inputs = processor(image, return_tensors="pt") outputs = model(**inputs) for k,v in zip(outputs.stage_names, outputs.hidden_states): print(k, v.shape) ``` which prints: ``` stage1 torch.Size([1, 256, 56, 56]) stage2 torch.Size([1, 512, 28, 28]) stage3 torch.Size([1, 1024, 14, 14]) stage4 torch.Size([1, 2048, 7, 7]) ``` Besides this, one can also obtain information about the channel dimension and stride for each of the requested stages, like so: ``` print(model.channels) print(model.strides) ``` This is handy as frameworks (like MaskFormer) need to know this information at initialization time. ## To do's/questions - [ ] We don't want `xxxBackbone` classes to be tested by all tests defined in `test_modeling_common.py` (i.e. it should probably not be part of `all_model_classes`), hence I added the class to IGNORE_NON_TESTED, and added a separate test for it. Let me know if this is ok. - [ ] It would probably be best to not have our backbones included in the documentation from the start. For now they are just an internal part of models like DETR and MaskFormer. Could we not include them in the docs for now? Currently I'm getting: ``` Exception: The following objects are in the public init so should be documented: - AutoBackbone - ResNetBackbone ``` An alternative option could be to add backbones to the list of PRIVATE_MODELS in utils/check_repo.py for now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20229/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20229", "html_url": "https://github.com/huggingface/transformers/pull/20229", "diff_url": "https://github.com/huggingface/transformers/pull/20229.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20229.patch", "merged_at": 1668696200000 }
https://api.github.com/repos/huggingface/transformers/issues/20228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20228/comments
https://api.github.com/repos/huggingface/transformers/issues/20228/events
https://github.com/huggingface/transformers/pull/20228
1,449,571,785
PR_kwDOCUB6oc5C6wv-
20,228
Remove `authorized_missing_keys` in favor of _keys_to_ignore_on_load_missing
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20228). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Just cleans up a bug/typo discovered. `_keys_to_ignore_on_load_missing` is supported but not `authorized_missing_keys`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20228/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20228", "html_url": "https://github.com/huggingface/transformers/pull/20228", "diff_url": "https://github.com/huggingface/transformers/pull/20228.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20228.patch", "merged_at": 1668521561000 }
https://api.github.com/repos/huggingface/transformers/issues/20227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20227/comments
https://api.github.com/repos/huggingface/transformers/issues/20227/events
https://github.com/huggingface/transformers/pull/20227
1,449,516,247
PR_kwDOCUB6oc5C6ksT
20,227
Fix Tapas/Scatter device issue
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Fix Tapas/Scatter device issue
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20227/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20227", "html_url": "https://github.com/huggingface/transformers/pull/20227", "diff_url": "https://github.com/huggingface/transformers/pull/20227.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20227.patch", "merged_at": 1668522076000 }
https://api.github.com/repos/huggingface/transformers/issues/20226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20226/comments
https://api.github.com/repos/huggingface/transformers/issues/20226/events
https://github.com/huggingface/transformers/pull/20226
1,449,406,503
PR_kwDOCUB6oc5C6NKz
20,226
Adding ASR pipeline example.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This is the first PR of a series that will aim to add examples for ALL pipelines. The example uses `pipeline` instead of the documented `AutomaticSpeechRecognitionPipeline` intentionally. It break my personal idea of documenting really what we're documenting, but it makes the example a lot more simple and enables to link to the pipeline tutorial for the "orthogonal" arguments (notably `batch_size`.) https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline.example <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20226/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20226", "html_url": "https://github.com/huggingface/transformers/pull/20226", "diff_url": "https://github.com/huggingface/transformers/pull/20226.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20226.patch", "merged_at": 1668588705000 }
https://api.github.com/repos/huggingface/transformers/issues/20225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20225/comments
https://api.github.com/repos/huggingface/transformers/issues/20225/events
https://github.com/huggingface/transformers/issues/20225
1,449,404,412
I_kwDOCUB6oc5WZCf8
20,225
Whisper: timestamp tokens are missing in the tokenizer vocabulary
{ "login": "guillaumekln", "id": 4805513, "node_id": "MDQ6VXNlcjQ4MDU1MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4805513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaumekln", "html_url": "https://github.com/guillaumekln", "followers_url": "https://api.github.com/users/guillaumekln/followers", "following_url": "https://api.github.com/users/guillaumekln/following{/other_user}", "gists_url": "https://api.github.com/users/guillaumekln/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaumekln/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaumekln/subscriptions", "organizations_url": "https://api.github.com/users/guillaumekln/orgs", "repos_url": "https://api.github.com/users/guillaumekln/repos", "events_url": "https://api.github.com/users/guillaumekln/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaumekln/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! Though I agree with you on the fact that normally the tokenizer vocab size is the same as the model's, in this case, the original model was similar. The `timestamp` tokens are all outside vocabulary and decoded as `\"\"` with the fast `GPT2FastTokenizer` in the original code. The `WhisperTokenizer` was adapted to follow this, in order to not bother with the tokens that are only used with the `timestamp_logits_processor`. Indeed, all the extra tokens (>50363) are treated as timestamps prediction and \"ignored\". \r\n\r\ncc @LysandreJik as we had a lot of issue with other models, there were discussion on whether to always add the extra tokens or not? \r\n", "Thank you for the explanation! Feel free to close this issue if you want to keep it this way. I can work around it in my own code.\r\n\r\nFor the context, I'm converting some Transformers models to another format and I want to always match the tokenizer vocabulary size to the model vocabulary size. In many cases I need to add some tokens (most often the \"madeupword\" to pad the vocabulary to a multiple of 8) and sometimes I need to remove some (e.g. for `facebook/bart-large-cnn` the tokenizer has 1 additional token for some reasons). It would be great if `len(tokenizer.get_vocab())` is always consistent with the model vocabulary size.", "Since the original OpenAI implementation moved from HF tokenizers to their own Tiktoken library, it seems timestamp tokens are now handled and converted to token ids. 
Right now the timestamps tokens in HF are handled as strings instead of individual tokens.\r\n\r\n```python\r\nfrom transformers import WhisperTokenizer\r\nfrom whisper.tokenizer import get_tokenizer\r\n\r\nhf_tok = WhisperTokenizer.from_pretrained(\"openai/whisper-tiny\")\r\nopenai_tok = get_tokenizer(multilingual=True, language=\"en\", task=\"transcribe\")\r\n\r\nopenai_tok.encode(\"<|1.00|>\", disallowed_special=[])\r\n# [27, 91, 16, 13, 628, 91, 29]\r\nhf_tok.encode(\"<|1.00|>\", add_special_tokens=False)\r\n# [27, 91, 16, 13, 628, 91, 29]\r\n\r\nopenai_tok.encode(\"<|1.00|>\", allowed_special=set(openai_tok.special_tokens.keys()))\r\n# [50414]\r\nhf_tok.encode(\"<|1.00|>\", add_special_tokens=True)\r\n# [50258, 50363, 27, 91, 16, 13, 628, 91, 29, 50257]\r\n```\r\n\r\nCould it be the time to revisit this issue?", "Nope, we also added support for decoding with timestamps. For that you just need to specify the `decode_with_timestamps` see [here](https://github.com/ArthurZucker/transformers/blob/4a9e17b0ab3f35f9be92b82e7f783d043c0ca161/src/transformers/models/whisper/tokenization_whisper.py#L493)", "Yeah, but if you want to train using the right timestamps tokens, there's no support for that AFAIK. We had to add the tokens manually. The encoding function is a bit more convoluted to modify to support encoding of the timestamps tokens with a flag like it's now implemented for decoding.", "Then we should probably add them to `added_tokens_encoder` and refactor a bit the tokenizer for encoding decoding wdyt @sanchit-gandhi @hollance ", "Yep I agree - took a look through and @versae is spot on, the new OpenAI tokenizer has these tokens as part of their tokenizer, so they can be encoded directly. We should follow suit and update our encoding function accordingly", "Also if they are part of the special tokens, they will not be skipped by default, and would have to be skipped using `skip_special_tokens = True`. But should be alright! 
@versae feel free to open a PR and ping me if you have time, otherwise I might be able to tackle that in 2 weeks", "Hey! Sorry, haven't had the time to properly implement this, but I can confirm than using `AddedToken`s works well 👌", "I'll open a PR, I am not entirely sure just using added tokens will solve this. We need backward compatibility so I'll add a new argument like `encoder_special`. Will see", "Ok! So it seems that `skip_special_tokens` when encoding will make it's way to transformers 😉 \r\n", "Keep us posted!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Skip special tokens was merged in #25081 so closing this now" ]
1,668
1,692
1,692
CONTRIBUTOR
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The vocabulary size returned by the `WhisperTokenizer` does not match the vocabulary size reported in the configuration `config.vocab_size`. The timestamp tokens are missing in the tokenizer vocabulary. Consider this example: ```python import transformers tokenizer = transformers.WhisperTokenizer.from_pretrained("openai/whisper-tiny") config = transformers.WhisperConfig.from_pretrained("openai/whisper-tiny") vocab = tokenizer.get_vocab() print(len(vocab) == config.vocab_size) # prints False for i in range(1500 + 1): timestamp = "<|%.2f|>" % (i * 0.02) vocab[timestamp] = len(vocab) print(len(vocab) == config.vocab_size) # prints True ``` The token surface used in the code snippet is copied from the reference implementation: https://github.com/openai/whisper/blob/9f70a352f9f8630ab3aa0d06af5cb9532bd8c21d/whisper/tokenizer.py#L151 ### Expected behavior The vocabulary size returned by the tokenizer should match the model vocabulary size.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20225/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20224/comments
https://api.github.com/repos/huggingface/transformers/issues/20224/events
https://github.com/huggingface/transformers/issues/20224
1,449,346,442
I_kwDOCUB6oc5WY0WK
20,224
Question about layer norm in T5
{ "login": "FrostML", "id": 7160927, "node_id": "MDQ6VXNlcjcxNjA5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/7160927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrostML", "html_url": "https://github.com/FrostML", "followers_url": "https://api.github.com/users/FrostML/followers", "following_url": "https://api.github.com/users/FrostML/following{/other_user}", "gists_url": "https://api.github.com/users/FrostML/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrostML/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrostML/subscriptions", "organizations_url": "https://api.github.com/users/FrostML/orgs", "repos_url": "https://api.github.com/users/FrostML/repos", "events_url": "https://api.github.com/users/FrostML/events{/privacy}", "received_events_url": "https://api.github.com/users/FrostML/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.", "Sure. have created this topic [question-about-layer-norm-in-t5](https://discuss.huggingface.co/t/question-about-layer-norm-in-t5/26142). I'd close this issue. " ]
1,668
1,668
1,668
NONE
null
I notice that there is no bias and no subtraction of mean in layer norm. I understand no bias but I'm confused about the meaning of computing variance without subtraction of mean. Normally, we compute variance, for example: <img width="491" alt="image" src="https://user-images.githubusercontent.com/7160927/201863288-bcb63739-0f74-4487-bbb8-91299dad2eef.png"> But it's different here. Why is that? Hoping some explanation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20224/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20223/comments
https://api.github.com/repos/huggingface/transformers/issues/20223/events
https://github.com/huggingface/transformers/issues/20223
1,449,333,554
I_kwDOCUB6oc5WYxMy
20,223
Yolo model trigger win10/11 reboot on Intel 13900k
{ "login": "Michael-YongWang", "id": 12209153, "node_id": "MDQ6VXNlcjEyMjA5MTUz", "avatar_url": "https://avatars.githubusercontent.com/u/12209153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Michael-YongWang", "html_url": "https://github.com/Michael-YongWang", "followers_url": "https://api.github.com/users/Michael-YongWang/followers", "following_url": "https://api.github.com/users/Michael-YongWang/following{/other_user}", "gists_url": "https://api.github.com/users/Michael-YongWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/Michael-YongWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Michael-YongWang/subscriptions", "organizations_url": "https://api.github.com/users/Michael-YongWang/orgs", "repos_url": "https://api.github.com/users/Michael-YongWang/repos", "events_url": "https://api.github.com/users/Michael-YongWang/events{/privacy}", "received_events_url": "https://api.github.com/users/Michael-YongWang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "How fast does it work with transformers?\r\n13900k compared to gpu?" ]
1,668
1,675
1,671
NONE
null
### System Info All Yolos models/pipeline case my pc reboot immediately on windows 10 and 11 system ( WSL in win11 and Ubuntu(not virtual) work well on the same machine) 1. I reinstalled the os system and tested (both win10 and win11) two times and make sure all the drivers are up to date. The results are the same. (I test the system with Aida64 system stability test, all ok) 2. I tried Ubuntu(not virtual) and WSL in windows 11, both works ok. 3. I tried other models, such as Bert, all works fine. ``` python from transformers import YolosFeatureExtractor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny') inputs = feature_extractor(images=image, return_tensors="pt") for _ in range(100): model(**inputs) # this line causes the PC reboot. (usually after several loops) ``` Hardware info: CPU: Intel 13900k Motherboard: ASUS Z690-G, bios 2103 GPU: Intel 13900k Software info: Conda env latest Python 3.10.4 transformers 4.24.0 windows 10 and windows 11 My guess it is caused by the incompatible of the hardware. Could you please help to take a look? ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction Use python 3.10 on windows 10/11 with Intel 13900k run the code below ``` python from transformers import YolosFeatureExtractor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny') # this line cases the PC reboot. inputs = feature_extractor(images=image, return_tensors="pt") for _ in range(100): model(**inputs) # this line causes the PC reboot. (usually after several loops) ``` ### Expected behavior The pc reboot immediately after several loop
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20223/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20222/comments
https://api.github.com/repos/huggingface/transformers/issues/20222/events
https://github.com/huggingface/transformers/pull/20222
1,449,331,638
PR_kwDOCUB6oc5C59SW
20,222
Quick fix for `test_torch_fx` under torch 1.13
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20222). All of your documentation changes will be reflected on that endpoint.", "Apart from the exceptional case where we don't want to use the prod server for testing, I am very much against using env variable/constants that change for testing. Why would the test fail for the meta device but in real life the code would work? It doesn't make any sense.", "Well, using `meta` with `torch 1.13` gives random values, which fail some **input checks** in some modeling files, like the one at the end.\r\n\r\nIf this is the intended behavior of `meta` (i.e. if the behavior in PT < 1.13 was wrong - but we took advantage of it), then moving away from `meta` seems the way to me.\r\n\r\nIf you still think we should stick with `meta`, I will leave @michaelbenayoun to take care of the related test though :-)\r\n\r\n#### More details\r\n\r\n(notice that here we don't provide any input tensor)\r\n```ptyon\r\ntraced_model = symbolic_trace(model, input_names)\r\n```\r\nDuring this call, in `fx.py`, there are tensor preparation, which use `meta` and it gives random values which fail the check in some modeling files\r\n\r\n(bart)\r\n```python\r\n eos_mask = input_ids.eq(self.config.eos_token_id)\r\n\r\n if len(torch.unique_consecutive(eos_mask.sum(1))) > 1:\r\n raise ValueError(\"All examples must have the same number of <eos> tokens.\")\r\n```", "close for now, unless one day we need this." ]
1,668
1,675
1,668
COLLABORATOR
null
# What does this PR do? Quick fix for `test_torch_fx` under torch 1.13. Since `torch 1.13`, using `meta` device gives some noisy tensors, which breaks `test_torch_fx`. @michaelbenayoun is more qualified than me to explain this more clearly. This PR changes the device to "cpu" if `fx.py` is used in the testing. running all `test_torch_fx_*` tests: - with "**meta**": 1m33s - with "**cpu**": 1m45s So the speed-up of using `meta` is tiny , and we can just use `cpu`. However, **Questions**: - It looks like `fx.py` is only used in the testing (at least in `transformers`)? - Is this module in `optimum`? - If so: - is it used only in testing? - is it used for large real models?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20222/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20222", "html_url": "https://github.com/huggingface/transformers/pull/20222", "diff_url": "https://github.com/huggingface/transformers/pull/20222.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20222.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20221/comments
https://api.github.com/repos/huggingface/transformers/issues/20221/events
https://github.com/huggingface/transformers/issues/20221
1,449,243,594
I_kwDOCUB6oc5WYbPK
20,221
Unable to use BERTmodel
{ "login": "ashlytom", "id": 65658907, "node_id": "MDQ6VXNlcjY1NjU4OTA3", "avatar_url": "https://avatars.githubusercontent.com/u/65658907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashlytom", "html_url": "https://github.com/ashlytom", "followers_url": "https://api.github.com/users/ashlytom/followers", "following_url": "https://api.github.com/users/ashlytom/following{/other_user}", "gists_url": "https://api.github.com/users/ashlytom/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashlytom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashlytom/subscriptions", "organizations_url": "https://api.github.com/users/ashlytom/orgs", "repos_url": "https://api.github.com/users/ashlytom/repos", "events_url": "https://api.github.com/users/ashlytom/events{/privacy}", "received_events_url": "https://api.github.com/users/ashlytom/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Are you behind some firewall? It seems like you can't connect hugging face. If you are able to freely access internet with proxy you could yous this snippet:\r\n\r\n```python3\r\nimport os\r\nos.environ['HTTP_PROXY'] = 'http://proxy_url:proxy_port'\r\nos.environ['HTTPS_PROXY'] = 'http://proxy_url:proxy_port'\r\n```\r\n\r\notherwise you can manually download model and load it from disk", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### System Info Transformer version = 4.24.0 Python = 3.8 Error: SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-large-uncased/resolve/main/config.json (Caused by SSLError(SSLError(136, '[X509: NO_CERTIFICATE_OR_CRL_FOUND] no certificate or crl found (_ssl.c:4264)'))) @LysandreJik ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from summarizer import Summarizer body = 'Text body that you want to summarize with BERT' body2 = 'Something else you want to summarize with BERT' model = Summarizer() model(body) model(body2) ### Expected behavior Load the summaries
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20221/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20220/comments
https://api.github.com/repos/huggingface/transformers/issues/20220/events
https://github.com/huggingface/transformers/pull/20220
1,449,001,207
PR_kwDOCUB6oc5C43A0
20,220
Fixing Spelling Error in Testing Documentation - Issue #20194
{ "login": "kasmith11", "id": 29484286, "node_id": "MDQ6VXNlcjI5NDg0Mjg2", "avatar_url": "https://avatars.githubusercontent.com/u/29484286?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kasmith11", "html_url": "https://github.com/kasmith11", "followers_url": "https://api.github.com/users/kasmith11/followers", "following_url": "https://api.github.com/users/kasmith11/following{/other_user}", "gists_url": "https://api.github.com/users/kasmith11/gists{/gist_id}", "starred_url": "https://api.github.com/users/kasmith11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kasmith11/subscriptions", "organizations_url": "https://api.github.com/users/kasmith11/orgs", "repos_url": "https://api.github.com/users/kasmith11/repos", "events_url": "https://api.github.com/users/kasmith11/events{/privacy}", "received_events_url": "https://api.github.com/users/kasmith11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20220). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR fixes a spelling error in the testing documentation. Changes "checkt" to "check". Fixes #20194 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20220", "html_url": "https://github.com/huggingface/transformers/pull/20220", "diff_url": "https://github.com/huggingface/transformers/pull/20220.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20220.patch", "merged_at": 1668523207000 }
https://api.github.com/repos/huggingface/transformers/issues/20219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20219/comments
https://api.github.com/repos/huggingface/transformers/issues/20219/events
https://github.com/huggingface/transformers/pull/20219
1,448,886,050
PR_kwDOCUB6oc5C4ekj
20,219
Add Neighborhood Attention Transformer (NAT) and Dilated NAT (DiNAT) models
{ "login": "alihassanijr", "id": 68103095, "node_id": "MDQ6VXNlcjY4MTAzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/68103095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alihassanijr", "html_url": "https://github.com/alihassanijr", "followers_url": "https://api.github.com/users/alihassanijr/followers", "following_url": "https://api.github.com/users/alihassanijr/following{/other_user}", "gists_url": "https://api.github.com/users/alihassanijr/gists{/gist_id}", "starred_url": "https://api.github.com/users/alihassanijr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alihassanijr/subscriptions", "organizations_url": "https://api.github.com/users/alihassanijr/orgs", "repos_url": "https://api.github.com/users/alihassanijr/repos", "events_url": "https://api.github.com/users/alihassanijr/events{/privacy}", "received_events_url": "https://api.github.com/users/alihassanijr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.", "In this instance, I'd actually prefer to rely on the extra dep (as long as it's properly set up as a soft dependency, which seems to be the case in the PR). We don't know how to maintain CUDA kernels anyway, so support will be a lot better if it's done elsewhere.", "Hi @NielsRogge @sgugger \r\nActually we're happy to do it either way, but the reason we packaged NATTEN as a pip package in the first place is to make installation easier, especially since we plan to upgrade it frequently.\r\nUnlike Deformable Attention's extension, NATTEN it doesn't come with a fixed set of kernels. There's still improvements that we've planned ahead to NATTEN, especially adding new kernels to optimize latency. \r\nAnd just to confirm @sgugger 's comment, maintaining all the kernels in NATTEN might increase your wheel sizes, which I'm not sure if you want to do. The cpu-only wheels aren't too bad, but the ones with cuda wheels are up to 50MB.\r\nAnd as far as testing CUDA kernels go, you'd need to have unit tests to check the backwards functions (gradcheck), and running those for all different use cases that call different kernels is just really time consuming (and we only pull it off by running it on 80GB A100 GPUs; it's so memory-intensive).\r\n\r\nAnd yes, as @sgugger stated, it would work as a soft dependency; even imports aren't broken. But there's dummy calls to the package in case it's not available, that will raise an error only when the forward functions are called.\r\n\r\nAs for the torch tests, I only did those as a suggestion. I would personally recommend having a separate test for these models in general so that it doesn't get in the way. Additionally, knowing the torch build beforehand is better, since that way we can just specify a wheel URL and have it just install a lot faster.\r\n\r\nI'll make the changes to the docs and run fix-copies again.\r\n\r\nAnd yes, both models were cloned off of Swin; the architectures are somewhat similar.\r\nThe difference here is that there's a convolutional tokenizer and downsampler replacing the patch+embed and patch merging layers; and we like to keep tensors in shape B H W C, since NATTEN also expects the height and width axes to be unrolled, like a Conv2d. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.", "Actually, I just noticed `transformers` doesn't come with wheels, right?\r\nMy previous statement about wheel sizes is irrelevant in that case.\r\n\r\nHowever, I would shift more towards @sgugger 's point of view, since loading torch extensions at runtime becomes less and less reliable as extensions grow, and NATTEN already has twice the number of kernels compared to MSDeformAttn (excluding the templating that goes on in NATTEN). This would have the users wait up to 5 minutes before being able to use these models, and would affect reproducibility (because the torch build's cuda version doesn't necessarily match the system's, or the expected one for that matter).\r\n\r\nFWIW, I've definitely seen libraries take one of three approaches: \r\n* either adding a C backend to their package for all custom operations and build wheels (`detectron2`, `mmcv`); \r\n* or just having soft dependencies to pip packages that already do that to avoid the hassle. It also doesn't create new issues with upgrades to CUDA or torch (which depending on their usage can break things) \r\n* And of course there's still the option of lazy loading (the way MSDeformAttn is handled right now), which is honestly a great alternative to both, but only as long as the kernels aren't being updated and compile time is relatively low.\r\n\r\n", "@sgugger All done.\r\nAnd sorry I didn't go with directly applying your changes, some of them needed a few more replacements so I figured I'd make the changes and commit all of them at once.", "Thanks! You'll need a rebase on main to get the tests to pass as TensorFlow new release breaks the CI", "Thanks for the reviews and feedback @sgugger @NielsRogge @amyeroberts .\r\nLooking forward to contributing more in the future." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR adds [NAT](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer/blob/main/NAT.md) and [DiNAT](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer/blob/main/DiNAT.md) and their dependencies. ## Dependencies - [NATTEN](https://github.com/SHI-Labs/NATTEN/) is the only new requirement. The models themselves are mostly in the same style as most `timm` models. They just require NATTEN to get the sliding window attention. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? - Yes, mostly boilerplate from similar models. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten @NielsRogge @amyeroberts @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20219/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20219", "html_url": "https://github.com/huggingface/transformers/pull/20219", "diff_url": "https://github.com/huggingface/transformers/pull/20219.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20219.patch", "merged_at": 1668794907000 }
https://api.github.com/repos/huggingface/transformers/issues/20218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20218/comments
https://api.github.com/repos/huggingface/transformers/issues/20218/events
https://github.com/huggingface/transformers/pull/20218
1,448,654,888
PR_kwDOCUB6oc5C3rju
20,218
Generate: add generation config class
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20218). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20218). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20218). All of your documentation changes will be reflected on that endpoint.", "@sgugger @patrickvonplaten all comments addressed. There is a doctest example that loads our first `generation_config.json`, [from gpt-2](https://huggingface.co/gpt2/blob/main/generation_config.json) 🙌 \r\n\r\nI took the liberty to add a few goodies, like making it a public class (to allow `from transformers import GenerationConfig`), adding a `__repr__` (for a preview) and sections in the `__init__` docstring (I think users will appreciate)." ]
1,668
1,669
1,669
MEMBER
null
# What does this PR do? Adds the class that handles generation configurations. From this class, we can then start working on the interaction with other parts of the Hugging Face 🤗 ecosystem. See https://github.com/huggingface/transformers/issues/18655 for the original plan, which matches this implementation. Throughout the PR, there are a few very visible comments where your input would be appreciated to establish solid base functionality. The expected follow-up tasks are: 1. We can load a Generation Config from a Model Config (to make it compatible with all existing repos) 2. `.generate()` inherits default values from generate config, instead of from model config. Add an argument to pass a generation configuration. 3. Add doctests for loading, storing, and using with `.generate()` 4. The top 100 models with generate compatibility get a PR with the corresponding generation confg file(s). At least 1 has multiple generation config files, to be used as an example. 5. (bonus) Generate config validates its attributes at initialization time 6. (bonus) Pipelines make automatic use of the appropriate generation config file 7. (bonus) Ensure that advanced functionality (e.g. models like RAG and Trainer) operates well with the generation config file
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20218/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20218", "html_url": "https://github.com/huggingface/transformers/pull/20218", "diff_url": "https://github.com/huggingface/transformers/pull/20218.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20218.patch", "merged_at": 1669037416000 }
https://api.github.com/repos/huggingface/transformers/issues/20217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20217/comments
https://api.github.com/repos/huggingface/transformers/issues/20217/events
https://github.com/huggingface/transformers/pull/20217
1,448,560,195
PR_kwDOCUB6oc5C3XDC
20,217
remaining pytorch type hints
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.", "@Rocketknight1 There is only 1 left: EsmForProteinFolding. when I tested test_modeling_esmfold.py before doing any changes to the file some tests where initially getting failed.\r\n\r\n![Capture](https://user-images.githubusercontent.com/88665786/202189778-224a16b0-4926-4adc-9fef-c9b87ca1b161.PNG)\r\n", "@IMvision12 Those tests shouldn't be failing anymore, but don't worry - I'm happy to accept all the other changes!" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Type hints @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20217/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20217", "html_url": "https://github.com/huggingface/transformers/pull/20217", "diff_url": "https://github.com/huggingface/transformers/pull/20217.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20217.patch", "merged_at": 1668617620000 }
https://api.github.com/repos/huggingface/transformers/issues/20216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20216/comments
https://api.github.com/repos/huggingface/transformers/issues/20216/events
https://github.com/huggingface/transformers/issues/20216
1,448,521,466
I_kwDOCUB6oc5WVq76
20,216
Infer pretrained base model and tokenizer used from a fine-tuned model_dir
{ "login": "yunjiangster", "id": 1061224, "node_id": "MDQ6VXNlcjEwNjEyMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1061224?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yunjiangster", "html_url": "https://github.com/yunjiangster", "followers_url": "https://api.github.com/users/yunjiangster/followers", "following_url": "https://api.github.com/users/yunjiangster/following{/other_user}", "gists_url": "https://api.github.com/users/yunjiangster/gists{/gist_id}", "starred_url": "https://api.github.com/users/yunjiangster/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yunjiangster/subscriptions", "organizations_url": "https://api.github.com/users/yunjiangster/orgs", "repos_url": "https://api.github.com/users/yunjiangster/repos", "events_url": "https://api.github.com/users/yunjiangster/events{/privacy}", "received_events_url": "https://api.github.com/users/yunjiangster/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### Feature request I would like to infer the pretrained base model and the tokenizer from the model_dir of a fine-tuned model. Is that possible with the default trainer setup? What I did previously was to programmatically copy all the tokenizer data into each checkpoint folder. But this seems cumbersome. ### Motivation This reduces overhead during inference/prediction. All I have to specify is just the model_dir. ### Your contribution Given enough guidance, I am happy to submit a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20216/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20215/comments
https://api.github.com/repos/huggingface/transformers/issues/20215/events
https://github.com/huggingface/transformers/issues/20215
1,448,501,532
I_kwDOCUB6oc5WVmEc
20,215
Why is tokenized-text Non-Fast when it is of type BatchEncoding and the tokenizer is Fast?
{ "login": "vineetk1", "id": 10158529, "node_id": "MDQ6VXNlcjEwMTU4NTI5", "avatar_url": "https://avatars.githubusercontent.com/u/10158529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineetk1", "html_url": "https://github.com/vineetk1", "followers_url": "https://api.github.com/users/vineetk1/followers", "following_url": "https://api.github.com/users/vineetk1/following{/other_user}", "gists_url": "https://api.github.com/users/vineetk1/gists{/gist_id}", "starred_url": "https://api.github.com/users/vineetk1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineetk1/subscriptions", "organizations_url": "https://api.github.com/users/vineetk1/orgs", "repos_url": "https://api.github.com/users/vineetk1/repos", "events_url": "https://api.github.com/users/vineetk1/events{/privacy}", "received_events_url": "https://api.github.com/users/vineetk1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Hey, could you provide a full reproduction script? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,674
1,674
NONE
null
### System Info Huggingface Transformers version **4.21.1** PyTorch version **1.12.1+cu116** Python version **3.10.4** Platform Ubuntu **22.04.1 LTS** **transformers-cli env** Traceback (most recent call last): File "/home/vin/.local/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "/home/vin/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module> from .pt_to_tf import PTtoTFCommand File "/home/vin/.local/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module> from datasets import load_dataset ModuleNotFoundError: No module named 'datasets' ### Who can help? @LysandreJik, @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ### Step 1: Text is tokenized as follows: ``` batch_nnIn_ids = self.tokenizer(text=batch_histories, text_pair=batch_user_input_pretok, is_split_into_words=True, padding=True, truncation='only_first', return_tensors='pt', return_token_type_ids=False, return_attention_mask=True, return_overflowing_tokens=False) ``` ### Step 2A: Tokenized text is put in a dictionary as follows: ``` batch = { 'user_input_pretok': batch_user_input_pretok, 'nnIn_ids': batch_nnIn_ids, 'ids': batch_ids, 'labels': batch_token_labels } return batch ``` ### Step 2B: Using PDB, I check that the tokenized text is of type BatchEncoding and that the tokenizer is Fast: ``` (Pdb) type(batch['nnIn_ids']), batch['nnIn_ids'].is_fast, self.tokenizer.is_fast (<class 'transformers.tokenization_utils_base.BatchEncoding'>, True, True) Note that the tokenizer is Fast, batch['nnIn_ids'] is Fast and it is of type BatchEncoding ``` ### Step 3: Control goes to Lightning, and then to the predict function where I use PDB to make the same checks as follows: ``` (Pdb) type(batch['nnIn_ids']), batch['nnIn_ids'].is_fast, self.tokenizer.is_fast (<class 'transformers.tokenization_utils_base.BatchEncoding'>, False, True) ``` Why did batch['nnIn_ids'] switched to non-Fast from Fast? Note that the tokenizer is Fast but batch['nnIn_ids'] is Non-Fast even though it is of type BatchEncoding ### Expected behavior A text is tokenized using a Fast tokenizer, and the tokenized-text is of type BatchEncoding and it is also Fast. After this tokenized text is passed to another function through Lightning, it becomes non-Fast even though the tokenized-text remains of type BatchEncoding and the tokenizer remain Fast. The expected behavior of the tokenized-text should remain Fast.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20215/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20214
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20214/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20214/comments
https://api.github.com/repos/huggingface/transformers/issues/20214/events
https://github.com/huggingface/transformers/pull/20214
1,448,408,777
PR_kwDOCUB6oc5C22eO
20,214
Allow trainer to return eval. loss for CLIP-like models
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Hopefully the change covers everything that could happen now and in the future." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Allow trainer to give **evaluation** loss for CLIP-like models. Currently, this line https://github.com/huggingface/transformers/blob/07d8d6e2f7a920d399e5e86a82d78179cdfa6746/src/transformers/trainer.py#L3192 gives `has_labels = False` for CLIP-like models, and can't give loss value in the evaluation. without this PR: ```bash ***** eval metrics ***** epoch = 1.0 eval_runtime = 0:00:01.67 eval_samples_per_second = 9.571 eval_steps_per_second = 4.785 ``` with this PR. ```bash ***** eval metrics ***** epoch = 1.0 eval_loss = 0.8159 eval_runtime = 0:00:01.66 eval_samples_per_second = 9.598 eval_steps_per_second = 4.799 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20214/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20214", "html_url": "https://github.com/huggingface/transformers/pull/20214", "diff_url": "https://github.com/huggingface/transformers/pull/20214.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20214.patch", "merged_at": 1668538030000 }
https://api.github.com/repos/huggingface/transformers/issues/20213
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20213/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20213/comments
https://api.github.com/repos/huggingface/transformers/issues/20213/events
https://github.com/huggingface/transformers/pull/20213
1,448,311,468
PR_kwDOCUB6oc5C2hsM
20,213
Generate: add Bloom fixes for contrastive search
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20213). All of your documentation changes will be reflected on that endpoint.", "(rebasing to include #20200 in CI, the related test was failing)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20213). All of your documentation changes will be reflected on that endpoint.", "Stumbled on the same issue and found this fix. Thanks lots! " ]
1,668
1,668
1,668
MEMBER
null
# What does this PR do? Bloom has a different cache format, where the batch size and the number of heads are packed in a single dimension. Contrastive search needs to manipulate the cache at the batch dimension, so naturally it fails. This PR adds functionality to convert Bloom's cache back and forth between its own format and the standard cache format. Then, propagates the use of these new functions to places where the conversion logic was already being used, and finally fixes Bloom's contrastive search. All slow tests are passing. ____________________________________ This fix was also requested [here](https://huggingface.co/spaces/joaogante/contrastive_search_generation/discussions/1#636e23d1c441b42489215026)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20213/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20213/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20213", "html_url": "https://github.com/huggingface/transformers/pull/20213", "diff_url": "https://github.com/huggingface/transformers/pull/20213.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20213.patch", "merged_at": 1668450852000 }
https://api.github.com/repos/huggingface/transformers/issues/20212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20212/comments
https://api.github.com/repos/huggingface/transformers/issues/20212/events
https://github.com/huggingface/transformers/issues/20212
1,448,292,974
I_kwDOCUB6oc5WUzJu
20,212
The Document of Pipelines are not complete.
{ "login": "zhaowei-wang-nlp", "id": 22047467, "node_id": "MDQ6VXNlcjIyMDQ3NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaowei-wang-nlp", "html_url": "https://github.com/zhaowei-wang-nlp", "followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers", "following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs", "repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos", "events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "Is this doc more helpful ?\r\n\r\nhttps://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines\r\n\r\nWe definitely need to update the docs for pipeline overall.\r\n\r\n", "This doc still doesn't mention that results will be returned one by one (instead of in a batch) if I pass a dataset to the pipeline.", "https://github.com/huggingface/transformers/pull/20437\r\n\r\nDon't hesitate to comment on the PR if you feel it's not clear enough.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,672
1,672
NONE
null
### System Info transformers 4.24.0 ### Who can help? @sgugger @stevhliu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.FillMaskPipeline.__call__ The "\_\_call\_\_" function only describes what will happen when I pass a list of samples into the pipeline. However, this function also receives other input, such as a dataset: https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipeline-batching The document currently doesn't have a description of what will happen if I pass a dataset to the pipeline. This bothers me a lot. For example, when I pass a dataset and use batch_size=64 in this pipeline, I expect that I will get the results of 64 samples. However, the pipeline only returns the result of the next sample and I need to iterate 64 times to get the results of the whole batch. This doesn't involve any fatal error but is very confusing. ### Expected behavior add a description of passing a dataset to a pipeline.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20212/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20211/comments
https://api.github.com/repos/huggingface/transformers/issues/20211/events
https://github.com/huggingface/transformers/pull/20211
1,448,281,954
PR_kwDOCUB6oc5C2bff
20,211
prepare for "__floordiv__ is deprecated and its behavior will change in a future version of pytorch"
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,677
1,677
COLLABORATOR
null
# What does this PR do? Should address the `__floordiv__` warnings mentioned in #19934. Dividing torch tensors with `//` is deprecated and has to be done via `torch.div` instead.
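The semantics the PR has to preserve can be sketched as follows — a minimal plain-Python illustration (torch is deliberately not imported here); for real tensors the replacement the PR describes is `torch.div(a, b, rounding_mode="floor")`:

```python
import math

# Floor-division semantics that the torch.div replacement must reproduce.
# For torch tensors: torch.div(a, b, rounding_mode="floor") instead of a // b.
def floor_div(a, b):
    return math.floor(a / b)

print(floor_div(7, 2))   # 3
print(floor_div(-7, 2))  # -4  (floor rounds toward -inf, not toward zero)
```

Note that `rounding_mode="floor"` matches Python's `//`, whereas `rounding_mode="trunc"` would round toward zero and give `-3` for the second case.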
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20211/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20211", "html_url": "https://github.com/huggingface/transformers/pull/20211", "diff_url": "https://github.com/huggingface/transformers/pull/20211.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20211.patch", "merged_at": 1677664162000 }
https://api.github.com/repos/huggingface/transformers/issues/20210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20210/comments
https://api.github.com/repos/huggingface/transformers/issues/20210/events
https://github.com/huggingface/transformers/pull/20210
1,448,278,845
PR_kwDOCUB6oc5C2a1H
20,210
[DPR] fix unexpected "pooler" keys
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20210). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Fixes #19111, by adding [r"pooler"] to the list of ignored unexpected keys for both the `DPRPretrainedContextEncoder` and `DPRPretrainedQuestionEncoder`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20210/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20210", "html_url": "https://github.com/huggingface/transformers/pull/20210", "diff_url": "https://github.com/huggingface/transformers/pull/20210.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20210.patch", "merged_at": 1668445520000 }
https://api.github.com/repos/huggingface/transformers/issues/20209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20209/comments
https://api.github.com/repos/huggingface/transformers/issues/20209/events
https://github.com/huggingface/transformers/pull/20209
1,448,129,147
PR_kwDOCUB6oc5C16JY
20,209
Add gpt-sw3 model to transformers
{ "login": "ekgren", "id": 1921821, "node_id": "MDQ6VXNlcjE5MjE4MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1921821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekgren", "html_url": "https://github.com/ekgren", "followers_url": "https://api.github.com/users/ekgren/followers", "following_url": "https://api.github.com/users/ekgren/following{/other_user}", "gists_url": "https://api.github.com/users/ekgren/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekgren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekgren/subscriptions", "organizations_url": "https://api.github.com/users/ekgren/orgs", "repos_url": "https://api.github.com/users/ekgren/repos", "events_url": "https://api.github.com/users/ekgren/events{/privacy}", "received_events_url": "https://api.github.com/users/ekgren/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! Feel free to ping me if you need any pointers! :) \r\n5seems like the history is a bit broken at this point `rebasing` with a force push should help. ", "> Hey! Feel free to ping me if you need any pointers! :) 5seems like the history is a bit broken at this point `rebasing` with a force push should help.\r\n\r\nHi thank you for the help, did a rebase! \r\n\r\nWe are soon going to put the models public on the hub, after that we hope that we are close to being able to make the PR ready for review.\r\n", "_The documentation is not available anymore as the PR was closed or merged._", "We have finally reached the point where most of the work seems to be done and would appreciate any feedback or a merge. The models haven't been made public yet but will be shortly. @ArthurZucker ", "> Thanks for the great work! My biggest question here is : what are the architectural differences with `gpt2`? If there are none, I am not sure that we actually need to add any modelling files\r\n\r\nSo after reimplementing the code from nemo Megatron it turns out that with float 32 we get the exact same output between our implementation and the hf gpt-2 implementation and we reverted back to copying the GPT-2 hf implementation. So in the end we might not need it although we would of course be happy to have our model name featured as a class, the same as bloom and opt. The tokenizer on the other hand implements new behaviour.\r\n\r\nWhat is your opinion on the model class?", "@ArthurZucker just checking in that we adressed your comments! Thank you. We had a couple of questions otherwise it looks good from our end 🤩", "Actually, it seems like the modeling code is exactly the same as for GPT2? In this case you can just set in the auto-mappings a correspondance `(\"gpt-sw3\", \"GPT2Model\")` without needing to add a new model module.", "Yep sorry for the late reply! Let's do the same as what was done with [BertJapanese](https://huggingface.co/docs/transformers/model_doc/bert-japanese). I'll review again sorry for not realising sooner `# Copied from` 😓 ", "> Actually, it seems like the modeling code is exactly the same as for GPT2? In this case you can just set in the auto-mappings a correspondance `(\"gpt-sw3\", \"GPT2Model\")` without needing to add a new model module.\r\n\r\nThank you for your feedback, we're happy to follow your lead on how to proceed! So, if we understand you correctly, we should then remove `modeling_gpt_sw3.py`, `configuration_gpt_sw3.py` entirely?\r\n\r\n> Yep sorry for the late reply! Let's do the same as what was done with [BertJapanese](https://huggingface.co/docs/transformers/model_doc/bert-japanese). I'll review again sorry for not realising sooner `# Copied from` sweat\r\n\r\nShould we await further review or simply get started on this?\r\n\r\n@sgugger @ArthurZucker ", "Yes, that would be easier. Just remove the model and config files and in the auto mapping, use the GPT2 classes.", "Thank you again for your help, I hope we have now resolved all of your issues. Do you see anything else required from our side in this PR? @sgugger @ArthurZucker ", "A last nit @JoeyOhman , could you add an example of a pretrained model that you released being loaded in the doc? Like what is done with `BertJapanese` [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20209/en/model_doc/bert-japanese). Would help people to understand that they can use the `GPT2` model with this tokenizer 😉 ", "Hey @ekgren could you add the correct checkpoints? They are probably private. \r\nSee our CI fail [here](https://github.com/huggingface/transformers/actions/runs/3681682680/jobs/6228686391#step:8:1027) ", "@sgugger @ArthurZucker Thank you for all the help and guidance! We have made all the tokenizers reffered to in the PR public.\r\n\r\nWe encountered some internal issues with the model sharing in the last minute, very sorry for that. Currently we are not allowed to share the model files publicly. However we can share the tokenizer and would very much like for it to be included in huggingface, since those with private access to the model easily can use the full hf ecosystem. We hope to be able to share the models fully public in the near future.\r\n\r\nHopefully our PR can still be included in the release now that the tests should pass.", "No problem, I was thinking about the tokenizer rather than the actual checkpoints! You were mostly adding a tokenizer so I don't really see an issue with this 😉 Thanks for the contribution!" ]
1,668
1,671
1,670
CONTRIBUTOR
null
This adds the gpt-sw3 models and tokenizer to hf. The models are developed by AI Sweden and others. They are gpt models trained from scratch with the nemo-megatron framework and will initially range in sizes from 128m to 20B. The models are multilingual, covering English, Swedish, Norwegian, Danish and Icelandic. Fixes https://github.com/huggingface/transformers/issues/20176 @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20209", "html_url": "https://github.com/huggingface/transformers/pull/20209", "diff_url": "https://github.com/huggingface/transformers/pull/20209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20209.patch", "merged_at": 1670868733000 }
https://api.github.com/repos/huggingface/transformers/issues/20208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20208/comments
https://api.github.com/repos/huggingface/transformers/issues/20208/events
https://github.com/huggingface/transformers/pull/20208
1,448,127,213
PR_kwDOCUB6oc5C15uL
20,208
Generate: TF sample doctest result update
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20208). All of your documentation changes will be reflected on that endpoint.", "> But here do you mean some specific implementation differences in the generation method?\r\n\r\nNo, all differences are just at hardware level :)" ]
1,668
1,668
1,668
MEMBER
null
# What does this PR do? Updates sample's output to match the [expected output in CI](https://github.com/huggingface/transformers/actions/runs/3457499228/jobs/5770974732), which has a GPU. (The previous output was for a CPU device -- sampling has different outputs for the same seed depending on the hardware, due to implementation differences.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20208/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20208", "html_url": "https://github.com/huggingface/transformers/pull/20208", "diff_url": "https://github.com/huggingface/transformers/pull/20208.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20208.patch", "merged_at": 1668440569000 }
https://api.github.com/repos/huggingface/transformers/issues/20207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20207/comments
https://api.github.com/repos/huggingface/transformers/issues/20207/events
https://github.com/huggingface/transformers/issues/20207
1,448,098,663
I_kwDOCUB6oc5WUDtn
20,207
Pipeline only returns the result of the first sample in a batch
{ "login": "zhaowei-wang-nlp", "id": 22047467, "node_id": "MDQ6VXNlcjIyMDQ3NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaowei-wang-nlp", "html_url": "https://github.com/zhaowei-wang-nlp", "followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers", "following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs", "repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos", "events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,668
1,668
1,668
NONE
null
### System Info transformers 4.24.0 ### Who can help? @Nars ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I just used the code written on https://huggingface.co/docs/transformers/main_classes/pipelines, as follows: ``` from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset import datasets dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised") pipe = pipeline("text-classification", device=0) for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"): print(out) # [{'label': 'POSITIVE', 'score': 0.9998743534088135}] # Exactly the same output as before, but the content are passed # as batches to the model ``` The problem is the variable only contains one result of the first sample in a batch, even though the batch size is 8. If I write: ``` [x for x in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first")] ``` I will get the results of all samples. ### Expected behavior The batch size is 8 so the variable "out" should contain 8 results (8 lists of dicts).
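The behaviour the report describes is by design: `batch_size` only controls how inputs are grouped for the model forward pass, while results are still yielded one sample at a time. A toy sketch of that iteration semantics (this is an illustration, not the actual `transformers` pipeline code):

```python
def toy_pipeline(texts, batch_size=8):
    # Group inputs into batches for the (pretend) model forward pass...
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        batch_results = [[{"label": "POSITIVE", "score": 1.0}] for _ in batch]
        # ...but un-batch before yielding: one result per input sample.
        yield from batch_results

outs = list(toy_pipeline(["a", "bb", "ccc"], batch_size=2))
print(len(outs))  # 3 -- one result per sample, regardless of batch_size
```

So iterating the pipeline always produces per-sample outputs; collecting a whole batch requires consuming `batch_size` items from the iterator.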
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20207/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20206/comments
https://api.github.com/repos/huggingface/transformers/issues/20206/events
https://github.com/huggingface/transformers/issues/20206
1,448,010,160
I_kwDOCUB6oc5WTuGw
20,206
ONNX support for encoder/decoder separately
{ "login": "yanksyoon", "id": 37652070, "node_id": "MDQ6VXNlcjM3NjUyMDcw", "avatar_url": "https://avatars.githubusercontent.com/u/37652070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanksyoon", "html_url": "https://github.com/yanksyoon", "followers_url": "https://api.github.com/users/yanksyoon/followers", "following_url": "https://api.github.com/users/yanksyoon/following{/other_user}", "gists_url": "https://api.github.com/users/yanksyoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanksyoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanksyoon/subscriptions", "organizations_url": "https://api.github.com/users/yanksyoon/orgs", "repos_url": "https://api.github.com/users/yanksyoon/repos", "events_url": "https://api.github.com/users/yanksyoon/events{/privacy}", "received_events_url": "https://api.github.com/users/yanksyoon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Might be of interest to @lewtun ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @yanksyoon we have a new ONNX exporter in `optimum` [docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model) that splits the encoder and decoder separately for fast inference with T5 models. I recommend opening your feature request there\r\n\r\ncc @fxmarty ", "Hi @yanksyoon , indeed Triton Inference Server should be quite good! For reference, the latest release should help to export separately encoder/decoder, see the section \"Extended ONNX export for encoder-decoder and decoder models\": https://github.com/huggingface/optimum/releases/tag/v1.6.0\r\n\r\nWe are very open to contributions to make the use of Triton Inference Server for decoder models smoother in Optimum. Feel free to open an issue there so that we can track and help!", "@fxmarty @lewtun Thanks for the guidance! Will have a look this weekend :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,675
1,675
NONE
null
### Feature request Hello, thanks for your work! I am trying to leverage NVIDIA's TRITON inference server to speed up generation for Blenderbot model for production. If I export the model in ONNX format, it exports the entire model whereas generative models go through a separate encoder->decoder cycles(if I understand correctly). Would you be able to point me towards a direction on how I would be able to manage this? I've tried: 1. writing an ONNX config and generating the model.onnx files separately for each encoder & decoder. Got stuck at converting encoder outputs to decoder inputs and such, mainly due to mismatch in ONNX model outputs and generative model outputs(such as hidden layers) 2. using the whole model exported as onnx and loading it into TRITON inference server. Also stuck at generation parameters & handling input/outputs, reason same as above. ### Motivation To use generative model for blenderbot 3 on TRITON inference server ### Your contribution Would love to make it work and share the results, either through PR for separate encoder/decoder onnx configs or sharing the results.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20205/comments
https://api.github.com/repos/huggingface/transformers/issues/20205/events
https://github.com/huggingface/transformers/pull/20205
1,448,005,417
PR_kwDOCUB6oc5C1e7e
20,205
Make size_dict conversion logs clearer
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger @patrickvonplaten The quickest and easiest remedy I had was adding a `param_name` flag for the `get_size_dict` function. This way, if an image processor has both `size` and `crop_size` variables being updated, the logs reflect the parameter being changed. However, it's a bit of a dirty trick. LMK if you have an alternative suggestion. \r\n\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20205). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20205). All of your documentation changes will be reflected on that endpoint.", "Thanks!" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? * Tidies up logic for converting `size` parameter to the expected dictionary format for image processors. * Adds `param_name` as a flag so logs reflect the variable being updated e.g. `crop_size` versus `size` Address part of #20185 - trying to make the logs clearer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
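The `param_name` flag described above can be sketched roughly as below — a hypothetical simplification, not the actual `get_size_dict` implementation in `transformers`; the point is that the log message names the parameter being converted (e.g. `crop_size` versus `size`):

```python
import logging

logger = logging.getLogger(__name__)

def get_size_dict(size, param_name="size"):
    """Convert an int or (height, width) pair into the dict format image
    processors expect, logging which parameter was converted."""
    if isinstance(size, dict):
        return size
    if isinstance(size, int):
        logger.info("Converting %s=%r to a square size dict.", param_name, size)
        return {"height": size, "width": size}
    if isinstance(size, (list, tuple)) and len(size) == 2:
        logger.info("Converting %s=%r to a size dict.", param_name, size)
        return {"height": size[0], "width": size[1]}
    raise ValueError(f"Unsupported value for {param_name}: {size!r}")

print(get_size_dict(224, param_name="crop_size"))  # {'height': 224, 'width': 224}
```

With this shape, an image processor that normalizes both `size` and `crop_size` can call the helper twice with different `param_name` values, so each log line says which attribute it refers to.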
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20205/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20205", "html_url": "https://github.com/huggingface/transformers/pull/20205", "diff_url": "https://github.com/huggingface/transformers/pull/20205.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20205.patch", "merged_at": 1668509578000 }
https://api.github.com/repos/huggingface/transformers/issues/20204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20204/comments
https://api.github.com/repos/huggingface/transformers/issues/20204/events
https://github.com/huggingface/transformers/pull/20204
1,447,964,530
PR_kwDOCUB6oc5C1WEE
20,204
[MaskFormer] PoC of AutoBackbone API to support ResNet + Swin
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20204). All of your documentation changes will be reflected on that endpoint.", "Closing this PR as it has been added in smaller separate PRs.", "Thanks again for splitting it, it was really better this way!" ]
1,668
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? This PR adds support for more backbones than just Swin for the MaskFormer framework. The MaskFormer authors released checkpoints that leverage either ResNet or Swin as backbones, however we currently only support Swin. To support various backbones, this PR introduces the AutoBackbone API. It introduces the following improvements: - [x] adding AutoBackbone, ResNetBackbone - [x] move MaskFormerSwin to its own modeling files and add MaskFormerSwinBackbone - [x] make MaskFormer use the AutoBackbone API to leverage any backbone, including ResNet ## AutoBackbone API The API is implemented as follows. For a given model, one should implement an additional class, `xxxBackbone`, for instance `ResNetBackbone`, in addition to the regular classes like `xxxModel` and `xxxForImageClassification`. The backbone class turns the `xxxModel` into a generic backbone to be consumed by a framework, like DETR or MaskFormer. The API is inspired by the one used in [Detectron2](https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/backbone.py). This means that any backbone should implement a `forward` and an `output_shape` method: * the `forward` method returns the hidden states for each of the desired stages * the `output_shape` method returns the channel dimension + strides for each of the desired stages. There are additional methods like `size_divisibility` and `padding_constraints` which could be added in the future, for now they don't seem necessary. ## Usage An example can be found below. Basically, the user can specify which layers/stages to get the feature maps from. 
``` from transformers import ResNetConfig, ResNetBackbone import torch config = ResNetConfig(out_features=["stem", "stage1", "stage2", "stage3", "stage4"]) model = ResNetBackbone(config) pixel_values = torch.randn(1, 3, 224, 224) outputs = model(pixel_values) for key, value in outputs.items(): print(key, value.shape) ``` which prints: ``` stem torch.Size([1, 64, 56, 56]) stage1 torch.Size([1, 256, 56, 56]) stage2 torch.Size([1, 512, 28, 28]) stage3 torch.Size([1, 1024, 14, 14]) stage4 torch.Size([1, 2048, 7, 7]) ``` One can check the output specification as follows: ``` print(model.output_shape()) ``` which prints: ``` {'stem': ShapeSpec(channels=64, height=None, width=None, stride=2), 'stage1': ShapeSpec(channels=256, height=None, width=None, stride=4), 'stage2': ShapeSpec(channels=512, height=None, width=None, stride=4), 'stage3': ShapeSpec(channels=1024, height=None, width=None, stride=4), 'stage4': ShapeSpec(channels=2048, height=None, width=None, stride=4)} ``` This is useful for frameworks, as they oftentimes require to know these things at initialization. The Backbone API has a corresponding Auto class, which means that the following also works: ``` from transformers import ResNetConfig, AutoBackbone config = ResNetConfig(out_features=["stem", "stage1", "stage2", "stage3", "stage4"]) model = AutoBackbone.from_config(config) ``` The AutoBackbone class also supports loading pre-trained weights, like so: ``` from transformers import AutoBackbone backbone = AutoBackbone.from_pretrained("microsoft/resnet-50") ``` As the backbone also uses the same `base_model_prefix` like other head models. ## To do's - [ ] Add tests for backbones. Backbone classes should not be tested with all tests defined in `test_modeling_common.py`, instead they should have separate tests. Here I'd like to discuss the best way to add these tests. 
- [ ] make fixup is currently complaining about the following: ``` Exception: There were 2 failures: MaskFormerSwinBackbone is defined in transformers.models.maskformer.modeling_maskformer_swin but is not present in any of the auto mapping. If that is intended behavior, add its name to `IGNORE_NON_AUTO_CONFIGURED` in the file `utils/check_repo.py`. ResNetBackbone is defined in transformers.models.resnet.modeling_resnet but is not present in any of the auto mapping. If that is intended behavior, add its name to `IGNORE_NON_AUTO_CONFIGURED` in the file `utils/check_repo.py` ``` => however I added both MaskFormerSwinBackbone and ResNetBackbone to modeling_auto.py, so not sure why this fails. cc @sgugger ## MaskFormer specifics MaskFormer supports both ResNet and Swin as backbone. It does support native ResNets, but it doesn't use the native Swin as backbone, which is why we have a separate `MaskFormerSwinModel` class in the library, as well as a `MaskFormerSwinBackbone` class in this PR. Happy to discuss the design!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20204/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20204", "html_url": "https://github.com/huggingface/transformers/pull/20204", "diff_url": "https://github.com/huggingface/transformers/pull/20204.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20204.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20203/comments
https://api.github.com/repos/huggingface/transformers/issues/20203/events
https://github.com/huggingface/transformers/pull/20203
1,447,914,201
PR_kwDOCUB6oc5C1K6l
20,203
update relative positional embedding
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20203). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20203). All of your documentation changes will be reflected on that endpoint.", "cc @sgugger just FYI, will ping you once I know that all the tests pass. ", "( before the modification the added test did not pass, now they do) ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20203). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? The current `relative_key` positional embedding is incorrect when `use_cache=True`. This was first highlighted in #19045. Reproducing script: ```python import torch from transformers import BertTokenizer, BertLMHeadModel, set_seed tokenizer = BertTokenizer.from_pretrained("zhiheng-huang/bert-base-uncased-embedding-relative-key") model = BertLMHeadModel.from_pretrained("zhiheng-huang/bert-base-uncased-embedding-relative-key", is_decoder = True) inputs = tokenizer("No I'm not missing the ", return_tensors="pt") input_ids = inputs.input_ids[:,:-1] attention_mask = inputs.attention_mask[:,:-1] with torch.no_grad(): model.config.use_cache = False set_seed(0) output = model(input_ids, attention_mask = attention_mask, use_cache =False) print(output.logits[:,-1,:]) model.config.use_cache = True output_1 = model(input_ids[:,:-1], use_cache = True, attention_mask = attention_mask[:,:-1]) pkv = output_1.past_key_values output_2 = model(input_ids[:,-1:], past_key_values = pkv , use_cache = True) print(output_2.logits[:,-1,:]) ``` ``` tensor([[-5.4971, -6.4888, -8.3359, ..., -7.3612, -5.5480, -0.9784]]) tensor([[ -7.2693, -7.7799, -10.0905, ..., -7.5183, -7.4255, -4.6804]]) ``` Let's also make sure that this feature is tested using `create_and_check_decoder_model_past_large_inputs`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20203/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20203", "html_url": "https://github.com/huggingface/transformers/pull/20203", "diff_url": "https://github.com/huggingface/transformers/pull/20203.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20203.patch", "merged_at": 1668505595000 }
https://api.github.com/repos/huggingface/transformers/issues/20202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20202/comments
https://api.github.com/repos/huggingface/transformers/issues/20202/events
https://github.com/huggingface/transformers/pull/20202
1,447,773,342
PR_kwDOCUB6oc5C0sLq
20,202
Downgrade log warning -> info
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20202). All of your documentation changes will be reflected on that endpoint.", "Thank you! " ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Downgrades the logged messages about the size parameter being converted from int/tuple -> dict from warning to info. ## Fixes Addresses part of the issue raised in #20185 - where many downstream tasks would have multiple warning messages, potentially scaring the users and spamming their logs. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20202/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20202", "html_url": "https://github.com/huggingface/transformers/pull/20202", "diff_url": "https://github.com/huggingface/transformers/pull/20202.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20202.patch", "merged_at": 1668448613000 }
https://api.github.com/repos/huggingface/transformers/issues/20201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20201/comments
https://api.github.com/repos/huggingface/transformers/issues/20201/events
https://github.com/huggingface/transformers/issues/20201
1,447,710,299
I_kwDOCUB6oc5WSk5b
20,201
distributed training
{ "login": "liumaishen", "id": 59140766, "node_id": "MDQ6VXNlcjU5MTQwNzY2", "avatar_url": "https://avatars.githubusercontent.com/u/59140766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liumaishen", "html_url": "https://github.com/liumaishen", "followers_url": "https://api.github.com/users/liumaishen/followers", "following_url": "https://api.github.com/users/liumaishen/following{/other_user}", "gists_url": "https://api.github.com/users/liumaishen/gists{/gist_id}", "starred_url": "https://api.github.com/users/liumaishen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liumaishen/subscriptions", "organizations_url": "https://api.github.com/users/liumaishen/orgs", "repos_url": "https://api.github.com/users/liumaishen/repos", "events_url": "https://api.github.com/users/liumaishen/events{/privacy}", "received_events_url": "https://api.github.com/users/liumaishen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
Why is the training arg n_gpu set to 1 when using Trainer's distributed training? Does this mean only one device is allowed per node? The printed training parameters are also calculated with n_gpus=1. What should I do when every node has multiple GPUs?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20201/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20200/comments
https://api.github.com/repos/huggingface/transformers/issues/20200/events
https://github.com/huggingface/transformers/pull/20200
1,447,695,354
PR_kwDOCUB6oc5C0bS0
20,200
mark `test_save_load_fast_init_from_base` as `is_flaky`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I guess the best first step is to run it 1000 times (in a loop) and see how many times it fails (before and after your PR 20042).\r\n\r\nSometimes it is difficult to get reproducible failure if it is flaky.", "As I commented before, this probably comes from models having weights initialized outside of the `_init_weights` function. A way to debug would be to drop somewhere which weight was randomly dropped when the test has failed (if it's printed for instance, we can look for it in the logs)" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Mark `test_save_load_fast_init_from_base` as `is_flaky`. - This test is known to be flaky, see #19849 - The level of flakiness seems to get higher after #20042 - **ran 5 times, and all passed**. - **TODO**: check why #20042 makes this test more flaky.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20200/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20200", "html_url": "https://github.com/huggingface/transformers/pull/20200", "diff_url": "https://github.com/huggingface/transformers/pull/20200.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20200.patch", "merged_at": 1668448294000 }
https://api.github.com/repos/huggingface/transformers/issues/20199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20199/comments
https://api.github.com/repos/huggingface/transformers/issues/20199/events
https://github.com/huggingface/transformers/pull/20199
1,447,612,404
PR_kwDOCUB6oc5C0Jhy
20,199
[docs] wrote i18n issue template
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20199). All of your documentation changes will be reflected on that endpoint.", "> Thanks a lot for drafting this! I think the `WIP` label is fine, it would avoid the issue getting closed by the stale-bot.\r\n\r\nOk, adding it in. So resulting lables will be `WIP` only.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20199). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Makes the issue template for translation- `i18n.md` available. Thank you @omarespejel https://github.com/huggingface/transformers/pull/17004 Some questions: - Would you prefer a `.yml` format for templates? I can convert it if you wish, @sgugger. - What labels should be added to the issue? `docs`, `i18n`, `help-wanted`, `WIP` perhaps? Part of https://github.com/huggingface/transformers/issues/20183, an effort to update the translation guide. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Hello @sgugger, may you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20199/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/20199/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20199", "html_url": "https://github.com/huggingface/transformers/pull/20199", "diff_url": "https://github.com/huggingface/transformers/pull/20199.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20199.patch", "merged_at": 1668447418000 }
https://api.github.com/repos/huggingface/transformers/issues/20198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20198/comments
https://api.github.com/repos/huggingface/transformers/issues/20198/events
https://github.com/huggingface/transformers/pull/20198
1,447,595,492
PR_kwDOCUB6oc5C0F9X
20,198
Fix bug in segmentation postprocessing
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20198). All of your documentation changes will be reflected on that endpoint.", "> Could you add a corresponding test for this?\r\n\r\nAdded a test to `test_feature_extraction_maskformer.py` to test label fusing, I think I covered all cases but the test might still be a bit flaky due to using dummy model outputs.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20198). All of your documentation changes will be reflected on that endpoint.", "@NielsRogge I thought this issue was fixed with the ImageProcessor PR but the MaskFormer instance segmentation post-processing issue still persists. I updated the PR and added a test for label fusing. Could you take another look and approve if everything looks good?", "> PR looks good to me, except that modeling_vit.py should not be changed\r\n\r\nThe changes are reverted, could you take another look? ", "> Thanks for fixing!\r\n> \r\n> However I'm still wondering how MaskFormer would solve instance segmentation datasets which have overlapping instances, like COCO\r\n\r\nWe have MaskFormer models trained on the COCO panoptic and ADE semantic datasets, other models don't have model cards yet. But I can test it if one of the other models is trained on an instance segmentation dataset.\r\n\r\n" ]
1,668
1,672
1,672
CONTRIBUTOR
null
# What does this PR do? - Fixes bug in `MaskFormerFeatureExtractor.compute_segments()` that causes an error when label_ids_to_fuse is set to None during instance and panoptic segmentation post-processing. - Adds test to check if target labels are fused correctly Fixes #20132 ## Before submitting - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/20132
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20198/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20198", "html_url": "https://github.com/huggingface/transformers/pull/20198", "diff_url": "https://github.com/huggingface/transformers/pull/20198.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20198.patch", "merged_at": 1672846498000 }
https://api.github.com/repos/huggingface/transformers/issues/20197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20197/comments
https://api.github.com/repos/huggingface/transformers/issues/20197/events
https://github.com/huggingface/transformers/pull/20197
1,447,547,322
PR_kwDOCUB6oc5Cz7wH
20,197
[docs] set overflowing image width to auto-scale
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20197). All of your documentation changes will be reflected on that endpoint.", "Thanks for fixing this, it looks good to me!\r\n@LysandreJik Do you mind double-checking?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20197). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20197). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? The image shown below overflowed on small screens. A simple inline-css to set the same width as its div solved the problem. before | after ------- | ------- ![Screenshot_20221115_032312_Chrome.jpg](https://user-images.githubusercontent.com/29195190/201737154-59876bf8-1d71-494f-a25f-015a0636d382.jpg) | ![Screenshot_20221115_032216_Chrome.jpg](https://user-images.githubusercontent.com/29195190/201737194-a23a28a8-71f7-4042-94c3-076c32288de8.jpg) Fixes #20196 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Hello @sgugger, may you please review this simple PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20197/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20197", "html_url": "https://github.com/huggingface/transformers/pull/20197", "diff_url": "https://github.com/huggingface/transformers/pull/20197.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20197.patch", "merged_at": 1668471221000 }
https://api.github.com/repos/huggingface/transformers/issues/20196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20196/comments
https://api.github.com/repos/huggingface/transformers/issues/20196/events
https://github.com/huggingface/transformers/issues/20196
1,447,539,985
I_kwDOCUB6oc5WR7UR
20,196
[docs] index page has overflowing image
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,668
1,668
1,668
CONTRIBUTOR
null
The current index page has an overflowing image for huggingface.co/support as shown below. ![image](https://user-images.githubusercontent.com/29195190/201600669-f5bf8b40-ffc2-4c93-aada-27ba50209671.png) This can be fixed easily by adding `width=100%;` to the inline style. Note: all current languages are affected.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20196/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20195/comments
https://api.github.com/repos/huggingface/transformers/issues/20195/events
https://github.com/huggingface/transformers/issues/20195
1,447,376,927
I_kwDOCUB6oc5WRTgf
20,195
Wav2vec2 Pretraining issue
{ "login": "Kshitizkhandel", "id": 102614926, "node_id": "U_kgDOBh3Hjg", "avatar_url": "https://avatars.githubusercontent.com/u/102614926?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kshitizkhandel", "html_url": "https://github.com/Kshitizkhandel", "followers_url": "https://api.github.com/users/Kshitizkhandel/followers", "following_url": "https://api.github.com/users/Kshitizkhandel/following{/other_user}", "gists_url": "https://api.github.com/users/Kshitizkhandel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kshitizkhandel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kshitizkhandel/subscriptions", "organizations_url": "https://api.github.com/users/Kshitizkhandel/orgs", "repos_url": "https://api.github.com/users/Kshitizkhandel/repos", "events_url": "https://api.github.com/users/Kshitizkhandel/events{/privacy}", "received_events_url": "https://api.github.com/users/Kshitizkhandel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The script you are using probably requires a more recent version of Transformers. cc @sanchit-gandhi ", "As @sgugger has mentioned, you can relax the pinning constraints on all your HF libraries and install them from their latest versions:\r\n```\r\n!pip install transformers datasets accelerate jiwer\r\n```\r\n\r\nAlso, you've got to be careful downgrading `torchaudio` like that as it can mess-up your torch installation. My recommendation would be to leave the default `torchaudio` version and update the Unix package `ffmpeg` to version 4:\r\n\r\n```\r\n!add-apt-repository -y ppa:jonathonf/ffmpeg-4\r\n!apt update\r\n!apt install -y ffmpeg \r\n```\r\n\r\n(taken from this blog https://huggingface.co/blog/fine-tune-whisper#prepare-environment)\r\n", "Hi @Kshitizkhandel :) - I can run it on `4.25.0.dev0` so it is a problem with a version.", "> As @sgugger has mentioned, you can relax the pinning constraints on all your HF libraries and install them from their latest versions:\r\n> \r\n> ```\r\n> !pip install transformers datasets accelerate jiwer\r\n> ```\r\n> \r\n> Also, you've got to be careful downgrading `torchaudio` like that as it can mess-up your torch installation. My recommendation would be to leave the default `torchaudio` version and update the Unix package `ffmpeg` to version 4:\r\n> \r\n> ```\r\n> !add-apt-repository -y ppa:jonathonf/ffmpeg-4\r\n> !apt update\r\n> !apt install -y ffmpeg \r\n> ```\r\n> \r\n> (taken from this blog https://huggingface.co/blog/fine-tune-whisper#prepare-environment)\r\n\r\n\r\n\r\n> As @sgugger has mentioned, you can relax the pinning constraints on all your HF libraries and install them from their latest versions:\r\n> \r\n> ```\r\n> !pip install transformers datasets accelerate jiwer\r\n> ```\r\n> \r\n> Also, you've got to be careful downgrading `torchaudio` like that as it can mess-up your torch installation. 
My recommendation would be to leave the default `torchaudio` version and update the Unix package `ffmpeg` to version 4:\r\n> \r\n> ```\r\n> !add-apt-repository -y ppa:jonathonf/ffmpeg-4\r\n> !apt update\r\n> !apt install -y ffmpeg \r\n> ```\r\n> \r\n> (taken from this blog https://huggingface.co/blog/fine-tune-whisper#prepare-environment)\r\n\r\nThank you for your swift response on this. @sanchit-gandhi I'm running out of disk space even on google colab pro, do you recommend using colab pro plus/ or anything that you suggest?", "Hey @Kshitizkhandel. The full LibriSpeech dataset contains approx 1000h of labelled audio data. As such, it requires ~130GB disk space to download and prepare, which is why you're running into trouble on a Google Colab! Pre-training is an intensive task, both for compute and storage, so it's only really advised if you really need to do it. Otherwise, you can take a pre-trained checkpoint and fine-tune it for your downstream application (see https://huggingface.co/blog/fine-tune-wav2vec2-english).\r\n\r\nMay I ask what the purpose is for pre-training? Are you simply wanting to try it for yourself, or do you have the intent of pre-training on your own custom dataset?\r\n\r\nThe reason that I ask is that the 'official' Wav2Vec2 checkpoints are pre-trained on 60,000h of audio data. These checkpoints are available to you on the HF Hub:\r\n- https://huggingface.co/facebook/wav2vec2-base\r\n- https://huggingface.co/facebook/wav2vec2-large-lv60\r\n\r\n-> so if you want a pre-trained checkpoint, you can skip the pre-training and load any of these pre-trained checkpoints\r\n\r\nThere are also multiple fine-tuned checkpoints that you can use out-of-the-box for downstream speech recognition:\r\n- https://huggingface.co/facebook/wav2vec2-base-960h\r\n- https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self\r\n\r\n-> see the aforementioned blog for fine-tuning these checkpoints\r\n\r\nClosing this issue as the original question has been solved! 
Feel free to ask on the forum if you require any additional assistance with your task and I'll be more than happy to help @Kshitizkhandel! https://discuss.huggingface.co \r\n\r\nBest of luck!", "> Hey @Kshitizkhandel. The full LibriSpeech dataset contains approx 1000h of labelled audio data. As such, it requires ~130GB disk space to download and prepare, which is why you're running into trouble on a Google Colab! Pre-training is an intensive task, both for compute and storage, so it's only really advised if you really need to do it. Otherwise, you can take a pre-trained checkpoint and fine-tune it for your downstream application (see https://huggingface.co/blog/fine-tune-wav2vec2-english).\r\n> \r\n> May I ask what the purpose is for pre-training? Are you simply wanting to try it for yourself, or do you have the intent of pre-training on your own custom dataset?\r\n> \r\n> The reason that I ask is that the 'official' Wav2Vec2 checkpoints are pre-trained on 60,000h of audio data. These checkpoints are available to you on the HF Hub:\r\n> \r\n> * https://huggingface.co/facebook/wav2vec2-base\r\n> * https://huggingface.co/facebook/wav2vec2-large-lv60\r\n> \r\n> -> so if you want a pre-trained checkpoint, you can skip the pre-training and load any of these pre-trained checkpoints\r\n> \r\n> There are also multiple fine-tuned checkpoints that you can use out-of-the-box for downstream speech recognition:\r\n> \r\n> * https://huggingface.co/facebook/wav2vec2-base-960h\r\n> * https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self\r\n> \r\n> -> see the aforementioned blog for fine-tuning these checkpoints\r\n> \r\n> Closing this issue as the original question has been solved! Feel free to ask on the forum if you require any additional assistance with your task and I'll be more than happy to help @Kshitizkhandel! 
https://discuss.huggingface.co\r\n> \r\n> Best of luck!\r\n\r\nI've fine-tuned projects and was looking to try something new but given the hassles it doesn't seem to be a conducive option. Thanks for your help on this @sanchit-gandhi ", "Pre-training is certainly possible, you just need a lot of disk space and compute time for it to be worthwhile!\r\n\r\nIf you want to try something new, you can check out the Whisper model from OpenAI 😉 https://huggingface.co/blog/fine-tune-whisper This gets very good results with very little fine-tuning!", "> Pre-training is certainly possible, you just need a lot of disk space and compute time for it to be worthwhile!\r\n> \r\n> If you want to try something new, you can check out the Whisper model from OpenAI 😉 https://huggingface.co/blog/fine-tune-whisper This gets very good results with very little fine-tuning!\r\n\r\nYeah, I read your lucid and well articulated blog on it. Great work!" ]
1,668
1,668
1,668
NONE
null
### System Info ```shell transformers version-4.11.3 ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1kepA7ryMG7YmNtSYjiJjBM984KRbpZuV#scrollTo=LdIxS2EEgMmz ### Expected behavior ```shell I tried to run the pre training demo of wav2vec2 on libri speech but i run into this error of unrecognised arguments:/ or no module found 'transformers.modeling_outputs ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20195/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20194/comments
https://api.github.com/repos/huggingface/transformers/issues/20194/events
https://github.com/huggingface/transformers/issues/20194
1,447,114,926
I_kwDOCUB6oc5WQTiu
20,194
Spelling Error in Testing Documentation - "checkt" -> "check"
{ "login": "kasmith11", "id": 29484286, "node_id": "MDQ6VXNlcjI5NDg0Mjg2", "avatar_url": "https://avatars.githubusercontent.com/u/29484286?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kasmith11", "html_url": "https://github.com/kasmith11", "followers_url": "https://api.github.com/users/kasmith11/followers", "following_url": "https://api.github.com/users/kasmith11/following{/other_user}", "gists_url": "https://api.github.com/users/kasmith11/gists{/gist_id}", "starred_url": "https://api.github.com/users/kasmith11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kasmith11/subscriptions", "organizations_url": "https://api.github.com/users/kasmith11/orgs", "repos_url": "https://api.github.com/users/kasmith11/repos", "events_url": "https://api.github.com/users/kasmith11/events{/privacy}", "received_events_url": "https://api.github.com/users/kasmith11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting, would you like to open a PR to fix this typo?", "@sgugger sure I can open a PR to fix this type." ]
1,668
1,668
1,668
CONTRIBUTOR
null
### System Info - `transformers` version: 4.24.0.dev0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @stevhliu ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Under `Run documentation tests` section of [Testing documentation](https://huggingface.co/docs/transformers/testing#run-documentation-tests) it says: `In order to test whether the documentation examples are correct, you should checkt that the doctests are passing.` ### Expected behavior The line can be changed to: `In order to test whether the documentation examples are correct, you should check that the doctests are passing.`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20194/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20193/comments
https://api.github.com/repos/huggingface/transformers/issues/20193/events
https://github.com/huggingface/transformers/issues/20193
1,447,043,518
I_kwDOCUB6oc5WQCG-
20,193
Urgent! Weird behavior of CLIPTokenizer when encoding out of vocabulary /non-English text with openai/clip-vit-base-patch32, and question about merges.txt.
{ "login": "Edenzzzz", "id": 87317405, "node_id": "MDQ6VXNlcjg3MzE3NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/87317405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Edenzzzz", "html_url": "https://github.com/Edenzzzz", "followers_url": "https://api.github.com/users/Edenzzzz/followers", "following_url": "https://api.github.com/users/Edenzzzz/following{/other_user}", "gists_url": "https://api.github.com/users/Edenzzzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Edenzzzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Edenzzzz/subscriptions", "organizations_url": "https://api.github.com/users/Edenzzzz/orgs", "repos_url": "https://api.github.com/users/Edenzzzz/repos", "events_url": "https://api.github.com/users/Edenzzzz/events{/privacy}", "received_events_url": "https://api.github.com/users/Edenzzzz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### System Info **Environment is Google colab** - `transformers` version: 4.24.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patil-suraj I've seen you answering questions about CLIP before and would really appreciate your help. Any other's prompt help would be deeply appreciated because I'm in the middle of a project. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The easiest way to reproduce is to open:[this colab notebook](https://colab.research.google.com/drive/1TjXIAWItblSGKFaCND9dYr7qrHoD4R4G#scrollTo=6zDhnjx9HjW1) Step 1: Download and import tokenizer ``` !pip install transformers !pip install datasets !pip install -qq transformers ftfy !pip install -qq "ipywidgets>=7,<8" from transformers import CLIPTokenizer !transformers-cli env ``` Step 2: Test how out-of-vocab tokens are actually tokenized ``` tokenizer = CLIPTokenizer.from_pretrained( "openai/clip-vit-base-patch32", ) print("tokenizer vocab size:",len(tokenizer.encoder)) print("tokenize known sequence:",tokenizer.tokenize(" . 
")) print("tokenize unknow sequence:",tokenizer.decode(tokenizer("谁 谁")['input_ids'])) print("Unknown token should be:",tokenizer.unk_token) print("But tokenize() maps the unknown tokens to the tokens and values below") tokenizer.tokenize("谁 谁"),tokenizer.convert_tokens_to_ids(tokenizer.tokenize("谁 谁")) ``` Step 3: Test abnormal behavior of adding space inside ``` print("Inconsistent input and output: a space is added") encoded=tokenizer.encode(tokenizer.tokenize("«è")) print("Encoded:",encoded)#only has two values besides start and end tokens but decoded to 3 tokens? print("Expected: <|startoftext|>«è <|endoftext|>. (no space between « and è)\nActual output:",tokenizer.decode(encoded)) ``` Step 4: Test how this leads to being able to encode and decode a non-English sentence back while it should become unknown tokens. It's not useful anyway because now the decoded sentence has spaces added inside words. ``` input="«è cosa ormai risaputa che a uno scapolo in possesso di un'ingente fortuna manchi soltanto una moglie. questa verità è cosí radicata nella mente delle famiglie del luoho che, nel momento in cui un simile personaggio viene a far parte del vicinato, prima ancora di conoscere anche lontanamente i suoi desiderî in proposito, viene immediatamente considerato come proprietà legittima di una o l'altra delle loro figlie.»orgoglio e pregiudizio è uno dei primi romanzi di jane austen. la scrittrice lo iniziò a ventun anni; il libro, rifiutato da un editore londinese, rimase in un cassetto fino alla sua pubblicazione anonima nel 1813, e da allora è considerato tra i piú importanti romanzi della letteratura inglese. 
è la storia delle cinque sorelle bennet e dei loro corteggiatori, con al centro il romantico contrasto tra l'adorabile e capricciosa elizabeth e l'altezzoso darcy; lo spirito di osservazione implacabile e quasi cinico, lo studio arguto dei caratteri, la satira delle vanità e delle debolezze della vita domestica, fanno di questo romanzo una delle piú efficaci e indimenticabili commedie di costume del periodo regency inglese." output=tokenizer.decode(tokenizer.encode(input)) print(output[:15],output[-14:]) output=output[15:-14]#removing start token and end token print('Input: ',input) print('Output:',output) ``` ### Expected behavior I'm dealing with a text dataset that is multilingual and wish to tell which sentence is non-English by counting the percentage of unk_tokens after tokenizing. Based on my understanding, tokenizer.encode(string) is equivalent to tokenizer.convert_tokens_to_ids(tokenizer.tokenize(string)) and should map tokens that are not in the vocab to tokenizer.unk_token. Also spaces are ignored during encoding and decoding will add spaces between tokens. However this is not the case, tokenize() seems to map them to some token in merges.txt (https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/merges.txt) and then map them into values according to vocab.json(https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json). Sometimes this seems to separate the text further into sub-tokens according to merges.txt and thus add spaces between them when decoding. This behavior is very annoying because it treats non-English text and English text in the same way, decodes the sentence back to its original form, but randomly adds spaces inside these non-English words. Thanks very much for your patience and help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20193/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20192/comments
https://api.github.com/repos/huggingface/transformers/issues/20192/events
https://github.com/huggingface/transformers/pull/20192
1,447,017,668
PR_kwDOCUB6oc5CyKDe
20,192
Typo on doctring in ElectraTokenizer
{ "login": "FacerAin", "id": 16442978, "node_id": "MDQ6VXNlcjE2NDQyOTc4", "avatar_url": "https://avatars.githubusercontent.com/u/16442978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FacerAin", "html_url": "https://github.com/FacerAin", "followers_url": "https://api.github.com/users/FacerAin/followers", "following_url": "https://api.github.com/users/FacerAin/following{/other_user}", "gists_url": "https://api.github.com/users/FacerAin/gists{/gist_id}", "starred_url": "https://api.github.com/users/FacerAin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FacerAin/subscriptions", "organizations_url": "https://api.github.com/users/FacerAin/orgs", "repos_url": "https://api.github.com/users/FacerAin/repos", "events_url": "https://api.github.com/users/FacerAin/events{/privacy}", "received_events_url": "https://api.github.com/users/FacerAin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20192). All of your documentation changes will be reflected on that endpoint.", "Thank you for the guide when I was confused with the copy check. 😊\r\nI add the part `,BERT->Electra` you said.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20192). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? Typo on docstring in [ElectraTokenizer](https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/src/transformers/models/electra/tokenization_electra.py#L93). This should be modified from **Bert** to **Electra** for readability. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20192/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20192", "html_url": "https://github.com/huggingface/transformers/pull/20192", "diff_url": "https://github.com/huggingface/transformers/pull/20192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20192.patch", "merged_at": 1668521420000 }
https://api.github.com/repos/huggingface/transformers/issues/20191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20191/comments
https://api.github.com/repos/huggingface/transformers/issues/20191/events
https://github.com/huggingface/transformers/pull/20191
1,446,998,406
PR_kwDOCUB6oc5CyGKv
20,191
fix convert longformer to onnx bug
{ "login": "SysuCharon", "id": 10196635, "node_id": "MDQ6VXNlcjEwMTk2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/10196635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SysuCharon", "html_url": "https://github.com/SysuCharon", "followers_url": "https://api.github.com/users/SysuCharon/followers", "following_url": "https://api.github.com/users/SysuCharon/following{/other_user}", "gists_url": "https://api.github.com/users/SysuCharon/gists{/gist_id}", "starred_url": "https://api.github.com/users/SysuCharon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SysuCharon/subscriptions", "organizations_url": "https://api.github.com/users/SysuCharon/orgs", "repos_url": "https://api.github.com/users/SysuCharon/repos", "events_url": "https://api.github.com/users/SysuCharon/events{/privacy}", "received_events_url": "https://api.github.com/users/SysuCharon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@deutschmn ", "_The documentation is not available anymore as the PR was closed or merged._", "cc @lewtun ", "As far as I know, Longformer, like RoBERTa, doesn't use token_type_ids see #9111 and https://huggingface.co/docs/transformers/model_doc/longformer", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Gently pinging @fxmarty in case he has bandwidth to take a look", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20191). All of your documentation changes will be reflected on that endpoint.", "I think\r\n\r\n> As far as I know, Longformer, like RoBERTa, doesn't use token_type_ids see https://github.com/huggingface/transformers/issues/9111 and https://huggingface.co/docs/transformers/model_doc/longformer\r\n\r\nis a good answer!\r\n\r\n@SysuCharon could you provide a code that you are expecting to see working but which does not?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "not stale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,677
1,677
CONTRIBUTOR
null
class `LongformerOnnxConfig` property `inputs` miss one input `"token_type_ids"` ```shell # before fix : onnx input node name: "token_type_ids" type { tensor_type { elem_type: 7 shape { dim { dim_value: 2 } dim { dim_value: 8 } } } } # after fix : onnx input node name: "token_type_ids" type { tensor_type { elem_type: 7 shape { dim { dim_param: "batch" } dim { dim_param: "sequence" } } } } ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20191/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20191", "html_url": "https://github.com/huggingface/transformers/pull/20191", "diff_url": "https://github.com/huggingface/transformers/pull/20191.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20191.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20190/comments
https://api.github.com/repos/huggingface/transformers/issues/20190/events
https://github.com/huggingface/transformers/pull/20190
1,446,946,119
PR_kwDOCUB6oc5Cx7kv
20,190
Add clip resources to the transformers documentation
{ "login": "ambujpawar", "id": 19887541, "node_id": "MDQ6VXNlcjE5ODg3NTQx", "avatar_url": "https://avatars.githubusercontent.com/u/19887541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ambujpawar", "html_url": "https://github.com/ambujpawar", "followers_url": "https://api.github.com/users/ambujpawar/followers", "following_url": "https://api.github.com/users/ambujpawar/following{/other_user}", "gists_url": "https://api.github.com/users/ambujpawar/gists{/gist_id}", "starred_url": "https://api.github.com/users/ambujpawar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ambujpawar/subscriptions", "organizations_url": "https://api.github.com/users/ambujpawar/orgs", "repos_url": "https://api.github.com/users/ambujpawar/repos", "events_url": "https://api.github.com/users/ambujpawar/events{/privacy}", "received_events_url": "https://api.github.com/users/ambujpawar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20190). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20190). All of your documentation changes will be reflected on that endpoint.", "Folks, please make sure resources which are added are talking about the particular model.\r\n\r\n" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Remove if not applicable --> Fixes #20055 (partially) ## Before submitting - [x] This PR improves the docs of CLIP by adding common and most used resources - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #20055 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? @stevhliu Please can you have a look?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20190/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20190", "html_url": "https://github.com/huggingface/transformers/pull/20190", "diff_url": "https://github.com/huggingface/transformers/pull/20190.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20190.patch", "merged_at": 1668536806000 }
https://api.github.com/repos/huggingface/transformers/issues/20189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20189/comments
https://api.github.com/repos/huggingface/transformers/issues/20189/events
https://github.com/huggingface/transformers/issues/20189
1,446,889,408
I_kwDOCUB6oc5WPcfA
20,189
Hanging in TextClassificationPipeline's prediction
{ "login": "amiralikaboli", "id": 41191410, "node_id": "MDQ6VXNlcjQxMTkxNDEw", "avatar_url": "https://avatars.githubusercontent.com/u/41191410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amiralikaboli", "html_url": "https://github.com/amiralikaboli", "followers_url": "https://api.github.com/users/amiralikaboli/followers", "following_url": "https://api.github.com/users/amiralikaboli/following{/other_user}", "gists_url": "https://api.github.com/users/amiralikaboli/gists{/gist_id}", "starred_url": "https://api.github.com/users/amiralikaboli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amiralikaboli/subscriptions", "organizations_url": "https://api.github.com/users/amiralikaboli/orgs", "repos_url": "https://api.github.com/users/amiralikaboli/repos", "events_url": "https://api.github.com/users/amiralikaboli/events{/privacy}", "received_events_url": "https://api.github.com/users/amiralikaboli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amiralikaboli Can you try setting `TOKENIZERS_PARALLELISM=0` before calling your script ?\r\n\r\nYou might be triggerring: https://github.com/huggingface/transformers/issues/5486\r\n\r\nBasically `tokenizers` does parallelism by default, but it can be messed up by other sources of parallelism, most cases are handled, but maybe you found a way to trigger the deadlock.\r\nUsing that might help at least make sure this is not the issue.\r\n\r\nAnd if the problem is still there, could you provide a simple reproducing script ?", "@Narsil Actually, It doesn't work. The script I use for training and predicting my model is similar to the above script. Do you want a script for a server/client which runs the model?", "If you could provide a simple (one file) script that's easily launchable to reproduce that'd be perfect yes.\r\n\r\nThe bug you're encountering is almost certainly a deadlock linked to multiple libs doing parallelism in some way hurting each other. Without the full script to reproduce it's hard to pinpoint though. Also are you on Mac or Linux (there's different default behavior for forking if my memory serves correctly).", "You are right about the deadlock. We loaded our model several times as a temporary solution, and it worked. But, it is not ideal because of more resource usage.\r\nProviding you with a simple script exactly based on our case is not possible since we used a private library over falcon.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,673
1,673
NONE
null
### System Info - `transformers` version: 4.22.2 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.8.15 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am using this class. It hangs forever after deploying and sending a request to my server (using falcon and gunicorn) that calls the `predict` function. However, when I call it in a simple script, everything is ok, and its predictions are returned. ```python class TrainStage: def __init__(self, config): self.config = config def fit(self): model_config = AutoConfig.from_pretrained(self.config.pretrained_model_path) self.tokenizer = AutoTokenizer.from_pretrained(self.config.pretrained_model_path) self.model = AutoModelForSequenceClassification.from_pretrained( self.config.pretrained_model_path, config=model_config ) training_args = TrainingArguments(...) trainer = Trainer(...) trainer.train() def transform(self, texts: List[str]): pipeline = TextClassificationPipeline(model=self.model, tokenizer=self.tokenizer) results = pipeline(texts) return results ``` ### Expected behavior Returning predictions
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20189/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20188/comments
https://api.github.com/repos/huggingface/transformers/issues/20188/events
https://github.com/huggingface/transformers/pull/20188
1,446,867,210
PR_kwDOCUB6oc5CxrOD
20,188
Fix a typo in examples/pytorch/translation/README.md
{ "login": "Nietism", "id": 14136473, "node_id": "MDQ6VXNlcjE0MTM2NDcz", "avatar_url": "https://avatars.githubusercontent.com/u/14136473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nietism", "html_url": "https://github.com/Nietism", "followers_url": "https://api.github.com/users/Nietism/followers", "following_url": "https://api.github.com/users/Nietism/following{/other_user}", "gists_url": "https://api.github.com/users/Nietism/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nietism/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nietism/subscriptions", "organizations_url": "https://api.github.com/users/Nietism/orgs", "repos_url": "https://api.github.com/users/Nietism/repos", "events_url": "https://api.github.com/users/Nietism/events{/privacy}", "received_events_url": "https://api.github.com/users/Nietism/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20188). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,679
1,668
CONTRIBUTOR
null
# What does this PR do? There is typo in the original hyperlink. Below is the original version: ```markdown Based on the script [`run_translation_no_trainer.py`] (https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/ **run_translationn_no_trainer.py**) ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20188/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20188/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20188", "html_url": "https://github.com/huggingface/transformers/pull/20188", "diff_url": "https://github.com/huggingface/transformers/pull/20188.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20188.patch", "merged_at": 1668448383000 }
https://api.github.com/repos/huggingface/transformers/issues/20187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20187/comments
https://api.github.com/repos/huggingface/transformers/issues/20187/events
https://github.com/huggingface/transformers/issues/20187
1,446,777,217
I_kwDOCUB6oc5WPBGB
20,187
How to train my own dataset?
{ "login": "Arsmart1", "id": 49458769, "node_id": "MDQ6VXNlcjQ5NDU4NzY5", "avatar_url": "https://avatars.githubusercontent.com/u/49458769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arsmart1", "html_url": "https://github.com/Arsmart1", "followers_url": "https://api.github.com/users/Arsmart1/followers", "following_url": "https://api.github.com/users/Arsmart1/following{/other_user}", "gists_url": "https://api.github.com/users/Arsmart1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arsmart1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arsmart1/subscriptions", "organizations_url": "https://api.github.com/users/Arsmart1/orgs", "repos_url": "https://api.github.com/users/Arsmart1/repos", "events_url": "https://api.github.com/users/Arsmart1/events{/privacy}", "received_events_url": "https://api.github.com/users/Arsmart1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You could start by checking out the [course](https://huggingface.co/course/chapter1/1). Also please use the [forums](https://discuss.huggingface.co/) for questions like this, since we keep features for bugs and feature requests only.\r\n\r\nYou could start by checking out the [course](https://huggingface.co/course/chapter1/1). Also please use the [forums](https://discuss.huggingface.co/) for questions like this, since we keep features for bugs and feature requests only.\r\n\r\nYou could start by checking out the [course](https://huggingface.co/course/chapter1/1). Also please use the [forums](https://discuss.huggingface.co/) for questions like this, since we keep features for bugs and feature requests only." ]
1,668
1,668
1,668
NONE
null
### Feature request Could you privide a tutorial for training the whole model using my own dataset? I am a new learner and very confused. Thanks! ### Motivation Could you privide a tutorial for training the whole model using my own dataset? I am a new learner and very confused. Thanks! ### Your contribution Could you privide a tutorial for training the whole model using my own dataset? I am a new learner and very confused. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20187/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20186/comments
https://api.github.com/repos/huggingface/transformers/issues/20186/events
https://github.com/huggingface/transformers/issues/20186
1,446,732,120
I_kwDOCUB6oc5WO2FY
20,186
Weird bugs when using ```run_image_classification.py```
{ "login": "dxlong2000", "id": 54766384, "node_id": "MDQ6VXNlcjU0NzY2Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/54766384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dxlong2000", "html_url": "https://github.com/dxlong2000", "followers_url": "https://api.github.com/users/dxlong2000/followers", "following_url": "https://api.github.com/users/dxlong2000/following{/other_user}", "gists_url": "https://api.github.com/users/dxlong2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/dxlong2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dxlong2000/subscriptions", "organizations_url": "https://api.github.com/users/dxlong2000/orgs", "repos_url": "https://api.github.com/users/dxlong2000/repos", "events_url": "https://api.github.com/users/dxlong2000/events{/privacy}", "received_events_url": "https://api.github.com/users/dxlong2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nCould you check whether [this part of the code](https://github.com/huggingface/transformers/blob/2308f3d42cff281cecee413f97f19044f54636d7/examples/pytorch/image-classification/run_image_classification.py#L231-L241) runs fine?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### System Info Dear authors, Thanks for your great work. I had a small dataset as I share with you here: https://drive.google.com/drive/folders/1-GAFdH0S16SlYXPdOwDmqijCrpIeeLbm?usp=sharing. When I trained a ViT for image classification from ```run_image_classification.py```, my command as I attached: ``` !CUDA_DIVISIBLE_DEVICES=0, python3 run_image_classification.py \ --train_dir $TRAIN_DIR \ --output_dir $OUTPUT_DIR \ --remove_unused_columns False \ --do_train \ --do_eval \ --learning_rate 2e-5 \ --num_train_epochs 10 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --save_total_limit 3 \ --seed 1337 \ --overwrite_output_dir ``` I got an error: ``` Traceback (most recent call last): File "run_image_classification.py", line 388, in <module> main() File "run_image_classification.py", line 240, in main task="image-classification", File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1681, in load_dataset **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1453, in load_dataset_builder data_files=data_files, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1089, in dataset_module_factory download_mode=download_mode, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 701, in get_module base_path=base_path, File "/usr/local/lib/python3.7/dist-packages/datasets/data_files.py", line 801, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/usr/local/lib/python3.7/dist-packages/datasets/data_files.py", line 763, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/usr/local/lib/python3.7/dist-packages/datasets/data_files.py", line 368, in resolve_patterns_locally_or_by_urls raise FileNotFoundError(error_msg) FileNotFoundError: Unable to 
resolve any data file that matches '['/content/drive/MyDrive/EOS/OVO/AF/**']' at /content/drive/MyDrive/EOS/codes ``` while if I do: ``` os.listdir("/content/drive/MyDrive/EOS/OVO/AF/") ``` I got: ``` ['Altered_material', 'Free_crystal', 'Lithic', 'Juvenile'] ``` And all the images are still there. May I ask if you have any advice? Thanks! I look forward hearing you asap! ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction As above ### Expected behavior As above
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20186/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20185/comments
https://api.github.com/repos/huggingface/transformers/issues/20185/events
https://github.com/huggingface/transformers/issues/20185
1,446,613,943
I_kwDOCUB6oc5WOZO3
20,185
Difficult to understand warning
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Updating the config online would make them incompatible with previous versions of Transformers so that's not a possibility. I'm fine with downgrading the warnings to infos.", "I wonder why that warning is being printed twice, one time for \"shortest_edge\" and one time for \"height, width\"\r\n", "I'll open a PR to downgrade the warnings. I agree the message isn't very clear either - I'll open up another PR to tidy this up. \r\n\r\n@NielsRogge This is likely because there's two parameters `size` and `crop_size` that are being converted to a dict - makes the point about the warnings being unclear quite well! " ]
1,668
1,668
1,668
MEMBER
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import CLIPImageProcessor processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14") ``` => this throws the following warning: ``` The size parameter should be a dictionary with keys ('height', 'width'), ('shortest_edge', 'longest_edge') or ('shortest_edge',) got 224. Setting as {'shortest_edge': 224}. The size parameter should be a dictionary with keys ('height', 'width'), ('shortest_edge', 'longest_edge') or ('shortest_edge',) got 224. Setting as {'height': 224, 'width': 224}. ``` ### Expected behavior I'm wondering whether it would be better to for now downgrade the warning here: https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/src/transformers/image_processing_utils.py#L500 to just `logger.info(...)` The `openai/clip-vit-large-patch14` model is used **a lot** in downstream applications (CompVis, diffusers) and the thrown warning here is maybe a bit too much and scares the user? Or should we aggressively update all the configs of CLIP models on the Hub in order to make the warning go away we would have to update the config online of a lot of clip models..
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20185/timeline
completed
null
null