| Field | Dtype | Values / lengths |
| --- | --- | --- |
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/18876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18876/comments
https://api.github.com/repos/huggingface/transformers/issues/18876/events
https://github.com/huggingface/transformers/issues/18876
1,360,727,372
I_kwDOCUB6oc5RGw1M
18,876
BigBirdTokenizer Serialization Error on Spark
{ "login": "jmwoloso", "id": 7530947, "node_id": "MDQ6VXNlcjc1MzA5NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmwoloso", "html_url": "https://github.com/jmwoloso", "followers_url": "https://api.github.com/users/jmwoloso/followers", "following_url": "https://api.github.com/users/jmwoloso/following{/other_user}", "gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions", "organizations_url": "https://api.github.com/users/jmwoloso/orgs", "repos_url": "https://api.github.com/users/jmwoloso/repos", "events_url": "https://api.github.com/users/jmwoloso/events{/privacy}", "received_events_url": "https://api.github.com/users/jmwoloso/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @jmwoloso Thank you for reporting the issue. I have a few questions though: I see there is [fast big_bird tokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/tokenization_big_bird_fast.py) in `transformers`. Have you tried it?", "Hi @ydshieh. No I havne't tried that directly. I tried the `AutoTokenizer` which uses the Fast tokenizers by default (so I assume the behavior will be the same), but the problem with the fast tokenizers is that you cannot feed them pre-tokenized text like you can the vanilla tokenizers, and I need to do that for my use case.", "So it seems we don't need a fast tokenizer implementation, but the fast tokenizers actually need feature parity with the vanilla tokenizers.", "Unless using the `BigBirdTokenizerFast` directly is the solution, rather than `AutoTokenizer`?", "Possibly related [#1045](https://github.com/huggingface/tokenizers/issues/1045)", "I was able to come up with a workaround @ydshieh, but it still feels like the fast tokenizers should behave the same as the standard tokenizers (i.e. accept tokenized inputs)\r\n\r\nthe solution I came up with tries to load the tokenizer from the local node where the udf is being run. if it doesn't exist, we download it, and save it for future calls to make use of. \r\n\r\nexpanding upon `create_tokens` in my reprex above:\r\n```python\r\n\r\ndef create_tokens(text):\r\n from transformers import BigBirdTokenizer\r\n model = \"google/bigbird-roberta-base\"\r\n try:\r\n # try to access it locally first\r\n tokenizer = BigBirdTokenizer.from_pretrained(f\"/{model}\")\r\n except Exception:\r\n tokenizer = BigBirdTokenizer.from_pretrained(model)\r\n tokenizer.save_pretrained(f\"/{model}\")\r\n return tokenizer.encode_plus(text)[\"input_ids\"]\r\n\r\n```", "@jmwoloso I am glad you found a workaround!\r\n\r\nBut for me to understand the problem a bit better, on the remote nodes (hopefully this term makes sense - I am not really familiar with Spark ecosystem), they are not able to download/load the tokenizer from `google/bigbird-roberta-base`?\r\n\r\nAnd if we download and save it (from the local node where the udf is being run), it could be loaded (through a local directory?) when the code is running on the remote nodes?\r\n\r\n----\r\n\r\nRegarding the pre-tokenization in fast tokenizers, probably there are some limitations in `tokenizers` (which is written in `rust`) and is designed to be so. I am not very familiar with that libraries. If you would like to, maybe you can open a feature request in [tokenizers](https://github.com/huggingface/tokenizers), with a description of the current limitation you encounter.", "Hi @ydshieh thank you for the quick reply. The issue I linked to above in the tokenizers library captures the limitation I believe, so I can follow up on that thread to see about what is possible there.\r\n\r\nAnd yes \"remote nodes\" is a fine term to use, we typically think about Spark clusters in terms of the driver node (the node where our notebooks run) and the worker nodes (which take a serialized form of the commands we specify on the driver node and use those to do work on their slice of the data).\r\n\r\nSpark also has user-defined functions (UDFs) which are python functions that can be serialized and sent to the worker nodes so they have what they need to operate on data. I have tried instantiating the tokenizer within the udf, but a spark cluster has 200 partitions (jobs) by default and you can adjust that to suit your workload. 
The problem is that each job will call it's version of the udf which results (in this case) in 200 calls to download the tokenizer and we get an error that we've made too many requests to the site.\r\n\r\nThe alternative to this is to use partial application with the udf. So create the tokenizer a single time on the driver node and then we use partial application to apply the tokenizer to the udf so that the tokenizer gets shipped along with the udf to each of the workers. \r\n\r\nThis works just fine for (as far as I can tell) the non-sentencepiece based tokenizers. The spiece tokenizers however, seem to try and reload the model after they've been sent to the workers. The problem is that the cache exists on the driver node, but not on the worker nodes, so the tokenizer tries to load its config again and the cache doesn't exist so the job fails.\r\n\r\nMy solution above does exactly what you suggest, I try to access the tokenizer from the local file system of the worker node first and if that fails, I download it and save the config on the worker node so that subsequent calls to the udf use the locally saved tokenizer instead of making the api call to download it.", "Thank you @jmwoloso for the patience and effort to answer my question! \r\n\r\nActually, I was not giving suggestion in my last reply, but just to understand better your workaround/solution :-)\r\n\r\n> The problem is that each job will call it's version of the udf which results (in this case) in 200 calls to download the tokenizer and we get an error that we've made too many requests to the site.\r\n\r\n> I download it and save the config on the worker node so that subsequent calls to the udf use the locally saved tokenizer instead of making the api call to download it.\r\n\r\nIf you still have a bit time, one last question (as I am a bit confused here): \r\n\r\nall those ~200 jobs can access the locally saved tokenizer, downloaded only once on a particular worker? Or each worker still has to download it (so totally ~200 downloads) separately, but subsequent calls on each node won't download it again? \r\n\r\nFrom the description, I believe you mean the former. But I am a bit surprised that all workers can access the same file system.\r\n\r\nFinally, I am going to close the issue as you already provide a working solution (and the issue is not really a bug in the library). Hope this is OK for you.\r\n\r\n ", "> all those ~200 jobs can access the locally saved tokenizer, downloaded only once on a particular worker?\r\n\r\nHi @ydshieh Yes, this. One download per worker that any jobs on that worker can then access if my solution above is used.\r\n\r\n\r\n> Finally, I am going to close the issue as you already provide a working solution (and the issue is not really a bug in the library). Hope this is OK for you.\r\n\r\nI'm torn on this because I would expect that all tokenizers behave the same and could be interoperable in a distributed setting without having to make the fix above, but the `spiece` ones behave differently. But having said that, the solution above would work for all tokenizers `spiece` and otherwise. It doesn't feel so much like a bug, just unexpected/inconsistent behavior among tokenizers within the library. Is this something I could add to like a \"recipes\" section in the documentation as a pattern to be used in distributed settings? I'd be happy to work on that if you think it would be useful, but otherwise, I'm fine with closing this issue. I appreciate your help!\r\n\r\n", "Thanks again, @jmwoloso . 
Let's keep this issue open for now, and I will discuss with 2 colleagues regarding the documentation.", "Sounds great @ydshieh, thank you!", "Hi, @jmwoloso \r\n\r\nAfter a discussion with one of my colleagues @Narsil, we think this scenario is too specific to be documented in the doc. I think, for now, having your workaround on this GitHub issue page is already very good and helpful for others having the same issue :-) Thanks again.\r\n\r\nRegarding slow/fast tokenizer, I left [a comment](https://github.com/huggingface/tokenizers/issues/1045#issuecomment-1257802082) :-)\r\n ", "Thank you @ydshieh! " ]
1,662
1,664
1,664
CONTRIBUTOR
null
### System Info Hello! Running `transformers-cli env` throws the following error for me on the databricks cluster that I'm using currently (but it works locally). ```python /databricks/conda/envs/default/lib/python3.8/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.10) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " Traceback (most recent call last): File "/databricks/conda/envs/prism_ai/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "/databricks/conda/envs/prism_ai/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 26, in <module> from .user import UserCommands File "/databricks/conda/envs/prism_ai/lib/python3.8/site-packages/transformers/commands/user.py", line 20, in <module> from huggingface_hub.hf_api import HfFolder, create_repo, login, logout, whoami ImportError: cannot import name 'login' from 'huggingface_hub.hf_api' (/databricks/conda/envs/prism_ai/lib/python3.8/site-packages/huggingface_hub/hf_api.py) ``` I can provide some of the info that I know (we maintain the container that we use): ``` transformers: 4.17.0 python: 3.8.12 pt version: 1.11.0+cu113 tf version: 2.8.0 flax: N/A jax: N/A jaxlib: N/A using gpu: nope using distributed: yes, via pyspark udf ``` The reprex below this produces the following error: ```bash --------------------------------------------------------------------------- ... /databricks/spark/python/pyspark/sql/dataframe.py in count(self) 668 2 669 """ --> 670 return int(self._jdf.count()) 671 672 def collect(self): /databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args) 1302 1303 answer = self.gateway_client.send_command(command) -> 1304 return_value = get_return_value( 1305 answer, self.gateway_client, self.target_id, self.name) 1306 /databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw) 121 # Hide where the exception came from that shows a non-Pythonic 122 # JVM exception message. --> 123 raise converted from None 124 else: 125 raise PythonException: An exception was thrown from a UDF: 'pyspark.serializers.SerializationError: Caused by Traceback (most recent call last): File "/databricks/spark/python/pyspark/serializers.py", line 165, in _read_with_length return self.loads(obj) File "/databricks/spark/python/pyspark/serializers.py", line 469, in loads return pickle.loads(obj, encoding=encoding) File "/databricks/python/lib/python3.8/site-packages/transformers/models/big_bird/tokenization_big_bird.py", line 164, in __setstate__ self.sp_model.Load(self.vocab_file) File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load return self.LoadFromFile(model_file) File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) OSError: Not found: "/.cache/huggingface/d318d7bb69cafb1d8964fc87515592ac3092a2c8fdb305068f9ba4020df3ee3b.271d467a9adc15fb44348481bc75c48b63cba0fd4934bc5377d63a63de052c45": No such file or directory Error #2'. 
Full traceback below: Traceback (most recent call last): File "/databricks/spark/python/pyspark/serializers.py", line 165, in _read_with_length return self.loads(obj) File "/databricks/spark/python/pyspark/serializers.py", line 469, in loads return pickle.loads(obj, encoding=encoding) File "/databricks/python/lib/python3.8/site-packages/transformers/models/big_bird/tokenization_big_bird.py", line 164, in __setstate__ self.sp_model.Load(self.vocab_file) File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load return self.LoadFromFile(model_file) File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) OSError: Not found: "/.cache/huggingface/d318d7bb69cafb1d8964fc87515592ac3092a2c8fdb305068f9ba4020df3ee3b.271d467a9adc15fb44348481bc75c48b63cba0fd4934bc5377d63a63de052c45": No such file or directory Error #2 During handling of the above exception, another exception occurred: pyspark.serializers.SerializationError: Caused by Traceback (most recent call last): File "/databricks/spark/python/pyspark/serializers.py", line 165, in _read_with_length return self.loads(obj) File "/databricks/spark/python/pyspark/serializers.py", line 469, in loads return pickle.loads(obj, encoding=encoding) File "/databricks/python/lib/python3.8/site-packages/transformers/models/big_bird/tokenization_big_bird.py", line 164, in __setstate__ self.sp_model.Load(self.vocab_file) File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load return self.LoadFromFile(model_file) File "/databricks/python/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) OSError: Not found: "/.cache/huggingface/d318d7bb69cafb1d8964fc87515592ac3092a2c8fdb305068f9ba4020df3ee3b.271d467a9adc15fb44348481bc75c48b63cba0fd4934bc5377d63a63de052c45": No such file or directory Error #2 ``` I believe this is related to #15982. If this is related, it seems we need fast tokenizers for `Marian` and now also `BigBird`. Is anyone working on that currently? If not I could take a stab at `BigBird` (or maybe we just need to add `BigBird` functionality to `PreTrainedTokenizerFast`?) ### Who can help? BigBird: @ydshieh Marian: @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Here is a (simplified) reprex of what my function(s) look like and I'm running this on a multi-node databricks cluster on azure. ```python import functools from pyspark.sql import functions as F, types as T from transformers import BigBirdTokenizer tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base") return_type = T.ArrayType(T.IntegerType()) def create_tokens(tokenizer, text): return tokenizer.encode_plus(text)["input_ids"] create_tokens_partial = functools.partial(create_tokens, tokenizer=tokenizer) create_tokens_udf = F.udf(lambda text: create_tokens_partial(text=text, returnType=return_type) df = df.withColumn("InputIds", create_tokens_udf(F.column("Text"))) df.cache().count() ``` ### Expected behavior Serialization of the `BigBirdTokenizer` that can be shipped off to worker nodes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18876/timeline
completed
null
null
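The reprex in the body above has a small syntax slip: the `returnType` keyword ended up inside the lambda call and a closing parenthesis is missing. Below is a minimal corrected sketch that also folds in the per-worker caching workaround from the comments (which makes the `functools.partial` step unnecessary, since the tokenizer is built inside the UDF); it assumes `df` is an existing Spark DataFrame with a `Text` column.

```python
from pyspark.sql import functions as F, types as T

return_type = T.ArrayType(T.IntegerType())

def create_tokens(text):
    # Import inside the UDF so workers never need the driver's pickled tokenizer.
    from transformers import BigBirdTokenizer

    model = "google/bigbird-roberta-base"
    try:
        # Reuse a copy already saved on this worker's local filesystem.
        tokenizer = BigBirdTokenizer.from_pretrained(f"/{model}")
    except Exception:
        # First call on this worker: download once, then cache locally.
        tokenizer = BigBirdTokenizer.from_pretrained(model)
        tokenizer.save_pretrained(f"/{model}")
    return tokenizer.encode_plus(text)["input_ids"]

# returnType belongs to F.udf itself; the original reprex accidentally passed
# it inside the lambda and dropped a closing parenthesis.
create_tokens_udf = F.udf(create_tokens, returnType=return_type)

df = df.withColumn("InputIds", create_tokens_udf(F.column("Text")))
df.cache().count()
```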
https://api.github.com/repos/huggingface/transformers/issues/18875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18875/comments
https://api.github.com/repos/huggingface/transformers/issues/18875/events
https://github.com/huggingface/transformers/pull/18875
1,360,725,340
PR_kwDOCUB6oc4-Tyzy
18,875
Disable model checkpoint sharding of large models for SageMaker Model Parallel
{ "login": "viclzhu", "id": 20961977, "node_id": "MDQ6VXNlcjIwOTYxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/20961977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/viclzhu", "html_url": "https://github.com/viclzhu", "followers_url": "https://api.github.com/users/viclzhu/followers", "following_url": "https://api.github.com/users/viclzhu/following{/other_user}", "gists_url": "https://api.github.com/users/viclzhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/viclzhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viclzhu/subscriptions", "organizations_url": "https://api.github.com/users/viclzhu/orgs", "repos_url": "https://api.github.com/users/viclzhu/repos", "events_url": "https://api.github.com/users/viclzhu/events{/privacy}", "received_events_url": "https://api.github.com/users/viclzhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @philschmid ", "Ok, sounds good! I opened a new PR with changes in Trainer instead. #18928 " ]
1,662
1,662
1,662
CONTRIBUTOR
null
Disable model checkpoint sharding of large models for SageMaker Model Parallel * SageMaker Model Parallel does not support loading these # What does this PR do? This PR disables the automatic model checkpoint sharding done in `PreTrainedModel` for SageMaker Model Parallel as SMP does not support loading these. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18875", "html_url": "https://github.com/huggingface/transformers/pull/18875", "diff_url": "https://github.com/huggingface/transformers/pull/18875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18875.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18874/comments
https://api.github.com/repos/huggingface/transformers/issues/18874/events
https://github.com/huggingface/transformers/pull/18874
1,360,558,576
PR_kwDOCUB6oc4-TQLp
18,874
Remove unused `cur_len` in generation_utils.py
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @patrickvonplaten - please review this one when you get some time.", "Nice!" ]
1,662
1,664
1,664
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Removes unused `cur_len` in `generation_utils.py` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18874/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18874", "html_url": "https://github.com/huggingface/transformers/pull/18874", "diff_url": "https://github.com/huggingface/transformers/pull/18874.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18874.patch", "merged_at": 1664267971000 }
https://api.github.com/repos/huggingface/transformers/issues/18873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18873/comments
https://api.github.com/repos/huggingface/transformers/issues/18873/events
https://github.com/huggingface/transformers/issues/18873
1,360,410,207
I_kwDOCUB6oc5RFjZf
18,873
LayoutLMV3 Tokenizer Inserts Odd Characters
{ "login": "logan-markewich", "id": 22285038, "node_id": "MDQ6VXNlcjIyMjg1MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/22285038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/logan-markewich", "html_url": "https://github.com/logan-markewich", "followers_url": "https://api.github.com/users/logan-markewich/followers", "following_url": "https://api.github.com/users/logan-markewich/following{/other_user}", "gists_url": "https://api.github.com/users/logan-markewich/gists{/gist_id}", "starred_url": "https://api.github.com/users/logan-markewich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/logan-markewich/subscriptions", "organizations_url": "https://api.github.com/users/logan-markewich/orgs", "repos_url": "https://api.github.com/users/logan-markewich/repos", "events_url": "https://api.github.com/users/logan-markewich/events{/privacy}", "received_events_url": "https://api.github.com/users/logan-markewich/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@logan-markewich -- have a look at this [entry in our forum](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2?u=joaogante)", "@gante Yea ok, that gives some context for what is actually going on\r\n\r\nI guess I will just need to remove those characters myself? A little annoying, but oh well", "Hi,\r\n\r\nNo that's just the way tokenization happens. The tokenization is based on Byte Pair Encoding (BPE) algorithm. This is also used by RoBERTa for instance. \r\n\r\nYou can check out the Huggingface course if you want to learn about this: https://huggingface.co/course/chapter6/5?fw=pt" ]
1,662
1,663
1,663
NONE
null
### System Info (This is on a fresh google colab instance) - `transformers` version: 4.21.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import LayoutLMv3TokenizerFast tok = LayoutLMv3TokenizerFast.from_pretrained('microsoft/layoutlmv3-base') tok.tokenize('Hello world') >>> ['ĠHello', 'Ġworld'] ``` ### Expected behavior The special characters are not expected when calling tokenize(). This doesn't happen when using the LayoutLMV2 tokenizer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18873/timeline
completed
null
null
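A short sketch illustrating the explanation given in the comments: `Ġ` is the byte-level BPE marker for a preceding space, not corruption, and converting the tokens back to a string removes it again (the exact leading whitespace may vary).

```python
from transformers import LayoutLMv3TokenizerFast

tok = LayoutLMv3TokenizerFast.from_pretrained("microsoft/layoutlmv3-base")

tokens = tok.tokenize("Hello world")
print(tokens)  # ['ĠHello', 'Ġworld']: 'Ġ' marks a preceding space in byte-level BPE

# Round-tripping through convert_tokens_to_string strips the markers again.
print(tok.convert_tokens_to_string(tokens))
```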
https://api.github.com/repos/huggingface/transformers/issues/18872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18872/comments
https://api.github.com/repos/huggingface/transformers/issues/18872/events
https://github.com/huggingface/transformers/issues/18872
1,360,385,254
I_kwDOCUB6oc5RFdTm
18,872
We want to submit a PR about Source-Free compression training on a huggingface NLP model. Where would you suggest submitting it?
{ "login": "leiqing1", "id": 54695910, "node_id": "MDQ6VXNlcjU0Njk1OTEw", "avatar_url": "https://avatars.githubusercontent.com/u/54695910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leiqing1", "html_url": "https://github.com/leiqing1", "followers_url": "https://api.github.com/users/leiqing1/followers", "following_url": "https://api.github.com/users/leiqing1/following{/other_user}", "gists_url": "https://api.github.com/users/leiqing1/gists{/gist_id}", "starred_url": "https://api.github.com/users/leiqing1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leiqing1/subscriptions", "organizations_url": "https://api.github.com/users/leiqing1/orgs", "repos_url": "https://api.github.com/users/leiqing1/repos", "events_url": "https://api.github.com/users/leiqing1/events{/privacy}", "received_events_url": "https://api.github.com/users/leiqing1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @leiqing1 ! Thanks a lot for being motivated to contribute 🤗 ! You mention this is about training, right? In this case, I would say `huggingface/transformers` is the place to go. But let's wait the opinions from my colleagues @sgugger @LysandreJik and @michaelbenayoun\r\n\r\nAs the training (for PyTorch models) is done by using the [Trainer class](https://github.com/huggingface/transformers/blob/ecdf9b06bc03af272ceb8d6951e30e677fdfd35c/src/transformers/trainer.py#L223), the PR is likely to involve the integration your compression method into that class.\r\n", "Let's maybe start with a simple training script that you could add as a new research project in the repo?", "Thanks for the reply. @ydshieh Yes, the compression process is about training. You mean we can prepare a simple training script to submit to https://github.com/huggingface/transformers/tree/main/examples/research_projects this directory?\r\n@sgugger ", "Yes, alongside with the extra modules you might need, all in the same folder.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,662
1,665
1,665
NONE
null
### Feature request Hello, we have implemented a source-free compression training function, and the benefits on bert-base-cased are shown below. We want to submit a PR; is that OK, and which repo, huggingface/transformers or huggingface/optimum, is more suitable? ![image](https://user-images.githubusercontent.com/54695910/188194251-ef204386-e1fc-40a3-9796-36a2fa5b0a56.png) ![image](https://user-images.githubusercontent.com/54695910/188194560-171f1c31-1828-4272-9cc6-df241b2fb45d.png) ### Motivation Improve the inference speed of NLP models. ### Your contribution We want to submit a PR for huggingface~
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18872/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18871/comments
https://api.github.com/repos/huggingface/transformers/issues/18871/events
https://github.com/huggingface/transformers/pull/18871
1,360,323,238
PR_kwDOCUB6oc4-Sev_
18,871
Further reduce the number of calls to head for cached objects
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "and also cc @Wauplin :)", "Yes, my plan was to port this to `hugginface_hub` next, along with the commi_hash argument (which does not exist there yet), to then be able to use the function of `huggingface_hub` after the next release!\r\n\r\nThanks for the reviews, will address comments later this morning." ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? This PR completes #18534 and leverages the caching of files that do not exist at a given commit in a repo, introduced in the last release of `huggingface_hub` (by this [PR](https://github.com/huggingface/huggingface_hub/pull/986)), to further reduce the number of API calls made when loading configurations/models/tokenizers/pipelines: down to just 1 call **every time** the object is cached and the current commit matches the distant repo for the given revision. cc @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18871", "html_url": "https://github.com/huggingface/transformers/pull/18871", "diff_url": "https://github.com/huggingface/transformers/pull/18871.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18871.patch", "merged_at": 1662482078000 }
https://api.github.com/repos/huggingface/transformers/issues/18870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18870/comments
https://api.github.com/repos/huggingface/transformers/issues/18870/events
https://github.com/huggingface/transformers/issues/18870
1,360,296,809
I_kwDOCUB6oc5RFHtp
18,870
mlflow can log a maximum of 100 parameters on Azure ML
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note that we are not maintaining the external logging platforms ourselves, their creators are supposed to do this work. Since there is no one in Microsoft that is pushing this forward, some community contributed callbacks have been created, but it's not guaranteed that they work. I would recommend using other trackers that are better maintained until some folks at MlFlow/Azure put the work for a nice integration :-)", "I'll just make my own callback. Thanks!", "Hi nbroad1881, \r\n\r\nI am facing the same problem that you have mentioned at the beginning, however, I am not sure about the workaround and its bit urgent. Can you please help me how were you able to have your own callback? This will be of great help.\r\n\r\nRegards,\r\nDev", "@dks198 Something you might be able to do for a custom AzureMLCallback once you've defined your trainers:\r\n\r\nfrom transformers import TrainerCallback\r\nimport importlib.util\r\n\r\n------------------------------------------------\r\n\r\ndef is_azureml_available():\r\n if importlib.util.find_spec(\"azureml\") is None:\r\n return False\r\n if importlib.util.find_spec(\"azureml.core\") is None:\r\n return False\r\n return importlib.util.find_spec(\"azureml.core.run\") is not None\r\n\r\nclass AzureMLCallback(TrainerCallback):\r\n \"\"\"\r\n A [`TrainerCallback`] that sends the logs to [AzureML](https://pypi.org/project/azureml-sdk/).\r\n \"\"\"\r\n\r\n def __init__(self, azureml_run=None):\r\n if not is_azureml_available():\r\n raise RuntimeError(\"AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`.\")\r\n self.azureml_run = azureml_run\r\n\r\n def on_init_end(self, args, state, control, **kwargs):\r\n from azureml.core.run import Run\r\n\r\n if self.azureml_run is None and state.is_world_process_zero:\r\n self.azureml_run = Run.get_context()\r\n\r\n def on_log(self, args, state, control, logs=None, **kwargs):\r\n if self.azureml_run and state.is_world_process_zero:\r\n for k, v in logs.items():\r\n if isinstance(v, (int, float)):\r\n self.azureml_run.log(k, v, description=k)\r\n\r\n-----------------------------------------------\r\n\r\ntrainer.add_callback(AzureMLCallback)\r\n\r\ntrainer.train()", "I believe AzureML logging is being deprecated for mlflow. \r\n\r\n@dks198, you can use the [answer](https://github.com/huggingface/accelerate/pull/675#discussion_r974482489) I gave in the other thread. You can just limit the number of parameters logged.", "Hi,\n\nI tried implementing the code you have shared for my Bert model training\nhowever, I still get the same error message. I am trying to figure out how\nto implement this so that I can get through this error. 
I have attached the\nerror message for your reference.\n\nRegards,\nDev\n\nOn Tue, Sep 20, 2022 at 1:52 PM giozinzi ***@***.***> wrote:\n\n> @dks198 <https://github.com/dks198> Something you might be able to do for\n> a custom AzureMLCallback once you've defined your trainers:\n>\n> from transformers import TrainerCallback\n> import importlib.util\n> ------------------------------\n>\n> def is_azureml_available():\n> if importlib.util.find_spec(\"azureml\") is None:\n> return False\n> if importlib.util.find_spec(\"azureml.core\") is None:\n> return False\n> return importlib.util.find_spec(\"azureml.core.run\") is not None\n>\n> class AzureMLCallback(TrainerCallback):\n> \"\"\"\n> A [TrainerCallback] that sends the logs to AzureML\n> <https://pypi.org/project/azureml-sdk/>.\n> \"\"\"\n>\n> def __init__(self, azureml_run=None):\n> if not is_azureml_available():\n> raise RuntimeError(\"AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`.\")\n> self.azureml_run = azureml_run\n>\n> def on_init_end(self, args, state, control, **kwargs):\n> from azureml.core.run import Run\n>\n> if self.azureml_run is None and state.is_world_process_zero:\n> self.azureml_run = Run.get_context()\n>\n> def on_log(self, args, state, control, logs=None, **kwargs):\n> if self.azureml_run and state.is_world_process_zero:\n> for k, v in logs.items():\n> if isinstance(v, (int, float)):\n> self.azureml_run.log(k, v, description=k)\n>\n> ------------------------------\n>\n> trainer.add_callback(AzureMLCallback)\n>\n> trainer.train()\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/18870#issuecomment-1252898595>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/A3FF72LYYCFYTOIPM2LYXS3V7IPW7ANCNFSM6AAAAAAQDK7Y2I>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n", "AzureML recently raised the limit to the number of parameters that can be logged per mlflow run to 200. This should unblock using HF autolog in the issue raised initially. That change has been rolled out earlier in October. By now, you should be able to drop the workaround and just use HF autolog with mlflow in AzureML." ]
1,662
1,665
1,662
CONTRIBUTOR
null
When trying to use the Trainer's mlflow integration within Azure ML, it will fail at the `on_train_begin` callback because it tries to log all of the TrainingArguments and the model config which will total more than 100 parameters. > RestException: INVALID_PARAMETER_VALUE: Response: {'Error': {'Code': 'ValidationError', 'Severity': None, 'Message': 'A field of the entity is over the size limit. FieldName=Parameters, Limit=100, Size=175. See https://aka.ms/azure-machine-learning-limits for service limits documentation.', 'MessageFormat': None, 'MessageParameters': None, 'ReferenceCode': None, 'DetailsUri': None, 'Target': None, 'Details': [], 'InnerError': None, 'DebugInfo': None, 'AdditionalInfo': None}, 'Correlation': {'operation': '9816ae760b843120b907ea5121aeb911', 'request': '4ac415478d4966af'}, 'Environment': 'eastus2', 'Location': 'eastus2', 'Time': '2022-09-02T14:33:48.3242911+00:00', 'ComponentName': 'mlflow', 'error_code': 'INVALID_PARAMETER_VALUE'} Doing this outside of Azure ML does not produce this error. I know there is a separate AzureML callback, but I believe this is for the older version of Azure ML and the newer version just uses mlflow. I could not get it to work using only the Azure ML callback. There are many values in TrainingArguments that are not super important and are typically never set, so the easy workaround is to limit the number of TrainingArguments that get logged. After going through all arguments, I identified 40 or so that are the most important for logging. The rest of the arguments can still be saved to the output directory by saving all arguments as a json file. If you want to see this error for yourself, run the following code inside Azure ML: ```python from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential import mlflow from transformers import AutoModel, TrainingArguments ml_client = MLClient.from_config(credential=DefaultAzureCredential()) azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri mlflow.set_tracking_uri(azureml_mlflow_uri) with mlflow.start_run(): targs = TrainingArguments("output") model = AutoModel.from_pretrained("albert-base-v2") mlflow.log_params(targs.to_dict()) mlflow.log_params(model.config.to_dict()) ``` Questions: 1. Should a change be made to the mlflow callback? 2. Alternatively, should Azure ML + mlflow users not use the mlflow callback and instead do their own custom logging? 3. Should an updated Azure ML callback be created? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18870/timeline
completed
null
null
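A minimal sketch of the workaround the author settles on (limit how many parameters get logged), assuming a plain truncation to the platform limit; a curated whitelist of the ~40 most important arguments, as suggested in the body, would be the better filter.

```python
import itertools

import mlflow
from transformers import TrainingArguments

AZURE_PARAM_LIMIT = 100  # Azure ML's per-run limit at the time of this issue

targs = TrainingArguments("output")
params = {k: str(v) for k, v in targs.to_dict().items()}

# Keep only the first AZURE_PARAM_LIMIT entries so the Azure ML backend
# doesn't reject the run with a "FieldName=Parameters" validation error.
trimmed = dict(itertools.islice(params.items(), AZURE_PARAM_LIMIT))

with mlflow.start_run():
    mlflow.log_params(trimmed)
```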
https://api.github.com/repos/huggingface/transformers/issues/18869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18869/comments
https://api.github.com/repos/huggingface/transformers/issues/18869/events
https://github.com/huggingface/transformers/pull/18869
1,360,277,146
PR_kwDOCUB6oc4-SU6h
18,869
Pin Slack SDK to 3.18.1 to avoid failing CI reports
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? Currently, CI Slack reports fail to be sent due to an error: ```bash The server responded with: {'ok': False, 'error': 'invalid_blocks_format'} ``` This happens since `slack-sdk-3.18.2`. This PR pins `slack-sdk` to 3.18.1 so we can receive the reports.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18869", "html_url": "https://github.com/huggingface/transformers/pull/18869", "diff_url": "https://github.com/huggingface/transformers/pull/18869.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18869.patch", "merged_at": 1662130149000 }
https://api.github.com/repos/huggingface/transformers/issues/18868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18868/comments
https://api.github.com/repos/huggingface/transformers/issues/18868/events
https://github.com/huggingface/transformers/pull/18868
1,360,217,468
PR_kwDOCUB6oc4-SIIE
18,868
Remove cached torch_extensions on CI runners
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? - The test ``` tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_scheduler_ds_optimizer ``` has been failing for 2 weeks due to a cache issue. The error message is ```bash E ImportError: /github/home/.cache/torch_extensions/py38_cu113/fused_adam/fused_adam.so: undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE ``` - After I remove the cache (on the host runners, not inside the running docker) by ```bash sudo rm -rf /home/github_actions/actions-runner/_work_temp/_github_home/.cache/torch_extensions/py38_cu113/ ``` the test passes. - This PR adds the following to the workflow file ```bash rm -rf /github/home/.cache/torch_extensions/ ``` to avoid the same problem occurring in the future. Remark: the host directory ``` /home/github_actions/actions-runner/_work_temp/_github_home/ ``` is mapped to ``` /github/home/ ``` inside the running docker (we can see this on the job run page).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18868/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18868", "html_url": "https://github.com/huggingface/transformers/pull/18868", "diff_url": "https://github.com/huggingface/transformers/pull/18868.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18868.patch", "merged_at": 1662135479000 }
https://api.github.com/repos/huggingface/transformers/issues/18867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18867/comments
https://api.github.com/repos/huggingface/transformers/issues/18867/events
https://github.com/huggingface/transformers/pull/18867
1,360,202,324
PR_kwDOCUB6oc4-SE7K
18,867
[OWL-ViT] Add model to the appropriate section
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? This PR moves OWL-ViT to the "multimodal" section in the docs, as the model isn't vision-only. cc @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18867/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18867/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18867", "html_url": "https://github.com/huggingface/transformers/pull/18867", "diff_url": "https://github.com/huggingface/transformers/pull/18867.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18867.patch", "merged_at": 1662127165000 }
https://api.github.com/repos/huggingface/transformers/issues/18866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18866/comments
https://api.github.com/repos/huggingface/transformers/issues/18866/events
https://github.com/huggingface/transformers/pull/18866
1,360,013,543
PR_kwDOCUB6oc4-RciD
18,866
Added AnyPrecisionAdamW as an optimizer
{ "login": "Zeesky-code", "id": 71593672, "node_id": "MDQ6VXNlcjcxNTkzNjcy", "avatar_url": "https://avatars.githubusercontent.com/u/71593672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zeesky-code", "html_url": "https://github.com/Zeesky-code", "followers_url": "https://api.github.com/users/Zeesky-code/followers", "following_url": "https://api.github.com/users/Zeesky-code/following{/other_user}", "gists_url": "https://api.github.com/users/Zeesky-code/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zeesky-code/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zeesky-code/subscriptions", "organizations_url": "https://api.github.com/users/Zeesky-code/orgs", "repos_url": "https://api.github.com/users/Zeesky-code/repos", "events_url": "https://api.github.com/users/Zeesky-code/events{/privacy}", "received_events_url": "https://api.github.com/users/Zeesky-code/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for trying, @Zeesky-code - but that won't do anything ;)\r\n\r\nI guess my instructions weren't very instructive I've just pointed to a few places where one would start looking at mimicking the integration of other optimizers, my apologies if it wasn't obvious.\r\n\r\nSo in this case it'd follow the path of `adamw_torch` , as it's the nearest similar optimizer.\r\n\r\nand as I said the key to the PR is tests and documentation. Again checking the existing tests and working from there is what's needed.\r\n\r\nIf it's too much and you're no longer interested please don't hesitate to comment in the feature request issue that it's open for grabs again. If you want to continue, that is great too - please don't hesitate to ask questions if any.\r\n\r\np.s. it might help to look at the previous PRs that added new optimizers, e.g. find the PR that added `adamw_bnb_8bit` - that could be a good model to copy from. And you can see the scope of work that needs to be done.\r\n", "\r\n\r\nOhh, I've taken a look at the PR that added adamw_bnb_8bit and I'm afraid I don't think I'll be able to work on this. \r\nI'll close this PR and let others know the issue is still available to work on.\r\n\r\nThank you😅\r\n\r\n> Thank you for trying, @Zeesky-code - but that won't do anything ;)\r\n> \r\n> I guess my instructions weren't very instructive I've just pointed to a few places where one would start looking at mimicking the integration of other optimizers, my apologies if it wasn't obvious.\r\n> \r\n> So in this case it'd follow the path of `adamw_torch` , as it's the nearest similar optimizer.\r\n> \r\n> and as I said the key to the PR is tests and documentation. Again checking the existing tests and working from there is what's needed.\r\n> \r\n> If it's too much and you're no longer interested please don't hesitate to comment in the feature request issue that it's open for grabs again. If you want to continue, that is great too - please don't hesitate to ask questions if any.\r\n> \r\n> p.s. it might help to look at the previous PRs that added new optimizers, e.g. find the PR that added `adamw_bnb_8bit` - that could be a good model to copy from. And you can see the scope of work that needs to be done.\r\n\r\n", "Thank you for an honest evaluation, @Zeesky-code - and much appreciated for trying!" ]
1,662
1,662
1,662
NONE
null
Added AnyPrecisionAdamW as an optimizer - Related Issue #18827 @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18866/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18866", "html_url": "https://github.com/huggingface/transformers/pull/18866", "diff_url": "https://github.com/huggingface/transformers/pull/18866.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18866.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18865/comments
https://api.github.com/repos/huggingface/transformers/issues/18865/events
https://github.com/huggingface/transformers/pull/18865
1,359,977,184
PR_kwDOCUB6oc4-RUv8
18,865
A script to download artifacts and perform CI error statistics
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? This script is helpful for the past CI project. It downloads all artifacts from a workflow run and gets the error statistics plus the corresponding failing tests. `errors.json`: the places where errors occur and what those errors are. `failed_tests.json`: which test methods failed (can be used to determine which models will be supported in a specific backend version). ** We might adjust this script a bit once we start to perform the automation of the past CI project. ** Currently, it prints something like the following (but we save the full information in 2 JSON files) ```bash ('RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.', 66) ("AttributeError: module 'torch.jit' has no attribute '_state'", 51) ("AttributeError: module 'torch' has no attribute 'minimum'", 45) ("AttributeError: module 'torch' has no attribute 'multiply'", 25) ('AttributeError: Caught AttributeError in replica 0 on device 0.', 3) ('RuntimeError: "normal_kernel_cpu" not implemented for \'BFloat16\'', 3) ("AssertionError: Couldn't trace module.", 3) ("AttributeError: module 'torch' has no attribute 'isneginf'", 2) ("TypeError: where(): argument 'input' (position 2) must be Tensor, not int", 2) ("AttributeError: 'Tensor' object has no attribute 'nansum'", 1) ('RuntimeError: Caught RuntimeError in replica 0 on device 0.', 1) ```
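The counting logic the PR describes can be pictured with a small, self-contained sketch. This is not the actual script: the `error_lines` list and the output file name are stand-ins for what the real code extracts from the downloaded artifacts.

```python
import json
from collections import Counter

# error_lines stands in for the error messages parsed out of the downloaded
# artifacts; the real script extracts them from the workflow run's job logs.
error_lines = [
    "AttributeError: module 'torch' has no attribute 'minimum'",
    "AttributeError: module 'torch' has no attribute 'minimum'",
    "AttributeError: module 'torch.jit' has no attribute '_state'",
]

counts = Counter(error_lines)
for error, n in counts.most_common():  # most frequent error first
    print((error, n))

with open("errors.json", "w") as f:  # persist the full statistics
    json.dump(counts.most_common(), f, indent=4)
```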
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18865/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18865", "html_url": "https://github.com/huggingface/transformers/pull/18865", "diff_url": "https://github.com/huggingface/transformers/pull/18865.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18865.patch", "merged_at": 1662134366000 }
https://api.github.com/repos/huggingface/transformers/issues/18864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18864/comments
https://api.github.com/repos/huggingface/transformers/issues/18864/events
https://github.com/huggingface/transformers/pull/18864
1,359,858,778
PR_kwDOCUB6oc4-Q7fO
18,864
Fix naming issue with Image2TextGenerationPipeline
{ "login": "OlivierDehaene", "id": 23298448, "node_id": "MDQ6VXNlcjIzMjk4NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/23298448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OlivierDehaene", "html_url": "https://github.com/OlivierDehaene", "followers_url": "https://api.github.com/users/OlivierDehaene/followers", "following_url": "https://api.github.com/users/OlivierDehaene/following{/other_user}", "gists_url": "https://api.github.com/users/OlivierDehaene/gists{/gist_id}", "starred_url": "https://api.github.com/users/OlivierDehaene/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OlivierDehaene/subscriptions", "organizations_url": "https://api.github.com/users/OlivierDehaene/orgs", "repos_url": "https://api.github.com/users/OlivierDehaene/repos", "events_url": "https://api.github.com/users/OlivierDehaene/events{/privacy}", "received_events_url": "https://api.github.com/users/OlivierDehaene/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
MEMBER
null
# What does this PR do? Fixes naming issue with the Image2TextGenerationPipeline: naming was not consistent with other libraries. See [this message](https://huggingface.slack.com/archives/C014N4749J9/p1662091141349169?thread_ts=1662048034.910319&cid=C014N4749J9). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @Narsil, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18864/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18864", "html_url": "https://github.com/huggingface/transformers/pull/18864", "diff_url": "https://github.com/huggingface/transformers/pull/18864.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18864.patch", "merged_at": 1662119730000 }
https://api.github.com/repos/huggingface/transformers/issues/18863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18863/comments
https://api.github.com/repos/huggingface/transformers/issues/18863/events
https://github.com/huggingface/transformers/pull/18863
1,359,689,178
PR_kwDOCUB6oc4-QXub
18,863
alter retrived to retrieved
{ "login": "gouqi666", "id": 52353600, "node_id": "MDQ6VXNlcjUyMzUzNjAw", "avatar_url": "https://avatars.githubusercontent.com/u/52353600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gouqi666", "html_url": "https://github.com/gouqi666", "followers_url": "https://api.github.com/users/gouqi666/followers", "following_url": "https://api.github.com/users/gouqi666/following{/other_user}", "gists_url": "https://api.github.com/users/gouqi666/gists{/gist_id}", "starred_url": "https://api.github.com/users/gouqi666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gouqi666/subscriptions", "organizations_url": "https://api.github.com/users/gouqi666/orgs", "repos_url": "https://api.github.com/users/gouqi666/repos", "events_url": "https://api.github.com/users/gouqi666/events{/privacy}", "received_events_url": "https://api.github.com/users/gouqi666/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,662
1,664
1,664
CONTRIBUTOR
null
# What does this PR do? alter 'retrived' to 'retrieved' <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18863/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18863", "html_url": "https://github.com/huggingface/transformers/pull/18863", "diff_url": "https://github.com/huggingface/transformers/pull/18863.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18863.patch", "merged_at": 1664892048000 }
https://api.github.com/repos/huggingface/transformers/issues/18862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18862/comments
https://api.github.com/repos/huggingface/transformers/issues/18862/events
https://github.com/huggingface/transformers/issues/18862
1,359,597,508
I_kwDOCUB6oc5RCc_E
18,862
How can I control data feeding order to model using Huggingface Trainer?
{ "login": "SangwonPark0211", "id": 59641312, "node_id": "MDQ6VXNlcjU5NjQxMzEy", "avatar_url": "https://avatars.githubusercontent.com/u/59641312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangwonPark0211", "html_url": "https://github.com/SangwonPark0211", "followers_url": "https://api.github.com/users/SangwonPark0211/followers", "following_url": "https://api.github.com/users/SangwonPark0211/following{/other_user}", "gists_url": "https://api.github.com/users/SangwonPark0211/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangwonPark0211/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangwonPark0211/subscriptions", "organizations_url": "https://api.github.com/users/SangwonPark0211/orgs", "repos_url": "https://api.github.com/users/SangwonPark0211/repos", "events_url": "https://api.github.com/users/SangwonPark0211/events{/privacy}", "received_events_url": "https://api.github.com/users/SangwonPark0211/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can subclass the Seq2SeqTrainer and override the [_get_train_sampler](https://github.com/huggingface/transformers/blob/8d59385f124dd1b330cac7eaa7162799870793ec/src/transformers/trainer.py#L759) method. Instead of creating a RandomSampler object, create a [SequentialSampler](https://pytorch.org/docs/stable/data.html#torch.utils.data.SequentialSampler).\r\n\r\n```python3\r\nfrom transformers.trainer_seq2seq import Seq2SeqTrainer\r\nfrom torch.utils.data import SequentialSampler\r\n\r\nclass SequentialSeq2SeqTrainer(Seq2SeqTrainer):\r\n def _get_train_sampler(self) -> SequentialSampler:\r\n return SequentialSampler(self.train_dataset)\r\n```", "Thank you!! I'll try as you mentioned.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,662
1,667
1,667
NONE
null
### Feature request I want to train the model in the order in which the data are stored. For example, if there are 100 examples, then I want to feed the 1st and 2nd examples together (because I set batch_size=2 in the code), then the 3rd and 4th, then the 5th and 6th, and so on. But the Hugging Face Trainer trains the model using a data collator and feeds data to the model randomly, governed by the parameter data_seed. **How can I train the model while feeding data in the order in which the data are stored?** ``` # load tokenizer model_checkpoint = "facebook/bart-base" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) # load model model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) # make batch data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) batch_size = 2 epochs = 3 args = Seq2SeqTrainingArguments( output_dir = "saved_model", overwrite_output_dir = True, evaluation_strategy = "epoch", save_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, gradient_accumulation_steps=2, weight_decay=0.01, num_train_epochs=epochs, predict_with_generate=True, fp16=False, dataloader_num_workers=8, ) trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) ``` ### Motivation I want to control the data feeding order to the model. ### Your contribution I want to control the data feeding order to the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18862/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18861/comments
https://api.github.com/repos/huggingface/transformers/issues/18861/events
https://github.com/huggingface/transformers/pull/18861
1,359,576,457
PR_kwDOCUB6oc4-QAUp
18,861
Convert logged learning rate from tensor to float via `.item()` so it can be JSON serialized.
{ "login": "kmckiern", "id": 4978902, "node_id": "MDQ6VXNlcjQ5Nzg5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4978902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kmckiern", "html_url": "https://github.com/kmckiern", "followers_url": "https://api.github.com/users/kmckiern/followers", "following_url": "https://api.github.com/users/kmckiern/following{/other_user}", "gists_url": "https://api.github.com/users/kmckiern/gists{/gist_id}", "starred_url": "https://api.github.com/users/kmckiern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kmckiern/subscriptions", "organizations_url": "https://api.github.com/users/kmckiern/orgs", "repos_url": "https://api.github.com/users/kmckiern/repos", "events_url": "https://api.github.com/users/kmckiern/events{/privacy}", "received_events_url": "https://api.github.com/users/kmckiern/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
CONTRIBUTOR
null
Fixes #18860 @sgugger
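A minimal sketch of the idea behind the fix; the real change lives in the Trainer's logging code, and the variable names here are illustrative:

```python
import json
import torch

lr = torch.tensor(1e-4)  # AdafactorSchedule can hand back a tensor, not a float
logs = {"learning_rate": lr.item() if torch.is_tensor(lr) else lr}

print(json.dumps(logs))  # serializes cleanly now that the value is a plain float
```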
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18861", "html_url": "https://github.com/huggingface/transformers/pull/18861", "diff_url": "https://github.com/huggingface/transformers/pull/18861.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18861.patch", "merged_at": 1662119191000 }
https://api.github.com/repos/huggingface/transformers/issues/18860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18860/comments
https://api.github.com/repos/huggingface/transformers/issues/18860/events
https://github.com/huggingface/transformers/issues/18860
1,359,561,644
I_kwDOCUB6oc5RCUOs
18,860
Learning rate is given as tensor, cannot serialize TrainerState in order to save checkpoint.
{ "login": "kmckiern", "id": 4978902, "node_id": "MDQ6VXNlcjQ5Nzg5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4978902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kmckiern", "html_url": "https://github.com/kmckiern", "followers_url": "https://api.github.com/users/kmckiern/followers", "following_url": "https://api.github.com/users/kmckiern/following{/other_user}", "gists_url": "https://api.github.com/users/kmckiern/gists{/gist_id}", "starred_url": "https://api.github.com/users/kmckiern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kmckiern/subscriptions", "organizations_url": "https://api.github.com/users/kmckiern/orgs", "repos_url": "https://api.github.com/users/kmckiern/repos", "events_url": "https://api.github.com/users/kmckiern/events{/privacy}", "received_events_url": "https://api.github.com/users/kmckiern/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[]
1,662
1,662
1,662
CONTRIBUTOR
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35 - Python version: 3.10.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1.post200 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. use Adafactor optimizer and schedule ``` optimizer = Adafactor( model.parameters(), relative_step=True, warmup_init=True, ) lr_scheduler = AdafactorSchedule(optimizer) ``` 2. save a checkpoint ### Expected behavior When using an `AdafactorSchedule` I can't use the `Trainer` class to save a checkpoint. It breaks [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L97) since the learning rate attached to the `TrainerState` is given by a tensor and tensors are not JSON serializable. I dropped a breakpoint at this line and took a look at my `TrainerState`: ``` In [4]: self.log_history[0]['learning_rate'] Out[4]: tensor(0.0001, device='cuda:0') ``` Expected behavior is that the learning rate attached to the log history would be given by a float and would therefore be JSON serializable.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18859/comments
https://api.github.com/repos/huggingface/transformers/issues/18859/events
https://github.com/huggingface/transformers/pull/18859
1,359,557,249
PR_kwDOCUB6oc4-P8Tb
18,859
[modeling_utils] postpone bnb loading until and if it's needed
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "actually the problem was much more severe - before this PR on a machine with no gpu, it lead to this huge crash:\r\n\r\n```\r\npython -c \"from transformers import AutoModel, AutoTokenizer, AutoConfig; AutoModel.from_pretrained('gpt2'), AutoTokenizer.from_pretrained('gpt2'), AutoConfig.from_pretrained('gpt2');\"\r\n\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████| 665/665 [00:00<00:00, 550kB/s]\r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\nFor effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link\r\n================================================================================\r\nCUDA SETUP: CUDA runtime path found: /gpfswork/rech/six/commun/conda/inference/lib/libcudart.so\r\nCUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!\r\nTraceback (most recent call last):\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/import_utils.py\", line 1031, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 843, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/gpt2/modeling_gpt2.py\", line 49, in <module>\r\n from ...modeling_utils import PreTrainedModel, SequenceSummary\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/modeling_utils.py\", line 88, in <module>\r\n from .utils.bitsandbytes import get_key_to_not_convert, replace_8bit_linear, set_module_8bit_tensor_to_device\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/bitsandbytes.py\", line 10, in <module>\r\n import bitsandbytes as bnb\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n from .autograd._functions import (\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py\", line 4, in <module>\r\n import bitsandbytes.functional as F\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/functional.py\", line 14, in <module>\r\n from .cextension import COMPILED_WITH_CUDA, lib\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cextension.py\", line 41, in <module>\r\n lib = CUDALibrary_Singleton.get_instance().lib\r\n File 
\"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cextension.py\", line 37, in get_instance\r\n cls._instance.initialize()\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cextension.py\", line 15, in initialize\r\n binary_name = evaluate_cuda_setup()\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py\", line 136, in evaluate_cuda_setup\r\n cc = get_compute_capability(cuda)\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py\", line 109, in get_compute_capability\r\n ccs = get_compute_capabilities(cuda)\r\n File \"/gpfswork/rech/six/commun/conda/inference/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py\", line 87, in get_compute_capabilities\r\n check_cuda_result(cuda, cuda.cuDeviceGetCount(ctypes.byref(nGpus)))\r\nAttributeError: 'NoneType' object has no attribute 'cuDeviceGetCount'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py\", line 462, in from_pretrained\r\n model_class = _get_model_class(config, cls._model_mapping)\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py\", line 359, in _get_model_class\r\n supported_models = model_mapping[type(config)]\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py\", line 583, in __getitem__\r\n return self._load_attr_from_module(model_type, model_name)\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py\", line 597, in _load_attr_from_module\r\n return getattribute_from_module(self._modules[module_name], attr)\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/models/auto/auto_factory.py\", line 553, in getattribute_from_module\r\n if hasattr(module, attr):\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/import_utils.py\", line 1021, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/gpfsssd/worksf/projects/rech/six/commun/code/inference/transformers/src/transformers/utils/import_utils.py\", line 1033, in _get_module\r\n raise RuntimeError(\r\nRuntimeError: Failed to import transformers.models.gpt2.modeling_gpt2 because of the following error (look up to see its traceback):\r\n'NoneType' object has no attribute 'cuDeviceGetCount'\r\n```\r\n\r\nbasically rendering transformers completely broken if bnb was installed and the machine had no visible gpu.\r\n\r\nafter updating the clone post this PR merge all is back to normal.", "@younesbelkada, I think this functionality of `load_in_8bit=True` requires checking that there is at least one gpu and cleanly assert if there isn't any. i.e this feature can be used only with gpu_count > 0.", "Hi @stas00 ,\r\n\r\nThanks a lot for adding this! I agree with all the points stated on the PR.\r\nAgreed also on your final suggestion, I will add a small PR to cleanly check if a GPU has been correctly detected by Pytorch" ]
1,662
1,662
1,662
CONTRIBUTOR
null
BNB shouldn't be loaded unless it's actually used - definitely not by the used-everywhere `modeling_utils.py`: The following shouldn't (1) generate all this noise and (2) use up memory and resources w/o an actual need: ``` $ python -c "from transformers import BloomModel" ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link ================================================================================ CUDA SETUP: CUDA runtime path found: /home/stas/anaconda3/envs/py38-pt112/lib/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 6.1 CUDA SETUP: Detected CUDA version 116 CUDA SETUP: Loading binary /home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda116_nocublaslt.so... ``` Specifically, currently only using `from_pretrained(..., load_in_8bit=True)` should load it. My proposal is probably not the best, but it solves this problem. Probably a cleaner solution is to rewrite `src/transformers/utils/bitsandbytes.py` to delay loading its libraries until and if it is used - not sure. Totally open to other suggestions. @sgugger, @younesbelkada
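One possible shape of the delayed-import idea, sketched as a hypothetical helper: `_lazy_bnb_utils` is not the actual transformers code, and the import path reflects the layout at the time of this PR. The imported names are those visible in the traceback above.

```python
def _lazy_bnb_utils():
    # Importing transformers.utils.bitsandbytes pulls in bitsandbytes itself,
    # which probes CUDA at import time -- so only do it once 8-bit loading
    # has actually been requested.
    from transformers.utils.bitsandbytes import (
        replace_8bit_linear,
        set_module_8bit_tensor_to_device,
    )
    return replace_8bit_linear, set_module_8bit_tensor_to_device
```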
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18859", "html_url": "https://github.com/huggingface/transformers/pull/18859", "diff_url": "https://github.com/huggingface/transformers/pull/18859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18859.patch", "merged_at": 1662132167000 }
https://api.github.com/repos/huggingface/transformers/issues/18858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18858/comments
https://api.github.com/repos/huggingface/transformers/issues/18858/events
https://github.com/huggingface/transformers/pull/18858
1,359,188,776
PR_kwDOCUB6oc4-OrsM
18,858
V4.3.0
{ "login": "narandharanggi", "id": 24572673, "node_id": "MDQ6VXNlcjI0NTcyNjcz", "avatar_url": "https://avatars.githubusercontent.com/u/24572673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/narandharanggi", "html_url": "https://github.com/narandharanggi", "followers_url": "https://api.github.com/users/narandharanggi/followers", "following_url": "https://api.github.com/users/narandharanggi/following{/other_user}", "gists_url": "https://api.github.com/users/narandharanggi/gists{/gist_id}", "starred_url": "https://api.github.com/users/narandharanggi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/narandharanggi/subscriptions", "organizations_url": "https://api.github.com/users/narandharanggi/orgs", "repos_url": "https://api.github.com/users/narandharanggi/repos", "events_url": "https://api.github.com/users/narandharanggi/events{/privacy}", "received_events_url": "https://api.github.com/users/narandharanggi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,662
1,662
1,662
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18858/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18858", "html_url": "https://github.com/huggingface/transformers/pull/18858", "diff_url": "https://github.com/huggingface/transformers/pull/18858.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18858.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18857/comments
https://api.github.com/repos/huggingface/transformers/issues/18857/events
https://github.com/huggingface/transformers/pull/18857
1,359,141,389
PR_kwDOCUB6oc4-OhXn
18,857
Clean up utils.hub using the latest from hf_hub
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? This PR uses the newly released version of `huggingface_hub` to clean up a few things introduced in #18438 (points 1 and 2 in the description of this PR). For point 3 (load from cache) there is currently a difference between Transformers' `try_to_load_from_cache` and the huggingface_hub's one, so this will require a follow-up PR in `huggingface_hub` (and then waiting for the next release).
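For reference, a sketch of querying the cache through `huggingface_hub` directly. The argument order is an assumption; as the PR notes, the two helpers differed at the time, so check the signature shipped in your installed version:

```python
from huggingface_hub import try_to_load_from_cache

# Ask whether gpt2's config.json is already in the local cache (assumed
# (repo_id, filename) argument order; no network call is made).
cached = try_to_load_from_cache("gpt2", "config.json")
if isinstance(cached, str):
    print(f"config.json is already cached at: {cached}")
else:
    print("config.json is not in the local cache")
```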
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18857/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18857", "html_url": "https://github.com/huggingface/transformers/pull/18857", "diff_url": "https://github.com/huggingface/transformers/pull/18857.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18857.patch", "merged_at": 1662129006000 }
https://api.github.com/repos/huggingface/transformers/issues/18856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18856/comments
https://api.github.com/repos/huggingface/transformers/issues/18856/events
https://github.com/huggingface/transformers/pull/18856
1,359,014,984
PR_kwDOCUB6oc4-OGCV
18,856
Fix number of examples for iterable datasets in multiprocessing
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? As pointed out in #18608, `IterableDatasetShard.num_examples` is not always updated in multiprocessing environments. This PR addresses that by ignoring the value in those cases. It also adds a stronger check of trusting the observed number of examples when, for whatever reason, the length is 0. Fixes #18608
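A toy illustration of the "trust the observed count" fallback, independent of `IterableDatasetShard` itself:

```python
from torch.utils.data import DataLoader, IterableDataset

class Stream(IterableDataset):
    def __iter__(self):  # a stream whose length is unknown up front
        return iter(range(10))

observed = 0
for batch in DataLoader(Stream(), batch_size=4):
    observed += batch.shape[0]

print(observed)  # 10 -- with no trustworthy advertised length, use what was seen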
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18856/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18856", "html_url": "https://github.com/huggingface/transformers/pull/18856", "diff_url": "https://github.com/huggingface/transformers/pull/18856.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18856.patch", "merged_at": 1662130179000 }
https://api.github.com/repos/huggingface/transformers/issues/18855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18855/comments
https://api.github.com/repos/huggingface/transformers/issues/18855/events
https://github.com/huggingface/transformers/pull/18855
1,359,000,563
PR_kwDOCUB6oc4-OC00
18,855
Tie weights after preparing the model in run_clm
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
COLLABORATOR
null
# What does this PR do? #18676 fixed the weights tying in `run_mlm_no_trainer` but not in `run_clm_no_trainer`. This PR fixes that.
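A minimal sketch of the ordering the fix enforces, assuming a single-process run (in distributed runs the prepared model may be wrapped and need unwrapping first):

```python
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator()
model = AutoModelForCausalLM.from_pretrained("gpt2")

model = accelerator.prepare(model)
# prepare() may move or wrap the model, which can break the tie between the
# input and output embeddings of a causal LM -- so re-tie only afterwards.
model.tie_weights()
```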
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18855", "html_url": "https://github.com/huggingface/transformers/pull/18855", "diff_url": "https://github.com/huggingface/transformers/pull/18855.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18855.patch", "merged_at": 1662048416000 }
https://api.github.com/repos/huggingface/transformers/issues/18854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18854/comments
https://api.github.com/repos/huggingface/transformers/issues/18854/events
https://github.com/huggingface/transformers/pull/18854
1,358,947,161
PR_kwDOCUB6oc4-N3Q-
18,854
Pin revision for LayoutLMForQuestionAnswering and TFLayoutLMForQuestionAnswering tests
{ "login": "ankrgyl", "id": 565363, "node_id": "MDQ6VXNlcjU2NTM2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankrgyl", "html_url": "https://github.com/ankrgyl", "followers_url": "https://api.github.com/users/ankrgyl/followers", "following_url": "https://api.github.com/users/ankrgyl/following{/other_user}", "gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions", "organizations_url": "https://api.github.com/users/ankrgyl/orgs", "repos_url": "https://api.github.com/users/ankrgyl/repos", "events_url": "https://api.github.com/users/ankrgyl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankrgyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? The newly introduced tests for `LayoutLMForQuestionAnswering` and `TFLayoutLMForQuestionAnswering` broke due to a change to the weights in https://huggingface.co/impira/layoutlm-document-qa. To make sure a weights change does not break tests, I've pinned the revisions in these tests. I'll separately investigate the weights and debug if something is broken with them. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. It was raised here: https://github.com/huggingface/transformers/pull/18407. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge @ydshieh <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
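A hedged sketch of what pinning looks like; `"<commit-sha>"` is a placeholder for the specific Hub revision the tests were pinned to:

```python
from transformers import LayoutLMForQuestionAnswering

model = LayoutLMForQuestionAnswering.from_pretrained(
    "impira/layoutlm-document-qa",
    revision="<commit-sha>",  # placeholder: substitute the exact Hub commit to pin
)
```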
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18854", "html_url": "https://github.com/huggingface/transformers/pull/18854", "diff_url": "https://github.com/huggingface/transformers/pull/18854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18854.patch", "merged_at": 1662051153000 }
https://api.github.com/repos/huggingface/transformers/issues/18853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18853/comments
https://api.github.com/repos/huggingface/transformers/issues/18853/events
https://github.com/huggingface/transformers/issues/18853
1,358,890,551
I_kwDOCUB6oc5Q_wY3
18,853
loading CLIPVisionModel from openai/clip-vit-base-patch32
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I don't see the issue here, it is just telling you that the weights of the text encoder aren't used. Which makes total sense as you are loading only the vision encoder, whose weights are all loaded properly.\r\n\r\nIt's a warning, not an error, so I'm not sure whether this can be improved.", "Oh sorry, my bad, the message warning was so huge I thought it also discarded the weights of the vision model." ]
1,662
1,662
1,662
CONTRIBUTOR
null
### System Info - `transformers` version: 4.12.5 - Platform: Linux-4.18.0-305.57.1.el8_4.x86_64-x86_64-with-redhat-8.4-Ootpa - Python version: 3.7.11 - PyTorch version (GPU?): 1.10.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help? @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction CLIPVisionModel keys do not match the state dict available in `"openai/clip-vit-base-patch32"`. Perhaps it can be fixed but you should at least change the doc which provides the sample code below: https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModel ```py >>> from transformers import CLIPVisionModel >>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") You are using a model of type clip to instantiate a model of type clip_vision_model. This is not supported for all configurations of models and can yield errors. Some weights of the model checkpoint at openai/clip-vit-base-patch32 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.1.layer_norm1.weight', …, 'text_model.encoder.layers.9.self_attn.q_proj.bias'] - This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` # Circumventing it I was able to circumvent this problem easily: ```py >>> from transformers import CLIPVisionModel, CLIPModel >>> cv = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") # ignore warning and load the full CLIP >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") # load one state dict into the other and save it >>> cv.vision_model.load_state_dict(model.vision_model.state_dict()) >>> cv.save_pretrained('/path/of/your/choice') ``` ### Expected behavior ```py >>> from transformers import CLIPVisionModel >>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") # all clear ```
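Consistent with the resolution above, a quick check (sketched against recent transformers versions) confirms the vision tower is loaded intact and only the text weights are skipped:

```python
import torch
from transformers import CLIPModel, CLIPVisionModel

full = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
vision_only = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

# Every vision-tower parameter matches the full checkpoint; the warning only
# concerns the skipped text-tower weights.
for (name, p1), (_, p2) in zip(
    vision_only.vision_model.named_parameters(),
    full.vision_model.named_parameters(),
):
    assert torch.equal(p1, p2), name
```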
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18853/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18852/comments
https://api.github.com/repos/huggingface/transformers/issues/18852/events
https://github.com/huggingface/transformers/pull/18852
1,358,753,716
PR_kwDOCUB6oc4-NNC5
18,852
Add X-CLIP
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Many tests fail due to the following error:\r\n\r\n> ModuleNotFoundError: No module named 'transformers.models.xclip'\r\n\r\nThis is probably because I first called the model folder \"xclip\", which is now called \"x_clip\". Still, wondering why it keeps looking for the module models.clip. If anyone has any pointers, that would be greatly appreciated.", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger thanks a lot, that solved the issue. There seems to be another (small) issue with run_tests_hub:\r\n```\r\n==================================== ERRORS ====================================\r\n_______________ ERROR collecting tests/utils/test_file_utils.py ________________\r\ntests/utils/test_file_utils.py:26: in <module>\r\n from transformers import * # noqa F406\r\nsrc/transformers/utils/import_utils.py:1021: in __getattr__\r\n value = getattr(module, name)\r\nsrc/transformers/utils/import_utils.py:1023: in __getattr__\r\n raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\nE AttributeError: module transformers.models.clip has no attribute CLIPProcessor\r\n```\r\nRunning `RUN_SLOW=yes pytest tests/utils/test_file_utils.py` passes locally for me.", "That would be because you moved `CLIPProcessor` in the non-vision dependent objects in the main init (and rightly so) but did not do the same for the `models/clip/__init__.py`.", "@sgugger and @alaradirik - the PR is ready for merge. Kindly asking for your approval :)" ]
1,662
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? This PR adds [X-CLIP](https://github.com/microsoft/VideoX/tree/master/X-CLIP), which is a minimal extension of CLIP for video-language pre-training. To do: - [x] upload all checkpoints to the hub, as part of the `microsoft` organization
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18852", "html_url": "https://github.com/huggingface/transformers/pull/18852", "diff_url": "https://github.com/huggingface/transformers/pull/18852.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18852.patch", "merged_at": 1662641431000 }
https://api.github.com/repos/huggingface/transformers/issues/18851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18851/comments
https://api.github.com/repos/huggingface/transformers/issues/18851/events
https://github.com/huggingface/transformers/pull/18851
1,358,672,184
PR_kwDOCUB6oc4-M7Qa
18,851
Generate: get the correct beam index on eos token
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
MEMBER
null
# What does this PR do? Fixes #18839 We were not storing the correct beam index when an `eos_token` was generated (except for the first batch member), resulting in the issue linked above. ________________________________ Confirming the change -- let's consider the following script, which gets the scores from `output.sequences_scores` and from `model.compute_transition_beam_scores`. Since there is no length penalty, the sum of the transition scores divided by the sequence length should match `output.sequences_scores` -- with the current codebase, it was not true except for the first batch. ```python from transformers import BartTokenizer, BartForConditionalGeneration model_id = "facebook/bart-base" tokenizer = BartTokenizer.from_pretrained(model_id) model = BartForConditionalGeneration.from_pretrained(model_id) input_tokens = ["what do you think it ? huggingface is a great library. And I enjoy it very much", "transformers is so good"] batch_size = 2 num_beams = 10 max_length = 10 num_return_sequences = 5 input_ids = tokenizer(input_tokens, return_tensors='pt', padding=True).input_ids output = model.generate( input_ids, max_length=max_length, num_beams=num_beams, num_return_sequences=num_return_sequences, return_dict_in_generate=True, output_scores=True ) print("\nbeam indices:\n", output.beam_indices) beam_lengths = (output.beam_indices != -1).sum(dim=1) beam_scores = model.compute_transition_beam_scores( output.sequences, output.scores, output.beam_indices, tokenizer.eos_token_id ) print("\nsequence scores (from outputs):\n", output.sequences_scores) print("\nsequence scores (from compute_transition_beam_scores):\n", beam_scores.sum(dim=1) / beam_lengths) ``` 🚫 output before this PR: ``` beam indices: tensor([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 1, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 2, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 3, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 4, -1], [10, 10, 10, 10, 10, 10, 0, -1, -1, -1], [10, 10, 10, 10, 10, 10, 10, 0, -1, -1], [10, 10, 11, 11, 11, 11, 1, -1, -1, -1], [10, 10, 10, 10, 10, 10, 10, 1, -1, -1], [10, 10, 12, 12, 12, 12, 2, -1, -1, -1]]) sequence scores (from outputs): tensor([-2.4142e-02, -5.1596e-01, -5.2848e-01, -6.2190e-01, -6.2194e-01, -4.1643e-04, -1.0500e+00, -1.1113e+00, -1.1323e+00, -1.1955e+00]) sequence scores (from compute_transition_beam_scores): tensor([-0.0241, -0.5160, -0.5285, -0.6219, -0.6219, -2.4050, -2.5656, -3.4137, -2.3775, -3.5453]) ``` ✅ output after this PR: ``` beam indices: tensor([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 1, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 2, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 3, -1], [ 0, 0, 0, 0, 0, 0, 0, 0, 4, -1], [10, 10, 10, 10, 10, 10, 10, -1, -1, -1], [10, 10, 10, 10, 10, 10, 10, 10, -1, -1], [10, 10, 11, 11, 11, 11, 11, -1, -1, -1], [10, 10, 10, 10, 10, 10, 10, 11, -1, -1], [10, 10, 12, 12, 12, 12, 12, -1, -1, -1]]) sequence scores (from outputs): tensor([-2.4142e-02, -5.1596e-01, -5.2848e-01, -6.2190e-01, -6.2194e-01, -4.1643e-04, -1.0500e+00, -1.1113e+00, -1.1323e+00, -1.1955e+00]) sequence scores (from compute_transition_beam_scores): tensor([-2.4142e-02, -5.1596e-01, -5.2848e-01, -6.2190e-01, -6.2194e-01, -4.1643e-04, -1.0500e+00, -1.1113e+00, -1.1323e+00, -1.1955e+00]) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18851", "html_url": "https://github.com/huggingface/transformers/pull/18851", "diff_url": "https://github.com/huggingface/transformers/pull/18851.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18851.patch", "merged_at": 1662402948000 }
https://api.github.com/repos/huggingface/transformers/issues/18850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18850/comments
https://api.github.com/repos/huggingface/transformers/issues/18850/events
https://github.com/huggingface/transformers/pull/18850
1,358,579,308
PR_kwDOCUB6oc4-MnHw
18,850
[ViTMAE] Renamed variable name
{ "login": "ariG23498", "id": 36856589, "node_id": "MDQ6VXNlcjM2ODU2NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ariG23498", "html_url": "https://github.com/ariG23498", "followers_url": "https://api.github.com/users/ariG23498/followers", "following_url": "https://api.github.com/users/ariG23498/following{/other_user}", "gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}", "starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions", "organizations_url": "https://api.github.com/users/ariG23498/orgs", "repos_url": "https://api.github.com/users/ariG23498/repos", "events_url": "https://api.github.com/users/ariG23498/events{/privacy}", "received_events_url": "https://api.github.com/users/ariG23498/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Tagging @NielsRogge, as he's the vision master 👨‍🏫 ", "Just a reminder here!", "(Niels it currently off, he'll be back in a few days :) )", "FYI, I took this from the original repository: https://github.com/facebookresearch/mae/blob/efb2a8062c206524e35e47d04501ed4f544c0ae8/models_mae.py#L140", "Do you think an issue in the original repository would be a good approach to move forward? @NielsRogge ", "Yes indeed, to make the authors confirm!", "Thanks for leading the effort. Good work! \r\n\r\n@sgugger and @NielsRogge thanks for the reviews and actions. " ]
1,662
1,665
1,665
CONTRIBUTOR
null
The `sequence_masked` variable is actually the part of the sequence that is kept **unmasked** for the encoder to consume. This commit renames the variable accordingly. CC: @sayakpaul
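For context, a minimal sketch of the MAE-style random masking the rename refers to, loosely following the referenced facebookresearch/mae `random_masking` logic (tensor sizes here are made up for illustration). The gathered slice is the part of the sequence that is kept *visible* for the encoder, hence the rename:

```python
import torch

x = torch.randn(2, 16, 8)                   # (batch, num_patches, dim), dummy values
len_keep = 4                                # keep 25% of the patches
noise = torch.rand(x.shape[0], x.shape[1])  # one random score per patch
ids_shuffle = torch.argsort(noise, dim=1)   # patches with the lowest scores are kept
ids_keep = ids_shuffle[:, :len_keep]
sequence_unmasked = torch.gather(           # visible patches fed to the encoder
    x, dim=1, index=ids_keep.unsqueeze(-1).repeat(1, 1, x.shape[-1])
)
print(sequence_unmasked.shape)              # torch.Size([2, 4, 8])
```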
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18850/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18850", "html_url": "https://github.com/huggingface/transformers/pull/18850", "diff_url": "https://github.com/huggingface/transformers/pull/18850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18850.patch", "merged_at": 1665408397000 }
https://api.github.com/repos/huggingface/transformers/issues/18849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18849/comments
https://api.github.com/repos/huggingface/transformers/issues/18849/events
https://github.com/huggingface/transformers/issues/18849
1,358,533,502
I_kwDOCUB6oc5Q-ZN-
18,849
BartForSequenceClassification: Use eos_token or cls_token?
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,662
1,665
1,665
CONTRIBUTOR
null
### System Info NA ### Who can help? @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The [BartTokenizer doc](https://huggingface.co/docs/transformers/v4.21.1/en/model_doc/bart#transformers.BartTokenizer) mentions that `cls_token` is attached to the beginning of the input sentence and is used as the token for sequence classification purposes. However, in the HF code it is picking the last eos_token: https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bart/modeling_bart.py#L1514-L1521 ### Expected behavior The [Bart paper (Sec 3.1)](https://arxiv.org/pdf/1910.13461.pdf) matches with what the code does, i.e. the last token is to be used as the classification token. ``` 3.1 Sequence Classification Tasks For sequence classification tasks, the same input is fed into the encoder and decoder, and the final hidden state of the final decoder token is fed into new multi-class linear classifier. This approach is related to the CLS token in BERT; however we add the additional token to the end so that representation for the token in the decoder can attend to decoder states from the complete input (Figure 3a). ``` Should I update the Bart doc with this method of classification?
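As a reading aid, a minimal sketch (dummy tensors; `eos_token_id=2` and right padding assumed) of the gather that the linked `modeling_bart.py` lines perform — the classification representation is the hidden state at the *last* `eos` token, matching the paper rather than a leading `cls` token:

```python
import torch

hidden_states = torch.randn(2, 7, 16)                  # (batch, seq_len, hidden), dummy
input_ids = torch.tensor([[0, 5, 6, 2, 1, 1, 1],       # 2 = eos_token_id, 1 = pad
                          [0, 8, 9, 10, 11, 12, 2]])
eos_mask = input_ids.eq(2)
sentence_repr = hidden_states[eos_mask, :].view(       # keep only the eos positions ...
    hidden_states.size(0), -1, hidden_states.size(-1)
)[:, -1, :]                                            # ... and take the last one
print(sentence_repr.shape)                             # torch.Size([2, 16])
```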
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18849/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18848/comments
https://api.github.com/repos/huggingface/transformers/issues/18848/events
https://github.com/huggingface/transformers/pull/18848
1,358,470,359
PR_kwDOCUB6oc4-MP2m
18,848
Fix minor typo in prose of model outputs documentation
{ "login": "pcuenca", "id": 1177582, "node_id": "MDQ6VXNlcjExNzc1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcuenca", "html_url": "https://github.com/pcuenca", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "repos_url": "https://api.github.com/users/pcuenca/repos", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
MEMBER
null
# What does this PR do? Fixes a very minor typo in the documentation of the model outputs section. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18848/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18848", "html_url": "https://github.com/huggingface/transformers/pull/18848", "diff_url": "https://github.com/huggingface/transformers/pull/18848.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18848.patch", "merged_at": 1662026740000 }
https://api.github.com/repos/huggingface/transformers/issues/18847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18847/comments
https://api.github.com/repos/huggingface/transformers/issues/18847/events
https://github.com/huggingface/transformers/issues/18847
1,358,430,543
I_kwDOCUB6oc5Q-AFP
18,847
Cannot import OPTForSequenceClassification on Kaggle notebooks (transformers 4.20.1, huggingface_hub 0.8.1)
{ "login": "navinelahi", "id": 74642469, "node_id": "MDQ6VXNlcjc0NjQyNDY5", "avatar_url": "https://avatars.githubusercontent.com/u/74642469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/navinelahi", "html_url": "https://github.com/navinelahi", "followers_url": "https://api.github.com/users/navinelahi/followers", "following_url": "https://api.github.com/users/navinelahi/following{/other_user}", "gists_url": "https://api.github.com/users/navinelahi/gists{/gist_id}", "starred_url": "https://api.github.com/users/navinelahi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/navinelahi/subscriptions", "organizations_url": "https://api.github.com/users/navinelahi/orgs", "repos_url": "https://api.github.com/users/navinelahi/repos", "events_url": "https://api.github.com/users/navinelahi/events{/privacy}", "received_events_url": "https://api.github.com/users/navinelahi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @navinelahi, this class was released in v4.21; could you upgrade your `transformers` library`?", "Yes I upgraded with `pip install transformers==4.21` and now it's working. Thank you so much. The issue is solved" ]
1,662
1,662
1,662
NONE
null
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.6.4 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No Below is the code snippet and the error it caused. It can import OPTForCausalLM but not OPTForSequenceClassification ``` from transformers import OPTForCausalLM from transformers import OPTForSequenceClassification, Trainer, TrainingArguments ``` **ImportError: cannot import name 'OPTForSequenceClassification' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)** Any pointers to what might be wrong? Thank you so much. ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import OPTForCausalLM from transformers import OPTForSequenceClassification, Trainer, TrainingArguments ### Expected behavior I would expect OPTForSequenceClassification to be imported normally.
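A hedged sketch of a guard that makes this failure mode explicit (per the comments above, `OPTForSequenceClassification` only shipped in transformers v4.21; the `packaging` dependency is assumed to be installed):

```python
import transformers
from packaging import version

# OPTForSequenceClassification was added in transformers v4.21
if version.parse(transformers.__version__) < version.parse("4.21.0"):
    raise ImportError(
        f"transformers=={transformers.__version__} is too old; "
        "run `pip install -U 'transformers>=4.21'`"
    )

from transformers import OPTForSequenceClassification  # noqa: E402
```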
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18847/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18846/comments
https://api.github.com/repos/huggingface/transformers/issues/18846/events
https://github.com/huggingface/transformers/pull/18846
1,358,255,067
PR_kwDOCUB6oc4-LiKU
18,846
Unpin fsspec
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662
1,662
1,662
MEMBER
null
# What does this PR do? The `fsspec` team has made a patch release (2022.8.2) to fix their issue: - fsspec/filesystem_spec#1032 They yanked both 2022.8.0 and 2022.8.1, so no need to pin to exclude them. Follows: - #18837
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18846/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18846", "html_url": "https://github.com/huggingface/transformers/pull/18846", "diff_url": "https://github.com/huggingface/transformers/pull/18846.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18846.patch", "merged_at": 1662020415000 }
https://api.github.com/repos/huggingface/transformers/issues/18845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18845/comments
https://api.github.com/repos/huggingface/transformers/issues/18845/events
https://github.com/huggingface/transformers/pull/18845
1,358,161,072
PR_kwDOCUB6oc4-LN-w
18,845
Remove dropout in embedding layer of OPT
{ "login": "shijie-wu", "id": 2987758, "node_id": "MDQ6VXNlcjI5ODc3NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shijie-wu", "html_url": "https://github.com/shijie-wu", "followers_url": "https://api.github.com/users/shijie-wu/followers", "following_url": "https://api.github.com/users/shijie-wu/following{/other_user}", "gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions", "organizations_url": "https://api.github.com/users/shijie-wu/orgs", "repos_url": "https://api.github.com/users/shijie-wu/repos", "events_url": "https://api.github.com/users/shijie-wu/events{/privacy}", "received_events_url": "https://api.github.com/users/shijie-wu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your PR @shijie-wu! Pinging @ArthurZucker and @younesbelkada to take a look at this as soon as they're back from leave (in about a week's time)." ]
1,661
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/18844 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18845", "html_url": "https://github.com/huggingface/transformers/pull/18845", "diff_url": "https://github.com/huggingface/transformers/pull/18845.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18845.patch", "merged_at": 1662993159000 }
https://api.github.com/repos/huggingface/transformers/issues/18844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18844/comments
https://api.github.com/repos/huggingface/transformers/issues/18844/events
https://github.com/huggingface/transformers/issues/18844
1,358,157,354
I_kwDOCUB6oc5Q89Yq
18,844
Dropout in OPT embedding layer
{ "login": "shijie-wu", "id": 2987758, "node_id": "MDQ6VXNlcjI5ODc3NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shijie-wu", "html_url": "https://github.com/shijie-wu", "followers_url": "https://api.github.com/users/shijie-wu/followers", "following_url": "https://api.github.com/users/shijie-wu/following{/other_user}", "gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions", "organizations_url": "https://api.github.com/users/shijie-wu/orgs", "repos_url": "https://api.github.com/users/shijie-wu/repos", "events_url": "https://api.github.com/users/shijie-wu/events{/privacy}", "received_events_url": "https://api.github.com/users/shijie-wu/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[]
1,661
1,662
1,662
CONTRIBUTOR
null
### System Info main ### Who can help? @ArthurZucker, @patrickvonplaten, @LysandreJik ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The [OPT paper](https://arxiv.org/pdf/2205.01068.pdf) (sec 2.2) mentioned > We use a dropout of 0.1 throughout, but we do not apply any dropout to embeddings. This is also supported by loading [the official checkpoints](https://github.com/facebookresearch/metaseq/tree/main/projects/OPT) and running the following check ```python import torch data = torch.load("reshard-model_part-0.pt") assert data['cfg']['model'].no_emb_dropout ``` However, in `modeling_opt.py`, dropout is applied to the embedding layer. https://github.com/huggingface/transformers/blob/80367cd1fb6d36ca6bdd99b70586aab4ffae1ae1/src/transformers/models/opt/modeling_opt.py#L641-L642 ### Expected behavior No dropout in the embedding layer
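A runnable sketch of the before/after behaviour the reported fix implies (variable names mirror `modeling_opt.py`, but the tensors here are dummies; this is an illustration, not the actual patch):

```python
import torch
from torch import nn

dropout_p, training = 0.1, True
inputs_embeds = torch.randn(2, 5, 8)   # dummy token embeddings
pos_embeds = torch.randn(2, 5, 8)      # dummy learned position embeddings

# before the fix: dropout was applied right after the embedding sum
before = nn.functional.dropout(inputs_embeds + pos_embeds, p=dropout_p, training=training)

# after the fix: the embedding output flows through untouched (no_emb_dropout)
after = inputs_embeds + pos_embeds
print(before.shape, after.shape)       # both torch.Size([2, 5, 8])
```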
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18844/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18843/comments
https://api.github.com/repos/huggingface/transformers/issues/18843/events
https://github.com/huggingface/transformers/pull/18843
1,358,142,760
PR_kwDOCUB6oc4-LKEN
18,843
fix arg name in BLOOM testing and remove unused arg document
{ "login": "shijie-wu", "id": 2987758, "node_id": "MDQ6VXNlcjI5ODc3NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shijie-wu", "html_url": "https://github.com/shijie-wu", "followers_url": "https://api.github.com/users/shijie-wu/followers", "following_url": "https://api.github.com/users/shijie-wu/following{/other_user}", "gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions", "organizations_url": "https://api.github.com/users/shijie-wu/orgs", "repos_url": "https://api.github.com/users/shijie-wu/repos", "events_url": "https://api.github.com/users/shijie-wu/events{/privacy}", "received_events_url": "https://api.github.com/users/shijie-wu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for the fix, @shijie-wu \r\n\r\nWrt `use_cache` won't it be better to actually set the default to `True` - most models have it set to `True` and most users will get bad peformance out of the box with the default during `generate` with it being `False`.\r\n\r\nI'm aware that the cat is out of the box, but there have been a lot of tweaks to the model post its release so perhaps such change would still be in the grace period of backward compatibility. ", "actually, looking at `generate` it says it defaults to `True`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/generation_utils.py#L1037-L1039\r\n\r\nbut it doesn't appear to be so, as the default it uses is `None`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/generation_utils.py#L919\r\n\r\nbut since most models have `use_cache == True` it's sometimes true.\r\n\r\ntagging @patrickvonplaten - should the `generate` doc be corrected to say that the default is `None` to match the actual code? and say that unless set explicitly the model's config's `use_cache` setting is used?", "I missed that `use_cache = True` for most models by default. If that's the case, I could update the default instead and revert the change to the doc.", "So it's pretty clear it's most likely a unintended default as 1/3rd but 2 models have it set to `True` and 2/3 of models don't have it set at all:\r\n\r\n```\r\n$ grep -Ir 'use_cache=True' src/transformers/models/*/config* | wc -l\r\n45\r\n$ grep -Ir 'use_cache=False' src/transformers/models/*/config* | wc -l\r\n2\r\n$ grep -Ir 'use_cache=False' src/transformers/models/*/config*\r\nsrc/transformers/models/bloom/configuration_bloom.py: use_cache=False,\r\nsrc/transformers/models/trocr/configuration_trocr.py: use_cache=False,\r\n```\r\n\r\nso `generate's default was relying on all the models having it set to `True` which as we can see isn't always the case.\r\n\r\nthough I think my regex missed many models, let me check where the missing 100+ are.\r\n\r\nedit: the remaining ones don't have `use_cache` set in the `configuration_foo.py` files.", "~If all models set `use_cache=True` shouldn't `.generate` also have `use_cache=True` by default?~\r\n\r\nedit: based on new info, this is no longer a question. I am happy to fix default for `bloom` and `trocr`.", "Please let's wait for others to chime in. Perhaps setting it to `False` was not an omission but an intentional move. \r\n\r\nalso tagging @younesbelkada ", "The `use_cache` interior mechanisms are indeed a bit hard to understand. Think we haven't done a good job here. Thanks for pointing this out. \r\n\r\nIMO, `use_cache` should be set to `True` for all models and IMO it was probably a mistake to not set it to `True` for BLOOM.\r\nOr is there a maybe a reason behind it (cc @thomasw21 @younesbelkada ?). If `use_cache=False` it means that no tensors containing past generated keys and values are moved around - was this maybe done on purpose given the size of the model? More specifically, by setting it to False we save `num_layers` x `hidden_size` x `2` x `num_previous_generated_tokens` memory.\r\n\r\n Regarding the generate docs we decided with @gante to write the defaults to how the default to depending on what's set in the config, so I think the docstring is ok/correct there. 
Also note that `generate` will soon go through a major refactor regarding the configuration.\r\n\r\nRegarding whether we should change the default to True - I actually don't know really. If it would have been for a small model I would have said definitely yes, but for BLOOM which requires multi-gpu for inference I'm not 100% sure how much the memory consumption goes up when enabling it and running generate with `accelerate` - could we try it out ? cc @younesbelkada \r\n", "Thank you for this great feedback, Patrick.\r\n\r\nIn addition as I have just discovered there are many models that don't set `use_cache` in their config at all. And I don't think there is a super-class default that is inherited. Perhaps there should be one?\r\n\r\nAlso it appears that `use_cache` is often a tri-state:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4157e3cd7e2bb5a7be6dc065a3e20c49cc1300ab/src/transformers/models/t5/modeling_t5.py#L918\r\n\r\nso perhaps `generate` could reflect that - or if things change with refactor then I trust the new way will take care of this.\r\n\r\nwrt memory usage in Bloom - recalculating say 100 tokens for each new token would be quite slow. So it's a question of whether `use_cache=False` + large BS gives better throughput than `use_cache=True`+small BS using the same amount of memory - probably can measure empirically? though the outcome would be highly hardware dependent.", "@LysandreJik, could we please make a note in the next release that \r\n\r\n* BLOOM's `use_cache`'s original default value was incorrectly set to `False` and this release it has been corrected to `use_cache=True` - as this is somewhat backward compatibility breakage. and we hope our users will forgive us as this is still a new model that is going through minor fixes. `use_cache=True` leads to a much faster generation at the cost of larger memory requirements to store the cached values.\r\n\r\nThank you!", "@stas00 I was thinking that we should probably also change it for tcocr model for consistency with other models on the library as suggested by @shijie-wu ?", "I thought so too, but perhaps let's check in with the porter of that model to see if perhaps it was by design?\r\n\r\nWould you like to do that? and perhaps in a separate PR so that it's loud and clear in the history of the project? with reference to this PR for context.\r\n\r\nThank you!", "Sure, happy to take care of that! \nWill tag also the porter of the model and double check \nThanks again!" ]
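A quick, hedged way to reproduce the observation above without grepping the source — ask each checkpoint's config for its resolved `use_cache` (checkpoint names are illustrative and require network access):

```python
from transformers import AutoConfig

for ckpt in ("bigscience/bloom-560m", "gpt2"):
    cfg = AutoConfig.from_pretrained(ckpt)
    print(ckpt, getattr(cfg, "use_cache", "not set"))
```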
1,661
1,663
1,663
CONTRIBUTOR
null
# What does this PR do? * Fix argument name in BLOOM testing (`hidden_dropout` and `attention_dropout`) * Remove documentation of unused arguments (`skip_bias_add`, `skip_bias_add_qkv` and `attn_pdrop`) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @thomasw21 @stas00 @TevenLeScao
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18843/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18843", "html_url": "https://github.com/huggingface/transformers/pull/18843", "diff_url": "https://github.com/huggingface/transformers/pull/18843.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18843.patch", "merged_at": 1663266333000 }
https://api.github.com/repos/huggingface/transformers/issues/18842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18842/comments
https://api.github.com/repos/huggingface/transformers/issues/18842/events
https://github.com/huggingface/transformers/pull/18842
1,358,130,154
PR_kwDOCUB6oc4-LHbb
18,842
Remove unused `activation_dropout` in OPT
{ "login": "shijie-wu", "id": 2987758, "node_id": "MDQ6VXNlcjI5ODc3NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shijie-wu", "html_url": "https://github.com/shijie-wu", "followers_url": "https://api.github.com/users/shijie-wu/followers", "following_url": "https://api.github.com/users/shijie-wu/following{/other_user}", "gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions", "organizations_url": "https://api.github.com/users/shijie-wu/orgs", "repos_url": "https://api.github.com/users/shijie-wu/repos", "events_url": "https://api.github.com/users/shijie-wu/events{/privacy}", "received_events_url": "https://api.github.com/users/shijie-wu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ArthurZucker I am not sure who is the owner of the models repo but the following PRs also need to be merged.\r\n\r\n> \r\n> https://huggingface.co/facebook/opt-125m/discussions/19\r\n> https://huggingface.co/facebook/opt-350m/discussions/5\r\n> https://huggingface.co/facebook/opt-1.3b/discussions/7\r\n> https://huggingface.co/facebook/opt-2.7b/discussions/5\r\n> https://huggingface.co/facebook/opt-6.7b/discussions/8\r\n> https://huggingface.co/facebook/opt-13b/discussions/7\r\n> https://huggingface.co/facebook/opt-30b/discussions/8\r\n> https://huggingface.co/facebook/opt-66b/discussions/7\r\n> ", "Hey, thanks for notifying, I will take care of it 😃 " ]
1,661
1,663
1,662
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/18309 PR for corresponding models * https://huggingface.co/facebook/opt-125m/discussions/19 * https://huggingface.co/facebook/opt-350m/discussions/5 * https://huggingface.co/facebook/opt-1.3b/discussions/7 * https://huggingface.co/facebook/opt-2.7b/discussions/5 * https://huggingface.co/facebook/opt-6.7b/discussions/8 * https://huggingface.co/facebook/opt-13b/discussions/7 * https://huggingface.co/facebook/opt-30b/discussions/8 * https://huggingface.co/facebook/opt-66b/discussions/7 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18842/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18842", "html_url": "https://github.com/huggingface/transformers/pull/18842", "diff_url": "https://github.com/huggingface/transformers/pull/18842.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18842.patch", "merged_at": 1662973225000 }
https://api.github.com/repos/huggingface/transformers/issues/18840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18840/comments
https://api.github.com/repos/huggingface/transformers/issues/18840/events
https://github.com/huggingface/transformers/pull/18840
1,357,823,191
PR_kwDOCUB6oc4-KC9f
18,840
Generate: smaller TF serving test
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,662
1,662
MEMBER
null
# What does this PR do? `tests/test_modeling_tf_common.py::UtilsFunctionsTest::test_generate_tf_function_export` is often failing because it times out (>60s). The previous version took ~35s on my machine. This PR's version takes ~19s, which may avoid the timeout issue. If it still fails, a custom config must be added :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18840", "html_url": "https://github.com/huggingface/transformers/pull/18840", "diff_url": "https://github.com/huggingface/transformers/pull/18840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18840.patch", "merged_at": 1662026019000 }
https://api.github.com/repos/huggingface/transformers/issues/18839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18839/comments
https://api.github.com/repos/huggingface/transformers/issues/18839/events
https://github.com/huggingface/transformers/issues/18839
1,357,599,096
I_kwDOCUB6oc5Q61F4
18,839
BUG for beam_indices from model.generate()
{ "login": "Hannibal046", "id": 38466901, "node_id": "MDQ6VXNlcjM4NDY2OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hannibal046", "html_url": "https://github.com/Hannibal046", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "repos_url": "https://api.github.com/users/Hannibal046/repos", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Also, could you please check this ? https://discuss.huggingface.co/t/larger-sum-logits-larger-sum-probability/22358", "Also cc @gante for `generate` :)", "Hey @Hannibal046 👋 Thank you so much for raising this issue, there was indeed a problem with the last beam index for all batches except the first one!\r\n\r\nCheck #18851 for the fix and for snippets that confirm the correctness after the fix 🤗 \r\n\r\nRegarding the forum issue -- it seems like a relevant problem. Could you please open an issue here on GitHub? ❤️ " ]
1,661
1,662
1,662
NONE
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import BartTokenizer,BartForConditionalGeneration model_path = "/data/pretrained_model/bart_base" toker = BartTokenizer.from_pretrained(model_path) model = BartForConditionalGeneration.from_pretrained(model_path) input_tokens = ["what do you think it ? huggingface is a great library. And I enjoy it very much", "transformers is so good"] batch_size = 2 num_beams = 10 max_length = 10 num_return_sequences = 5 input_ids = toker(input_tokens,return_tensors='pt',padding=True).input_ids output=model.generate(input_ids,max_length=max_length,\ num_beams=num_beams,num_return_sequences=num_return_sequences,\ return_dict_in_generate=True,output_scores=True) print(output.beam_indices) ``` ![image](https://user-images.githubusercontent.com/38466901/187733097-195fda80-3b1f-4b59-898f-e2eacf10729d.png) ![image](https://user-images.githubusercontent.com/38466901/187734309-9fde1b06-3172-4730-97d6-42e953cbffc9.png) ### Expected behavior It is very odd that the `beam_indices` of the second batch contain indices pointing into the first 10 beams. If we compute the average logits across the sentence according to these `beam_indices`, we don't recover `output.sequences_scores`. So I think 10 (`num_beams`) should be added to the numbers in the red box of the first picture; if we add 10, we get the correct tokens to be generated in `output.sequences[5]`, as shown in the second picture.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18839/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18838/comments
https://api.github.com/repos/huggingface/transformers/issues/18838/events
https://github.com/huggingface/transformers/pull/18838
1,357,588,633
PR_kwDOCUB6oc4-JQsZ
18,838
Skip XNLI test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Preceded by https://github.com/huggingface/transformers/pull/18837", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18838). All of your documentation changes will be reflected on that endpoint." ]
1,661
1,661
1,661
MEMBER
null
Skips an XNLI test that currently fails due to https://github.com/fsspec/filesystem_spec/issues/1034
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18838/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18838", "html_url": "https://github.com/huggingface/transformers/pull/18838", "diff_url": "https://github.com/huggingface/transformers/pull/18838.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18838.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18837/comments
https://api.github.com/repos/huggingface/transformers/issues/18837/events
https://github.com/huggingface/transformers/pull/18837
1,357,586,551
PR_kwDOCUB6oc4-JQQH
18,837
Pin fsspec
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
COLLABORATOR
null
# What does this PR do? The recent test failures on XNLI are due to a release of fsspec. This PR excludes the problematic version to avoid the test failures.
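For reference, a hypothetical PEP 440 spec matching the PR description (the real entry lives in `setup.py`; the exact pin may differ):

```python
# illustrative dependency spec excluding the broken fsspec releases
deps = ["fsspec!=2022.8.0,!=2022.8.1"]
```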
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18837/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18837", "html_url": "https://github.com/huggingface/transformers/pull/18837", "diff_url": "https://github.com/huggingface/transformers/pull/18837.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18837.patch", "merged_at": 1661965444000 }
https://api.github.com/repos/huggingface/transformers/issues/18836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18836/comments
https://api.github.com/repos/huggingface/transformers/issues/18836/events
https://github.com/huggingface/transformers/issues/18836
1,357,391,509
I_kwDOCUB6oc5Q6CaV
18,836
Input_embeds for Albert
{ "login": "lsyysl9711", "id": 71112568, "node_id": "MDQ6VXNlcjcxMTEyNTY4", "avatar_url": "https://avatars.githubusercontent.com/u/71112568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lsyysl9711", "html_url": "https://github.com/lsyysl9711", "followers_url": "https://api.github.com/users/lsyysl9711/followers", "following_url": "https://api.github.com/users/lsyysl9711/following{/other_user}", "gists_url": "https://api.github.com/users/lsyysl9711/gists{/gist_id}", "starred_url": "https://api.github.com/users/lsyysl9711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsyysl9711/subscriptions", "organizations_url": "https://api.github.com/users/lsyysl9711/orgs", "repos_url": "https://api.github.com/users/lsyysl9711/repos", "events_url": "https://api.github.com/users/lsyysl9711/events{/privacy}", "received_events_url": "https://api.github.com/users/lsyysl9711/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
NONE
null
Hello! I am checking the source code for Albert's inputs_embeds, and I think the open-sourced code does not match the documentation: in the documentation, the description for inputs_embeds says that it should be of shape (batch_size, sequence_length, hidden_size). But in the source code (https://huggingface.co/transformers/v3.4.0/_modules/transformers/modeling_albert.html) we have the following: self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size) position_embeddings = self.position_embeddings(position_ids) token_type_embeddings = self.token_type_embeddings(token_type_ids) embeddings = inputs_embeds + position_embeddings + token_type_embeddings So in order for the final addition to be valid, inputs_embeds must have the same shape as the other two embeddings. But the default embedding size is 128 while the default hidden size is 768 ("albert-base-v2"). So if we use the shape described in the documentation, an error is raised.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18836/timeline
completed
null
null
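For context on issue 18836 above, here is a minimal sketch (not part of the original report) that reproduces the shape mismatch it describes; the model name and sizes come from the report, while the random tensors and variable names are illustrative only:

```python
# Hedged sketch: Albert's inputs_embeds must match config.embedding_size
# (128 for albert-base-v2), not config.hidden_size (768) as the docs state.
import torch
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-base-v2")
batch_size, seq_len = 2, 8

# Documented shape (batch, seq_len, hidden_size=768): fails, because it is
# summed with position/token-type embeddings of size embedding_size=128.
try:
    bad = torch.randn(batch_size, seq_len, model.config.hidden_size)
    model(inputs_embeds=bad)
except RuntimeError as err:
    print("hidden_size-shaped inputs_embeds fails:", err)

# Shape matching the source code (batch, seq_len, embedding_size=128): works.
good = torch.randn(batch_size, seq_len, model.config.embedding_size)
outputs = model(inputs_embeds=good)
print(outputs.last_hidden_state.shape)  # torch.Size([2, 8, 768])
```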
https://api.github.com/repos/huggingface/transformers/issues/18835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18835/comments
https://api.github.com/repos/huggingface/transformers/issues/18835/events
https://github.com/huggingface/transformers/issues/18835
1,357,337,280
I_kwDOCUB6oc5Q51LA
18,835
Adding multiprocessing option to transformers.pipelines.automatic_speech_recognition
{ "login": "mehrzadai", "id": 90762060, "node_id": "MDQ6VXNlcjkwNzYyMDYw", "avatar_url": "https://avatars.githubusercontent.com/u/90762060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mehrzadai", "html_url": "https://github.com/mehrzadai", "followers_url": "https://api.github.com/users/mehrzadai/followers", "following_url": "https://api.github.com/users/mehrzadai/following{/other_user}", "gists_url": "https://api.github.com/users/mehrzadai/gists{/gist_id}", "starred_url": "https://api.github.com/users/mehrzadai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mehrzadai/subscriptions", "organizations_url": "https://api.github.com/users/mehrzadai/orgs", "repos_url": "https://api.github.com/users/mehrzadai/repos", "events_url": "https://api.github.com/users/mehrzadai/events{/privacy}", "received_events_url": "https://api.github.com/users/mehrzadai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "WDYT @Narsil?", "Hi, I don't mind having a parameter for that for sure.\r\n\r\nThe biggest reason I don't think it should be the defaults is that some users might already be using different processes for different pipelines so doing parallelism twice is usually hurtful.\r\n\r\nAlso, do you mind providing small benchmarks to see the performance improvement ?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
NONE
null
### Feature request Hi, in `transformers.pipelines.automatic_speech_recognition`, in case of `self.type = ctc_with_lm`, the `postprocess` method uses `self.decoder.decode_beams(items)`. This is too slow, since it decodes items one at a time. `decoder.decode_beams_batch(pool, items)` can make things faster and parallel. ### Motivation `transformers.pipelines.automatic_speech_recognition` runs really slowly in complex `ctc_with_lm` scenarios. ### Your contribution `None`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18835/timeline
completed
null
null
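The feature request above hinges on pyctcdecode's batch API; a rough sketch of what the requested path could look like follows. `decoder` and `logits_list` are assumptions (an already-built `BeamSearchDecoderCTC` and a list of per-utterance logit matrices), not names from the pipeline code:

```python
# Hedged sketch of parallel CTC decoding with pyctcdecode's batch API.
from multiprocessing import get_context

def decode_batch(decoder, logits_list, workers=4):
    """decoder: a built pyctcdecode BeamSearchDecoderCTC (assumption);
    logits_list: list of (time, vocab) numpy arrays, one per utterance."""
    with get_context("fork").Pool(processes=workers) as pool:
        # decode_beams_batch fans the items out over the worker pool,
        # instead of calling decode_beams(...) once per item.
        beams_per_item = decoder.decode_beams_batch(pool, logits_list)
    # Beams come back sorted best-first; element 0 of each beam is the text.
    return [beams[0][0] for beams in beams_per_item]
```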
https://api.github.com/repos/huggingface/transformers/issues/18834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18834/comments
https://api.github.com/repos/huggingface/transformers/issues/18834/events
https://github.com/huggingface/transformers/pull/18834
1,357,308,977
PR_kwDOCUB6oc4-IUm_
18,834
Fix add model like
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18834). All of your documentation changes will be reflected on that endpoint." ]
1,661
1,661
1,661
MEMBER
null
It seems that `pip show` does not return the same location as before for editable packages. However, listing editable packages (`pip list -e`) returns the correct location in which it was installed. Doing it locally returns the following: ``` Package Version Editable project location ------------ ----------- --------------------------------------------- transformers 4.22.0.dev0 /home/lysandre/Workspaces/python/transformers ``` I'm therefore updating the way to identify if we're looking at the correct install to use `pip list -e` instead of `pip show`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18834", "html_url": "https://github.com/huggingface/transformers/pull/18834", "diff_url": "https://github.com/huggingface/transformers/pull/18834.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18834.patch", "merged_at": 1661951426000 }
https://api.github.com/repos/huggingface/transformers/issues/18833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18833/comments
https://api.github.com/repos/huggingface/transformers/issues/18833/events
https://github.com/huggingface/transformers/pull/18833
1,357,284,504
PR_kwDOCUB6oc4-IPZZ
18,833
TF: TFMarianMTModel final logits bias as a layer
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@gante Thanks a lot. It looks like it works well!\r\n\r\nHowever, there is one thing I don't understand quite well.\r\n\r\n\r\n```bash\r\n(Pdb) [x.name for x in model.non_trainable_weights]\r\n['final_logits_bias:0']\r\n```\r\nand this is good as it makes loading correctly. But I was thinking I will see `['final_logits_bias.final_logits_bias:0']`, as you pass the name to the layer as well as in `add_weight`.\r\n\r\nIs it true that when we use `add_weight` inside a layer, that layer name won't appear in the variable name for that weight?\r\n\r\n(I set a breakpoint at in `src/transformers/modeling_tf_utils.py` at line 847)", "@ydshieh hah, I had the same question but I tried, it worked, and I forgot to dig deeper to understand why :D \r\n\r\nAfter some digging, I found that it is poorly documented -- variables created with `.add_weight` are set without any name scope, i.e. their name consists of the name set in `name`. This is opposed to the weights from layers, such as `tf.keras.layers.Dense`, that automatically get a scoped name according to the `name` of the layers (e.g. `foo/bar/weights:0`).\r\n\r\nThis implies that initializing `BiasLayer` with a `name` has no effect whatsoever regarding weight storing/loading. If we wanted the weights to have a scoped name (we don't here), we could either hardcode it in `name` ([example](https://github.com/huggingface/transformers/blob/811c4c9f79758235762b4f70ffae00deae494fb1/src/transformers/models/albert/modeling_tf_albert.py#L493)) or use `tf.name_scope` ([example](https://github.com/huggingface/transformers/blob/811c4c9f79758235762b4f70ffae00deae494fb1/src/transformers/models/albert/modeling_tf_albert.py#L150)).\r\n\r\nI'm adding a link to this comment in the code, for future reference.", "Thanks a lot @gante , you are the best!" ]
1,661
1,662
1,662
MEMBER
null
# What does this PR do? Fixes #18802 As stated in the issue above, `final_logits_bias` in `TFMarianMTModel` are not being loaded at `from_pretrained(...)` time. The PT model has this variable defined, and thus the outputs of the model in the two frameworks are very different (>1e-1). Actually, these weights are also not being stored when the TF version is saved, for the same reason -- only layers are stored/loaded with the functions we are using (`.save_weights` and `.load_weights`), and this bias weight is not inside a layer. As a solution, this PR moves the bias to a layer and creates an alias for it, resulting in no interface changes. After this change, the models from `Helsinki-NLP` can be converted with the `pt-to-tf` CLI, passing all the quality checks. ⚠️ Other models have this pattern, so I will apply the change to them in a separate PR if this one gets approved.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18833", "html_url": "https://github.com/huggingface/transformers/pull/18833", "diff_url": "https://github.com/huggingface/transformers/pull/18833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18833.patch", "merged_at": 1662366027000 }
https://api.github.com/repos/huggingface/transformers/issues/18832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18832/comments
https://api.github.com/repos/huggingface/transformers/issues/18832/events
https://github.com/huggingface/transformers/pull/18832
1,357,260,009
PR_kwDOCUB6oc4-IKHo
18,832
Delete `state_dict` to release memory as early as possible
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The change regarding having a new argument `state_dict` in the nested function `load` is to pass black check, otherwise we get \r\n\r\n```bash\r\nsrc/transformers/modeling_utils.py:422:17: F821 undefined name 'state_dict'\r\n```\r\nwith the new line `del state_dict`. (It's quite strange though)", "Ready for review.", "The failing test is `test_encodings_from_xnli_dataset` which is irrelevant to this PR." ]
1,661
1,662
1,662
COLLABORATOR
null
# What does this PR do? Fix #18782. Note that this is not a real memory issue. A call to `gc.collect()` at the end of `from_pretrained()` works well too. However, this PR simply adds `del state_dict` at the end of `_load_state_dict_into_model()`, so the `GC` is able to perform housekeeping on its own at an earlier time.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18832/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18832", "html_url": "https://github.com/huggingface/transformers/pull/18832", "diff_url": "https://github.com/huggingface/transformers/pull/18832.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18832.patch", "merged_at": 1662022530000 }
https://api.github.com/repos/huggingface/transformers/issues/18831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18831/comments
https://api.github.com/repos/huggingface/transformers/issues/18831/events
https://github.com/huggingface/transformers/issues/18831
1,357,140,460
I_kwDOCUB6oc5Q5FHs
18,831
Add support for open_clip
{ "login": "apolinario", "id": 788417, "node_id": "MDQ6VXNlcjc4ODQxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/788417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apolinario", "html_url": "https://github.com/apolinario", "followers_url": "https://api.github.com/users/apolinario/followers", "following_url": "https://api.github.com/users/apolinario/following{/other_user}", "gists_url": "https://api.github.com/users/apolinario/gists{/gist_id}", "starred_url": "https://api.github.com/users/apolinario/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apolinario/subscriptions", "organizations_url": "https://api.github.com/users/apolinario/orgs", "repos_url": "https://api.github.com/users/apolinario/repos", "events_url": "https://api.github.com/users/apolinario/events{/privacy}", "received_events_url": "https://api.github.com/users/apolinario/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "(Is the \"new model\" tag adequate here or it would be considered adapting to the existing CLIP model?) ", "cc'ing @patil-suraj. From the README, it seems that they replaced OpenAI's `quickgelu` with `torch.nn.GELU`, which is apparently better.\r\n\r\nNormally the strategy is to add a new model, no matter how small the changes to an existing model.", "@apolinario @NielsRogge \r\n\r\n@rom1504 and were discussing moving OpenCLIP LAION trained weights to the HF hub under the LAION org https://huggingface.co/laion ... for enhanced visibility. It'd be nice if the PyTorch model.bin could be shared with the OpenCLIP use (so remap if the HF transformers keys are a bit different). \r\n\r\nI can't comment on the 'add a new model bit', seems unecessary for adding a changeable activation but not a big deal either way. However, there will may be some (small) architecture additions in future LAION + OpenCLIP model releases so requiring yet another model for those would be, well a bit exessive. They will all be done in a manner that can be dynamically enabled/disabled without breaking backwards weight compat.\r\n\r\nRomain and I are currently training larger models on LAION-2B (english). I'm using remainder of JUWELS research grant for an L/14, Romain is working via Stability on H/14 and possibly g/14. We've both run into stability problems at this data + model scale, hence the 'might need arch' additions. The checkpoints for H/14 and g/14 will be 3.7G and 5G respectively, so one model hub instance per model would also avoid some waste here :) \r\n\r\nAlso, since OpenCLIP is a relatively small and unknown project, it would be nice to keep some pointers and links back there for people looking to train from scratch and/or fine-tune the models.\r\n\r\nEDIT re the 'visibility', and LAION org, we've been working on this paper and will likely do a splash once the next revision is out and we get some more results https://openreview.net/forum?id=M3Y74vmsMcY\r\n", "Hey @rwightman, excited to hear you'd like to contribute Open CLIP to `transformers`!\r\n\r\nThe implementation of `CLIP` is done using the `ACT2FN` activation function dictionary: https://github.com/huggingface/transformers/blob/a26c752353a127ba8e4728413806f545718a8d78/src/transformers/models/clip/modeling_clip.py#L281\r\n\r\nIf this is the only change necessary, then it should be loadable directly in the existing architecture by specifying the appopriate `hidden_act` configuration option.\r\n\r\nDo you have in mind what other changes might be needed down the road for the support of additional checkpoints? I would be personally be open to having an `OpenCLIP` model archigtecture which could be the host for current checkpoints and upcoming checkpoints, even while unaware of the changes that might need to be done in the future (therefore with modeling code that would be a bit more dynamic than others), but I'm pinging @patrickvonplaten and @sgugger for their opinion as well.", "Agreed with @LysandreJik . The change of activation function does not require a new model by itself (since you can set the right one in the config) but if you anticipate other modeling tweaks, a new architecture definitely makes sense.", "I was hoping to have transformers CLIP, OpenCLIP, and timm (for vision backbone finetune) all use the same hub weights ... 
ie something like,\r\n* `https://huggingface.co/laion/vit_base_patch32_laion400m`\r\n* `https://huggingface.co/laion/vit_base_patch32_laion2b`\r\netc\r\n\r\nFor `timm`, I'll just remux the weight into timm vit style on the fly. I was thinking the weights would natively match their source (ie OpenCLIP, which is just OpenAI model names w/o any jit / typing mess). Is there any precedent for doing remap of a pytorch bin file on the fly in transformers or are hub weights always native without any on the fly conversion?\r\n\r\n", "No, there is no on-the-fly conversion in Transformers. The state dict is loaded as is.", "Closing as support has been added, see e.g. https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K", "> cc'ing @patil-suraj. From the README, it seems that they replaced OpenAI's quickgelu with torch.nn.GELU, which is apparently better.\r\n> \r\n> Normally the strategy is to add a new model, no matter how small the changes to an existing model.\r\n\r\nShould the default `hidden_act` be `gelu` then in `CLIPConfig`?", "@fxmarty changing the default activation in the config would be way too breaking though :-)", "Hi! We have a similar issue; we want to bring a CLIP model we fine-tuned using Open AI's CLIP [implementation](https://github.com/openai/CLIP/blob/main/clip/model.py) to the Hub. As far as I understand, the two implementations do not match 1-to-1, so... is there any public script to readapt the weights?\r\n\r\nI am asking here since it seems related. Let me know if it's better to open a new issue.\r\n\r\nP.S. In the process of analysis of the two models, we also noticed that our model, as well as Open AI's weights (e.g., on [azure](https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt)), weigh 300MB while HF's checkpoints weigh 600MB. Do you know why?", "@g8a9 checkpoint size differences are likely either due to train checkpoint (ie incl optimizer state) vs state dict only, or one is float32 and the other is float16 (since it's exactly 2x I'm guessing the latter).\r\n\r\nOriginal OpenAI -> Transformers conversion code is in transformers https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py ... it's error prone though, so be careful, the params are just being copied so if you have any mismatch in sizing it will fail silently.\r\n\r\nI have a modified conversion script as a gist that I used to convert OpenCLIP models to Transformers (the ViT OpenCLIP models w/ standard text tower match OpenAI checkpoint naming). It uses copy_ so you get an error if param sizes don't match, but it was hacked together so I manually plugged each model config in.\r\n\r\nhttps://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990\r\n\r\n", "> Original OpenAI -> Transformers conversion code is in transformers https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py ... it's error prone though, so be careful, the params are just being copied so if you have any mismatch in sizing it will fail silently.\r\n\r\nInteresting, I usually use `model.load_state_dict` with the default `strict=True` to make sure any missing or unexpected keys as well as size mismatches are caught when porting models. 
I can add a similar script to convert OpenCLIP models to Transformers if you want.", "> I have a modified conversion script as a gist that I used to convert OpenCLIP models to Transformers (the ViT OpenCLIP models w/ standard text tower match OpenAI checkpoint naming). It uses copy_ so you get an error if param sizes don't match, but it was hacked together so I manually plugged each model config in.\r\n> \r\n> https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990\r\n\r\nThanks @rwightman ! We'll start tweaking from here. " ]
1,661
1,671
1,666
NONE
null
### Feature request Add `open_clip` (https://github.com/mlfoundations/open_clip) support to Transformers ### Motivation open_clip has released ViT-B-32, ViT-B/16, ViT-B/16+, and ViT-L/14 trained on LAION-400M and LAION-2B, which are very relevant models - matching and sometimes surpassing OAI models' benchmarks - but are not yet compatible with Transformers. Also, [soon a ViT-H is going to drop](https://twitter.com/EMostaque/status/1558851591469400066) which will be the SOTA open-source CLIP (since OAI never open sourced their ViT-H used to train DALL-E 2) - making it even more relevant to support OpenCLIP models and code cc @rwightman
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18831/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18831/timeline
completed
null
null
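As the closing comments above note, the converted OpenCLIP checkpoints ended up loading through the stock CLIP classes. A quick hedged sketch (the checkpoint name is the one cited in the thread; the activation remark follows the `ACT2FN` discussion):

```python
# Sketch: load a LAION OpenCLIP conversion with the standard CLIP classes.
from transformers import CLIPModel, CLIPProcessor

ckpt = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"  # cited in the thread above
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

# The main architectural delta discussed above is the activation: OpenCLIP
# uses plain GELU rather than OpenAI's "quick_gelu", selected via the config.
print(model.config.text_config.hidden_act)
```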
https://api.github.com/repos/huggingface/transformers/issues/18830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18830/comments
https://api.github.com/repos/huggingface/transformers/issues/18830/events
https://github.com/huggingface/transformers/issues/18830
1,356,988,567
I_kwDOCUB6oc5Q4gCX
18,830
ValueError: Unknown layer: Custom>TFViTMainLayer when using a Google transformer model in Streamlit
{ "login": "bluetail14", "id": 47062263, "node_id": "MDQ6VXNlcjQ3MDYyMjYz", "avatar_url": "https://avatars.githubusercontent.com/u/47062263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bluetail14", "html_url": "https://github.com/bluetail14", "followers_url": "https://api.github.com/users/bluetail14/followers", "following_url": "https://api.github.com/users/bluetail14/following{/other_user}", "gists_url": "https://api.github.com/users/bluetail14/gists{/gist_id}", "starred_url": "https://api.github.com/users/bluetail14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bluetail14/subscriptions", "organizations_url": "https://api.github.com/users/bluetail14/orgs", "repos_url": "https://api.github.com/users/bluetail14/repos", "events_url": "https://api.github.com/users/bluetail14/events{/privacy}", "received_events_url": "https://api.github.com/users/bluetail14/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "here is the link to model_vit.h5 https://drive.google.com/file/d/1ASXJ6-QVxV7W-rVUV57pUy5sYK1BokZ4/view?usp=sharing", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "try this \r\n\r\nfrom transformers import TFViTModel\r\n\r\n\r\ncustom_objects = {'TFViTMainLayer': TFViTModel}\r\n\r\nwith custom_object_scope(custom_objects):\r\n # Load your models\r\n vit_model = tf.keras.models.load_model(vit_model_path)\r\n\r\n\r\n\r\n\r\n# this solved my issue ,it turns out this is syntax that we need to follow to register our custom layer used" ]
1,661
1,691
1,665
NONE
null
### System Info transformers == 4.21.1 tensorflow == 2.9.1 streamlit ==1.11.1 Windows 10 ### Who can help? @Rocketknight1 @NielsRogge ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import streamlit as st import numpy as np from PIL import Image import tensorflow as tf st.title("Binary Human Detection Web App") st.markdown("Is there a human in office space? 🧍") ## Initialize tensorflow model (This can be loaded before anything else) path_to_model = "C:/Users/myname/Jupiter_Notebooks/Dataset_Thermal_Project/Camera_videos/Saved_models/model_vit.h5" model_loader = tf.keras.models.load_model(path_to_model) model_vit = tf.keras.models.Model(model_loader.inputs, model_loader.outputs) ## Preprocess images def preprocessImage(photo): resize_photo = photo.resize((224,224)) normalized_photo = np.array(resize_photo)/255 # a normalised 2D array reshaped_photo = normalized_photo.reshape(-1, 224, 224, 3) # to shape as (1, 224, 224, 3) return reshaped_photo uploaded_file = st.sidebar.file_uploader(" ",type=['jpg', 'jpeg']) if uploaded_file is not None: ## Use a context manager to make sure to close the file!! with Image.open(uploaded_file) as photo: tensorflow_image = preprocessImage(photo) ## Show preprocessed image streamlit_widget_image = st.image(tensorflow_image, 'Uploaded Image', use_column_width=True) ## Do prediction if st.sidebar.button("Click Here to Predict"): if uploaded_file is None: st.sidebar.write("Please upload an Image to Classify") else: ## Pass the preprocessed image to the vit model (not the streamlit widget) pred_label = model_vit.predict(tensorflow_image)[0] ## Print prediction st.sidebar.header("ViT model results:") if pred_label > 0.5: st.sidebar.info('Human is detected') else: st.sidebar.info('No human is detected') ``` ### when I run this I get the ValueError ValueError: Unknown layer: Custom>TFViTMainLayer. Please ensure this object is passed to the `custom_objects` argument. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details. ``` #### my model in Tensorflow # Base model pre-trained on ImageNet-21k with the 224x224 image resolution base_model = TFViTModel.from_pretrained('google/vit-base-patch16-224-in21k') # Freeze base model base_model.trainable = False # Create new model inputs = keras.Input(shape = (3, 224, 224)) x = data_augmentation_vit(inputs) vit = base_model.vit(inputs)[0] vit = keras.layers.GlobalAveragePooling1D()(vit) vit = tf.keras.layers.Dense(256, activation='relu')(vit) vit = tf.keras.layers.Dropout(0.15)(vit) outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(vit) model_vit = tf.keras.Model(inputs, outputs) print(model_vit.summary()) Model: "model_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 3, 224, 224)] 0 vit (TFViTMainLayer) TFBaseModelOutputWithPoo 86389248 ling(last_hidden_state=( None, 197, 768), pooler_output=(None, 76 8), hidden_states=None, att entions=None) global_average_pooling1d (G (None, 768) 0 lobalAveragePooling1D) dense_2 (Dense) (None, 256) 196864 dropout_37 (Dropout) (None, 256) 0 outputs (Dense) (None, 1) 257 ================================================================= model_vit.save("Saved_models/model_vit.h5") ``` ### Expected behavior I have a model, model_vit.h5, trained and saved in Tensorflow based on google's vit-base-patch16-224-in21k model. I expect it to make a prediction in my app like other models. Yet I am not sure how to register custom object for the TFViTMainLayer in this model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18830/timeline
completed
null
null
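A hedged sketch of the fix echoed in the last comment of the issue above: register the transformers-defined layer class before deserializing. Depending on the Keras version, the registry key may or may not need the `Custom>` prefix from the error message, so both spellings are registered here; the `.h5` path comes from the report:

```python
# Sketch: make Keras aware of the custom layer before load_model.
import tensorflow as tf
from transformers.models.vit.modeling_tf_vit import TFViTMainLayer

custom_objects = {
    "TFViTMainLayer": TFViTMainLayer,
    "Custom>TFViTMainLayer": TFViTMainLayer,  # key as shown in the error
}

with tf.keras.utils.custom_object_scope(custom_objects):
    model_vit = tf.keras.models.load_model(
        "Saved_models/model_vit.h5"  # path from the report above
    )
```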
https://api.github.com/repos/huggingface/transformers/issues/18829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18829/comments
https://api.github.com/repos/huggingface/transformers/issues/18829/events
https://github.com/huggingface/transformers/pull/18829
1,356,982,515
PR_kwDOCUB6oc4-HOYd
18,829
follow layoutlmv3 to avoid device error
{ "login": "allanj", "id": 3351187, "node_id": "MDQ6VXNlcjMzNTExODc=", "avatar_url": "https://avatars.githubusercontent.com/u/3351187?v=4", "gravatar_id": "", "url": "https://api.github.com/users/allanj", "html_url": "https://github.com/allanj", "followers_url": "https://api.github.com/users/allanj/followers", "following_url": "https://api.github.com/users/allanj/following{/other_user}", "gists_url": "https://api.github.com/users/allanj/gists{/gist_id}", "starred_url": "https://api.github.com/users/allanj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/allanj/subscriptions", "organizations_url": "https://api.github.com/users/allanj/orgs", "repos_url": "https://api.github.com/users/allanj/repos", "events_url": "https://api.github.com/users/allanj/events{/privacy}", "received_events_url": "https://api.github.com/users/allanj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18829). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? Slightly modify the way we calculate the height and width embeddings for LayoutLMv2. The calculation is simply the same as in LayoutLMv3, to avoid the device-assert error when we run experiments on GPU. https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlmv3/modeling_layoutlmv3.py#L270-L271 Fixes # (issue) ## Who can review? Models: - LayoutLMv2: @patrickvonplaten @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18829/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18829", "html_url": "https://github.com/huggingface/transformers/pull/18829", "diff_url": "https://github.com/huggingface/transformers/pull/18829.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18829.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18828/comments
https://api.github.com/repos/huggingface/transformers/issues/18828/events
https://github.com/huggingface/transformers/issues/18828
1,356,704,120
I_kwDOCUB6oc5Q3al4
18,828
Add a vit-based ocr model to hugging face
{ "login": "wdp-007", "id": 4025053, "node_id": "MDQ6VXNlcjQwMjUwNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4025053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wdp-007", "html_url": "https://github.com/wdp-007", "followers_url": "https://api.github.com/users/wdp-007/followers", "following_url": "https://api.github.com/users/wdp-007/following{/other_user}", "gists_url": "https://api.github.com/users/wdp-007/gists{/gist_id}", "starred_url": "https://api.github.com/users/wdp-007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wdp-007/subscriptions", "organizations_url": "https://api.github.com/users/wdp-007/orgs", "repos_url": "https://api.github.com/users/wdp-007/repos", "events_url": "https://api.github.com/users/wdp-007/events{/privacy}", "received_events_url": "https://api.github.com/users/wdp-007/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Sure, do you have an email address? We can set up a slack channel for easier communication ", "Yes, my email address is wdp0072012@gmail.com.\r\nDo I need to register for slack in advance?", "You should have received an invite by email :)" ]
1,661
1,678
1,678
CONTRIBUTOR
null
### Model description We want to add the MGPSTR model (ECCV 2022) to Hugging Face. MGPSTR is a ViT (Vision Transformer)-based pure vision model for STR, which shows its superiority in recognition accuracy. It has a Multi-Granularity Prediction (MGP) strategy to inject information from the language modality. The MGPSTR algorithm achieves state-of-the-art performance. We followed the guidance of https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model, but encountered some problems, such as being unable to find a suitable huggingface-hub version when installing the environment. ``` ERROR: Could not find a version that satisfies the requirement huggingface-hub<1.0,>=0.8.1 (from transformers[dev]) (from versions: 0.0.1, 0.0.2, 0.0.3rc1, 0.0.3rc2, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.1.0, 0.1.1, 0.1.2, 0.2.0, 0.2.1, 0.4.0) ERROR: No matching distribution found for huggingface-hub<1.0,>=0.8.1 ``` Can I get some help or guidance? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The paper will be published soon.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18828/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18827/comments
https://api.github.com/repos/huggingface/transformers/issues/18827/events
https://github.com/huggingface/transformers/issues/18827
1,356,616,300
I_kwDOCUB6oc5Q3FJs
18,827
[HF Trainer] [new optimizer] add `AnyPrecisionAdamW` (bf16)
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
null
[]
[ "Hello, I'll like to be assigned to this issue. ", "Yes, please, @Zeesky-code - once you have a working PR please tag me there.\r\n\r\nThank you!", "No longer working on this issue and it's now open to the community.\r\nThank you :)\r\n\r\n> Hello, I'll like to be assigned to this issue.\r\n\r\n", "Hi, may I take it if it's available?", "Yes, please. But please read the OP first and see if you understand what needs to be done. Thank you!", "@stas00, where do I import `AnyPrecisionAdamW` from? I tried installing `pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu`", "Oh! my bad! it's not pt-nightly but `torchdistx`\r\n\r\n```\r\n$ git clone https://github.com/pytorch/torchdistx\r\n$ cd torchdistx/\r\n$ grep -Ir AnyPrecisionAdamW\r\nsrc/optimizers/anyprecision_optimizer.py:# AnyPrecisionAdamW: a flexible precision AdamW optimizer\r\nsrc/optimizers/anyprecision_optimizer.py:class AnyPrecisionAdamW(Optimizer):\r\nsrc/optimizers/anyprecision_optimizer.py: \"AnyPrecisionAdamW does not support sparse gradients\"\r\n```\r\n\r\nWe can probably try something like this:\r\n```\r\ntry:\r\n from optimizers.anyprecision_optimizer import AnyPrecisionAdamW\r\nexcept:\r\n raise ValueError(\"please install https://github.com/pytorch/torchdistx\")\r\n```", "also please note that the import is about to move and once this PR is merged https://github.com/pytorch/torchdistx/pull/60 please update your clone. Thank you!\r\n\r\nso it'll become:\r\n\r\n```\r\nfrom torchdistx.optimizers.anyprecision_optimizer import AnyPrecisionAdamW\r\n```", "Hi @stas00 and @atturaioe - \r\nJust wanted to drop in here to say thanks for integrating AnyPrecision! \r\nAlso wanted to let you know I'm adding a bfloat16 check internal to the anyPrecisionAdamW optimizer as a safety mechanism, and working on the documentation as well. \r\nPlease let me know if you hit specific integration issues or questions, but for now the very short documentation preview is that there are two primary use cases for AnyPrecision currently:\r\na - successfully training *entirely* in pure BF16 - you can do this b/c kahan summation ensures high precision updates to the weights. Without that you will hit 'weight stagnation' where BF16 can't keep up like FP32 over the training cycle. This gives you faster training with lower memory requirements and generally meets or even exceeds full FP32 results (some regularization effect). \r\nHas been tested up to 11B param size models. \r\n\r\nBasic process:\r\n~~~\r\n# init model\r\nmy_model = build_model(config)\r\n\r\n# move model to all BF16\r\nmy_model.to(torch.bfloat16)\r\n\r\n# run AnyPrecision in pure BF16 with Kahan - pass in usual adamW options in ... below:\r\noptimizer = AnyPrecisionAdamW(my_model.parameters(),..., \r\n momentum_dtype=torch.bfloat16, variance_dtype=torch.bfloat16, \r\n use_kahan_summation=True) \r\n~~~\r\n\r\nb - Training with AdamW variance state in BF16 - results in both memory savings and training speed up, and in limited testing up to 1B, matches mixed precision results after second epoch. (variance state (the variance of the variance) typically rapidly declines after second epoch or so, which was the intuition that fp32 precision probably is not needed after that). \r\n~~~\r\noptimizer = AnyPrecisionAdamW(my_model.parameters(),..., momentum_dtype=torch.float32, variance_dtype=torch.bfloat16, use_kahan_summation=False)\r\n~~~\r\n\r\nHope that the above is helpful for now. Will have more detailed docs etc. 
this week but please let me know if any questions in the interim and thanks again for the integration work here!\r\n\r\n", "That's very helpful, Less - thank you for sharing these use cases and the details!\r\n\r\nI will leave to @atturaioe the stage to ask questions as he has been performing the heavy lifting on this task.", "I should add - setting momentum_dtype and variance_dtype to torch.float32 and use_kahan_summation=False, brings AnyPrecision to the traditional AdamW optimizer so you can quickly compare using BF16, pure or variance only, for your training. ", "> setting momentum_dtype and variance_dtype to torch.float32 and use_kahan_summation=False, brings AnyPrecision to the traditional AdamW optimizer so you can quickly compare using BF16, pure or variance only, for your training.\r\n\r\nawesome, that would make a good quality test then.\r\n\r\nLet's continue the discussion in the PR https://github.com/huggingface/transformers/pull/18961 so it's more \"actionable\" :)", "@stas00 Hi! I'd like to pick up this issue if no one else is working on it at the moment.", "Oh, thank you for bringing this up, @mollerup23 - and wanting to contribute! \r\n\r\nThis has already been resolved in https://github.com/huggingface/transformers/pull/18961\r\n\r\nWe just forgot to close this issue." ]
1,661
1,676
1,676
CONTRIBUTOR
null
### Feature request pytorch just merged https://github.com/pytorch/torchdistx/pull/52, which adds `AnyPrecisionAdamW` (bf16 support, and future new dtypes); we should add it to our HF Trainer arsenal. This is open to the community - it shouldn't be too difficult to add by just checking the existing optimizers. Here are some pointers to start unraveling: https://github.com/huggingface/transformers/blob/e88e9ff045347c9d92d85806a6987dc7ebcbdd5b/src/transformers/training_args.py#L393-L394 and https://github.com/huggingface/transformers/blob/e88e9ff045347c9d92d85806a6987dc7ebcbdd5b/src/transformers/training_args.py#L94-L106 The key of course is the documentation and tests; checking the existing tests and working from there is what's needed. One would start by looking at mimicking the integration of other optimizers, so in this case it'd follow the path of `adamw_torch`, as it's the nearest similar optimizer. It might help to look at the previous PRs that added new optimizers, e.g. find the PR that added `adamw_bnb_8bit` - that could be a good model to copy from, and you can see the scope of work that needs to be done. Except this one should be simpler than `adamw_bnb_8bit`, as it just plugs in a core pytorch optimizer; that's why I said `adamw_torch` is another good model. Please remember that this requires pytorch-nightly, as this new feature hasn't made it yet into pytorch-1.13. So you will need to install it from https://pytorch.org/get-started/locally/ (choose Preview (Nightly)). Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18827/timeline
completed
null
null
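For the record, the integration discussed above landed in #18961; a hedged sketch of how it is typically selected through the Trainer follows. The `optim`/`optim_args` spellings mirror that PR's interface and may vary across versions:

```python
# Sketch: pick the torchdistx AnyPrecisionAdamW via the HF Trainer
# (requires torchdistx to be installed, as the thread above explains).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="adamw_anyprecision",
    # Parsed into AnyPrecisionAdamW kwargs; values follow the pure-BF16
    # recipe from the comments above.
    optim_args=(
        "use_kahan_summation=True,"
        "momentum_dtype=bfloat16,"
        "variance_dtype=bfloat16"
    ),
)
```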
https://api.github.com/repos/huggingface/transformers/issues/18826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18826/comments
https://api.github.com/repos/huggingface/transformers/issues/18826/events
https://github.com/huggingface/transformers/issues/18826
1,356,435,348
I_kwDOCUB6oc5Q2Y-U
18,826
Examples do not seem to work on any spaces right now (possible downtime?)
{ "login": "ankrgyl", "id": 565363, "node_id": "MDQ6VXNlcjU2NTM2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankrgyl", "html_url": "https://github.com/ankrgyl", "followers_url": "https://api.github.com/users/ankrgyl/followers", "following_url": "https://api.github.com/users/ankrgyl/following{/other_user}", "gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions", "organizations_url": "https://api.github.com/users/ankrgyl/orgs", "repos_url": "https://api.github.com/users/ankrgyl/repos", "events_url": "https://api.github.com/users/ankrgyl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankrgyl/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @ankrgyl! I tried using them a few seconds ago and it seems to work; is the issue still happening from your side? It may have been a transient error.", "It wasn't working until around 11:15 PM PST, at which point the website seem to have reset (see screenshot below) and ~10 minutes later, it started working.\r\n\r\n![image](https://user-images.githubusercontent.com/565363/187689585-8ba4f00a-6606-4d04-9b7a-66f300bee25b.png)\r\n\r\nDuring this period, I also noticed that deploying a gradio space with `enable_queue=False` would not work -- it seemed like something was broken with the `/predict` handler (which gets called in that case). I did some extensive testing while working with the Gradio team on https://github.com/gradio-app/gradio/issues/2132.", "Understood, thank you! Should this be closed as it seems the error has been resolved?", "Yes it can definitely be closed! I mostly opened it in case it was helpful as an alert while things were behaving weirdly.", "Sounds good! Let me close it and feel free to reopen if you ever run into something similar." ]
1,661
1,662
1,662
CONTRIBUTOR
null
### System Info This is observed online on spaces: E.g. https://huggingface.co/spaces/nielsr/donut-docvqa, if you click any of the examples, you see ![Screen Shot 2022-08-30 at 4 06 41 PM](https://user-images.githubusercontent.com/565363/187558797-d0858c72-a580-4650-8f2d-b451f294da3d.png) Similarly, https://huggingface.co/spaces/impira/docquery produces console errors like: ``` POST https://hf.space/embed/impira/docquery/api/predict/ 500 post_data @ index.09173af6.js:7790 (anonymous) @ index.09173af6.js:7872 (anonymous) @ index.09173af6.js:6566 (anonymous) @ index.09173af6.js:506 (anonymous) @ index.09173af6.js:505 click_handler_1 @ index.d284cf1a.js:1881 click_handler_1 @ index.d284cf1a.js:1346 ``` I also tried https://huggingface.co/spaces/Epoching/DocumentQA. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Visit any space with examples, and try clicking on them. ### Expected behavior The examples should populate.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18826/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18825/comments
https://api.github.com/repos/huggingface/transformers/issues/18825/events
https://github.com/huggingface/transformers/issues/18825
1,356,379,618
I_kwDOCUB6oc5Q2LXi
18,825
Wav2Vec2ProcessorWithLM in pipeline issue
{ "login": "anderleich", "id": 29381188, "node_id": "MDQ6VXNlcjI5MzgxMTg4", "avatar_url": "https://avatars.githubusercontent.com/u/29381188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anderleich", "html_url": "https://github.com/anderleich", "followers_url": "https://api.github.com/users/anderleich/followers", "following_url": "https://api.github.com/users/anderleich/following{/other_user}", "gists_url": "https://api.github.com/users/anderleich/gists{/gist_id}", "starred_url": "https://api.github.com/users/anderleich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anderleich/subscriptions", "organizations_url": "https://api.github.com/users/anderleich/orgs", "repos_url": "https://api.github.com/users/anderleich/repos", "events_url": "https://api.github.com/users/anderleich/events{/privacy}", "received_events_url": "https://api.github.com/users/anderleich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @anderleich,\r\n\r\nI am encountering the same issue when I try to use the AutomaticSpeechRecognitionPipeline in combination with a Languague Model. Have you been able to find a solution? I've tracked the issue down to the same lines of code as you, I cannot get self.type to evaluate to \"ctc_with_lm\" with the models I am using, even though they work fine when I use them outside of the pipeline. \r\n\r\nBest wishes,\r\nJudith", "Hi @judithvdw ,\r\n\r\nNot yet. I decided to use the LM outside the pipeline for the moment", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @patrickvonplaten ,\r\n\r\nAny suggestions on this?", "cc @sanchit-gandhi ", "Hey @anderleich,\r\n\r\nAs a temporary fix, could you set the feature extractor's `_processor_class` attribute manually?\r\n\r\n```python\r\nmodel = Wav2Vec2ForCTC.from_pretrained(\"./results/checkpoint-11600\").to(\"cuda\")\r\ntokenizer = Wav2Vec2CTCTokenizer.from_pretrained(\"./\", unk_token=\"[UNK]\", pad_token=\"[PAD]\", word_delimiter_token=\"|\")\r\nfeature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)\r\nprocessor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)\r\n\r\nvocab_dict = processor.tokenizer.get_vocab()\r\nsorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}\r\n\r\nfrom pyctcdecode import build_ctcdecoder\r\ndecoder = build_ctcdecoder(\r\n\tlabels=list(sorted_vocab_dict.keys()),\r\n\tkenlm_model_path=\"lm.small_3gram_correct.arpa\",\r\n)\r\n\r\n# set class manually\r\nfeature_extractor._set_processor_class(\"Wav2Vec2ProcessorWithLM\")\r\n\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM(\r\n\tfeature_extractor= feature_extractor,\r\n\ttokenizer=tokenizer,\r\n\tdecoder=decoder,\r\n)\r\n\r\npipe = AutomaticSpeechRecognitionPipeline(\r\n\tmodel=model,\r\n\ttokenizer=processor_with_lm.tokenizer,\r\n\tfeature_extractor=processor_with_lm.feature_extractor,\r\n\tdecoder=processor_with_lm.decoder,\r\n\tdevice=0,\r\n)\r\n```\r\nI'll take a deeper look into why the class is defaulting to None", "Did you happen get a chance to look further into this? Not to push you, but just to make sure the bot doesn't close the issue again for a lack of activity.", "I haven't had the chance sadly - keeping the bot from closing the issue! Maybe if you have the chance to look into this @hollance?", "Sure, I'll have a look.\r\n", "I can't reproduce this. 
I used the following code:\r\n\r\n```python\r\nfrom transformers import (\r\n    AutomaticSpeechRecognitionPipeline,\r\n    Wav2Vec2ForCTC, \r\n    Wav2Vec2CTCTokenizer, \r\n    Wav2Vec2FeatureExtractor, \r\n    Wav2Vec2Processor, \r\n    Wav2Vec2ProcessorWithLM\r\n)\r\n\r\nmodel = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-100h\")\r\ntokenizer = Wav2Vec2CTCTokenizer.from_pretrained(\"facebook/wav2vec2-base-100h\")\r\nfeature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(\"facebook/wav2vec2-base-100h\")\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-100h\")\r\n\r\n# without LM\r\npipe = AutomaticSpeechRecognitionPipeline(\r\n    model=model,\r\n    tokenizer=processor.tokenizer,\r\n    feature_extractor=processor.feature_extractor\r\n)\r\n\r\nprint(pipe.type) # \"ctc\"\r\nprint(processor.feature_extractor._processor_class) # None\r\n\r\npipe(\"https://huggingface.co/spaces/Matthijs/speecht5-asr-demo/resolve/main/examples/hmm_i_dont_know.wav\")\r\n\r\n# {'text': \"I DON'T KNOW I THINK MAY BE ITS EASY TO GET A NEW ONE WE CAN GO TO THE STORL LATER TO SEE IF THEY HAVE ANY IN STOCK\"}\r\n\r\n# note that STORL is spelled wrong\r\n\r\n\r\n# with LM\r\nprocessor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained(\"patrickvonplaten/wav2vec2-base-100h-with-lm\")\r\n\r\npipe_with_lm = AutomaticSpeechRecognitionPipeline(\r\n    model=model,\r\n    tokenizer=processor_with_lm.tokenizer,\r\n    feature_extractor=processor_with_lm.feature_extractor,\r\n    decoder=processor_with_lm.decoder\r\n)\r\n\r\nprint(pipe_with_lm.type) # \"ctc_with_lm\"\r\nprint(processor_with_lm.feature_extractor._processor_class) # Wav2Vec2ProcessorWithLM\r\n\r\npipe_with_lm(\"https://huggingface.co/spaces/Matthijs/speecht5-asr-demo/resolve/main/examples/hmm_i_dont_know.wav\")\r\n\r\n# {'text': \"I DON'T KNOW I THINK MAY BE ITS EASY TO GET A NEW ONE WE CAN GO TO THE STORE LATER TO SEE IF THEY HAVE ANY IN STOCK\"}\r\n\r\n# and now STORE is spelled correctly\r\n```\r\n\r\nI verified that the decoder was indeed called on the `pipe_with_lm` pipeline.\r\n\r\nThere might still be an issue with your own models but I can't tell that without having access to those models.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Going to leave this one as is since the issue is not reproducible using a public checkpoint and we don't have access to a local model that demonstrates this behaviour, so we're unable to pinpoint where the bug potentially lies in transformers\r\n\r\nThe thread did result in two workarounds for this issue that you can try:\r\n* Start from a pre-trained checkpoint like [facebook/wav2vec2-base-100h](hf.co/facebook/wav2vec2-base-100h) that works as expected\r\n* Use the 'hack' described in https://github.com/huggingface/transformers/issues/18825#issuecomment-1410416281", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,690
1,690
NONE
null
Opening a new issue for better tracking purposes. This issue follows: https://github.com/huggingface/transformers/issues/16759 > Hey @gxbag, > > Please make sure to provide a reproducible code snippet. I cannot run the above snippet because I don't have access to `"language_model/vocabulary.txt"`. > > Regarding the issue, you should not pass a processor object as the model object. The model object should only be used for models of type `PreTrainedModel`. To pass the model with the processor you could do the following: > > ```python > from transformers import AutoProcessor > processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") > vocab_dict = processor.tokenizer.get_vocab() > > from pyctcdecode import build_ctcdecoder > unigrams_file = open("language_model/vocabulary.txt", "r") > unigrams_list = unigrams_file.readlines() > decoder = build_ctcdecoder( > labels=list(vocab_dict.keys()), > kenlm_model_path="language_model/5gram.bin", > unigrams=unigrams_list > ) > > from transformers import Wav2Vec2ProcessorWithLM > processor_with_lm = Wav2Vec2ProcessorWithLM( > feature_extractor=processor.feature_extractor, > tokenizer=processor.tokenizer, > decoder=decoder > ) > > from transformers import pipeline > pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-960h-lv60-self", tokenizer=processor_with_lm, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0) > ``` > > This should correctly initialize the pipeline. Hi @patrickvonplaten , I've just tried your solution. However, it does not use the LM for decoding. `self.type` is always `"ctc"` as `feature_extractor._processor_class` is always `None`. See here: https://github.com/huggingface/transformers/blob/b487096b02307cd6e0f132b676cdcc7255fe8e74/src/transformers/pipelines/automatic_speech_recognition.py#L127 And this is my code: ``` python model = Wav2Vec2ForCTC.from_pretrained("./results/checkpoint-11600").to("cuda") tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|") feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True) processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) vocab_dict = processor.tokenizer.get_vocab() sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])} from pyctcdecode import build_ctcdecoder decoder = build_ctcdecoder( labels=list(sorted_vocab_dict.keys()), kenlm_model_path="lm.small_3gram_correct.arpa", ) processor_with_lm = Wav2Vec2ProcessorWithLM( feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer, decoder=decoder ) pipe = AutomaticSpeechRecognitionPipeline( model=model, tokenizer=processor_with_lm.tokenizer, feature_extractor=processor_with_lm.feature_extractor, decoder=processor_with_lm.decoder, device=0) ``` Any clues?
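A quick sanity check for this report (a minimal sketch; `pipe` and `processor_with_lm` are the objects from the snippet above, and `pipe.type` is the attribute set by the linked source line):

```python
# If the pipeline silently fell back to plain CTC, the LM decoder is never used.
print(pipe.type)  # expected "ctc_with_lm"; "ctc" reproduces the bug described above
print(processor_with_lm.feature_extractor._processor_class)  # expected "Wav2Vec2ProcessorWithLM", reported as None here
```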
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18825/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18824/comments
https://api.github.com/repos/huggingface/transformers/issues/18824/events
https://github.com/huggingface/transformers/issues/18824
1,356,161,125
I_kwDOCUB6oc5Q1WBl
18,824
model with longformer encoder cannot be saved due to OperatorNotAllowedInGraphError
{ "login": "rdisipio", "id": 7974270, "node_id": "MDQ6VXNlcjc5NzQyNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7974270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rdisipio", "html_url": "https://github.com/rdisipio", "followers_url": "https://api.github.com/users/rdisipio/followers", "following_url": "https://api.github.com/users/rdisipio/following{/other_user}", "gists_url": "https://api.github.com/users/rdisipio/gists{/gist_id}", "starred_url": "https://api.github.com/users/rdisipio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rdisipio/subscriptions", "organizations_url": "https://api.github.com/users/rdisipio/orgs", "repos_url": "https://api.github.com/users/rdisipio/repos", "events_url": "https://api.github.com/users/rdisipio/events{/privacy}", "received_events_url": "https://api.github.com/users/rdisipio/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@rdisipio I tried to reproduce this error, But I was able to save model. I followed [this](https://huggingface.co/docs/transformers/model_doc/longformer#transformers.TFLongformerForTokenClassification.call.example) example to build a simple model. Can you add some steps to reproduce this bug ? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
NONE
null
### System Info - `transformers` version: 4.21.2 - Platform: Linux-5.10.102-99.473.amzn2.x86_64-x86_64-with-glibc2.10 (AWS SageMaker) - Python version: 3.8.12 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I created a NER model which makes use of the longformer encoder. I can train it successfully. However, if I try to save the model like this: tf.keras.models.save_model(self.model, model_path) I get an `OperatorNotAllowedInGraphError` error. The full message is below. As far as I can tell, the function `_pad_to_window_size` calculates `padding_len`, which in my execution happens to be a tf.Tensor object. The statement `if padding_len > 0` fails to evaluate, since a symbolic tensor cannot be used as a Python bool in graph mode. This smells like a bug. Besides, the only purpose of this check seems to be to print out a message in the log, so perhaps it's not strictly necessary. Cheers, Riccardo ``` --------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) /tmp/ipykernel_29348/3622010497.py in <cell line: 2>() 1 output_path = "trained_models/jd_parser_baseline_all-longformer" ----> 2 t.save_model(output_path) ~/SageMaker/jd-parser/jd_parser/trainer.py in save_model(self, model_path) 381 if model_path is None: 382 model_path = f"trained_models/{self.model.name}" --> 383 tf.keras.models.save_model(self.model, model_path) 384 385 def upload_to_s3(self, model_local_dir=None): ~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb ~/anaconda3/envs/tensorflow2_p38/lib/python3.8/contextlib.py in __exit__(self, type, value, traceback) 118 if type is None: 119 try: --> 120 next(self.gen) 121 except StopIteration: 122 return False ~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs) 411 412 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs) --> 413 return func(self, **unpacked_inputs) 414 415 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. 
This ~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py in call(self, input_ids, attention_mask, head_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict, training) 1728 position_ids, 1729 inputs_embeds, -> 1730 ) = self._pad_to_window_size( 1731 input_ids=input_ids, 1732 attention_mask=attention_mask, ~/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/transformers/models/longformer/modeling_tf_longformer.py in _pad_to_window_size(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, pad_token_id) 1814 padding_len = (attention_window - seq_len % attention_window) % attention_window 1815 -> 1816 if padding_len > 0: 1817 logger.info( 1818 f"Input ids are automatically padded from {seq_len} to {seq_len + padding_len} to be a multiple of " OperatorNotAllowedInGraphError: Exception encountered when calling layer "longformer" (type TFLongformerMainLayer). Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Call arguments received by layer "longformer" (type TFLongformerMainLayer): • args=({'input_ids': 'tf.Tensor(shape=(None, None), dtype=int32)', 'attention_mask': 'tf.Tensor(shape=(None, None), dtype=int32)'},) • kwargs={'training': 'False'} ``` ### Expected behavior The model should be saved to disk correctly as it happens with other encoder models.
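The failure mode in the traceback can be reproduced outside Longformer. A minimal sketch (assuming TF 2.x; `autograph=False` forces the same "symbolic tensor as a Python bool" situation the save path runs into, and the second function shows one possible graph-safe rewrite, not necessarily the fix adopted upstream):

```python
import tensorflow as tf

attention_window = 512

@tf.function(autograph=False)
def pad_len_broken(seq_len):
    padding_len = (attention_window - seq_len % attention_window) % attention_window
    # Evaluating a symbolic tensor as a Python bool raises
    # OperatorNotAllowedInGraphError while the graph is being traced.
    if padding_len > 0:
        tf.print("input will be padded")
    return padding_len

@tf.function(autograph=False)
def pad_len_safe(seq_len):
    padding_len = (attention_window - seq_len % attention_window) % attention_window
    # No Python-level branch on a tensor: log unconditionally (or drop the log).
    tf.print("padding_len =", padding_len)
    return padding_len

# pad_len_broken(tf.constant(100))  # raises OperatorNotAllowedInGraphError
pad_len_safe(tf.constant(100))      # fine: prints "padding_len = 412"
```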
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18824/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18823/comments
https://api.github.com/repos/huggingface/transformers/issues/18823/events
https://github.com/huggingface/transformers/issues/18823
1,356,105,373
I_kwDOCUB6oc5Q1Iad
18,823
Memory is not released when moving model to CUDA
{ "login": "piEsposito", "id": 47679710, "node_id": "MDQ6VXNlcjQ3Njc5NzEw", "avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piEsposito", "html_url": "https://github.com/piEsposito", "followers_url": "https://api.github.com/users/piEsposito/followers", "following_url": "https://api.github.com/users/piEsposito/following{/other_user}", "gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}", "starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions", "organizations_url": "https://api.github.com/users/piEsposito/orgs", "repos_url": "https://api.github.com/users/piEsposito/repos", "events_url": "https://api.github.com/users/piEsposito/events{/privacy}", "received_events_url": "https://api.github.com/users/piEsposito/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Pinging the king of memory, @ydshieh :raised_hands: ", "Hi @piEsposito \r\n\r\nAs I have seen quite a few times `torch` has some of its own memory management, and related memory issues, it would be great if you can provide and example that creates a (big enough) PyTorch models (not from `transformers`) on CPU, send it to CUDA, and see if you get the memory been released.\r\n\r\nThe goal is to make sure this issue is not coming from `PyTorch`. Would you like to work on an example, please? Thanks in advance.", "@ydshieh just added it to the notebook. It seems like this issue comes from PyTorch. Thanks! " ]
1,661
1,661
1,661
CONTRIBUTOR
null
### System Info - `transformers` version: 4.21.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: GPU - Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce behavior: 1. Run Colab: https://colab.research.google.com/drive/1NWJPqwe7MOJIWd4w5LGYGaflkWXTkHGB?usp=sharing 2. Check results: ``` model device = cuda:0 Filename: memory_leak.py Line # Mem usage Increment Occurrences Line Contents ============================================================= 7 239.4 MiB 239.4 MiB 1 @profile 8 def main(): 9 271.6 MiB 32.2 MiB 1 processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16") 10 1405.8 MiB 1134.2 MiB 1 model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16") 11 12 1405.8 MiB 0.0 MiB 1 device = torch.device("cuda") 13 2644.0 MiB 1238.2 MiB 1 model = model.to(device) 14 15 2644.5 MiB 0.5 MiB 1 print(f"model device = {model.device}") 16 2644.5 MiB 0.0 MiB 1 gc.collect() ``` ### Expected behavior RAM should be released when a model is moved to GPU. This bug can be reproduced for lots of different models within the lib.
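The comment thread asks for a `transformers`-free reproduction. A minimal sketch along those lines (assumes `torch`, `psutil`, and a CUDA device are available; absolute numbers will vary by machine, and the model here is illustrative):

```python
import gc
import os

import psutil
import torch


def rss_mib() -> float:
    """Resident set size of the current process in MiB."""
    return psutil.Process(os.getpid()).memory_info().rss / 2**20


# A reasonably large plain PyTorch model (~1.3 GiB of fp32 weights).
model = torch.nn.Sequential(*(torch.nn.Linear(4096, 4096) for _ in range(20)))
print(f"after init on CPU: {rss_mib():.0f} MiB")

model = model.to("cuda")
torch.cuda.synchronize()
gc.collect()
# RSS often stays high even though the weights moved: the CUDA context itself
# consumes host memory and the allocator may keep staging buffers around, so a
# high reading here does not necessarily indicate a transformers-level leak.
print(f"after .to('cuda'): {rss_mib():.0f} MiB")
```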
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18823/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18822/comments
https://api.github.com/repos/huggingface/transformers/issues/18822/events
https://github.com/huggingface/transformers/pull/18822
1,356,035,026
PR_kwDOCUB6oc4-D9PU
18,822
add a script to get time info. from GA workflow jobs
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes, I will move it to `utils`." ]
1,661
1,662
1,662
COLLABORATOR
null
# What does this PR do? As we might need to get the running time for workflow jobs again in the future, here is a simple script. It's probably better to move it to the `utils` directory. I put it under `.github/scripts/` to emphasize this is really for GitHub Actions only. The output looks like ```bash (py39) λ python get_github_job_time.py --workflow_run_id 2945609517 Model tests (onnx, multi-gpu): 337 Model tests (onnx, single-gpu): 334 Torch CUDA extension tests (multi-gpu): 44 Torch CUDA extension tests (single-gpu): 43 TensorFlow pipelines (multi-gpu): 20 TensorFlow pipelines (single-gpu): 19 ... ``` P.S. I will add another simple script to get test failures and their counts in another PR.
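For context, per-job timing is available from the GitHub REST API endpoint `GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs`, so a script like the one described can be a thin wrapper around it. A rough sketch, not the PR's actual implementation (`requests` and an optional token for higher rate limits are assumed):

```python
from datetime import datetime

import requests


def job_durations(workflow_run_id, token=None):
    """Return {job name: wall time in minutes} for one GitHub Actions run."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    url = (
        "https://api.github.com/repos/huggingface/transformers"
        f"/actions/runs/{workflow_run_id}/jobs?per_page=100"
    )
    durations = {}
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        for job in resp.json()["jobs"]:
            if not job.get("completed_at"):
                continue  # skip jobs that are still running
            start = datetime.fromisoformat(job["started_at"].rstrip("Z"))
            end = datetime.fromisoformat(job["completed_at"].rstrip("Z"))
            durations[job["name"]] = round((end - start).total_seconds() / 60)
        url = resp.links.get("next", {}).get("url")  # follow pagination
    return dict(sorted(durations.items(), key=lambda kv: kv[1], reverse=True))


for name, minutes in job_durations(2945609517).items():
    print(f"{name}: {minutes}")
```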
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18822/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18822", "html_url": "https://github.com/huggingface/transformers/pull/18822", "diff_url": "https://github.com/huggingface/transformers/pull/18822.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18822.patch", "merged_at": 1662026572000 }
https://api.github.com/repos/huggingface/transformers/issues/18821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18821/comments
https://api.github.com/repos/huggingface/transformers/issues/18821/events
https://github.com/huggingface/transformers/pull/18821
1,356,027,970
PR_kwDOCUB6oc4-D7st
18,821
Add Image To Text Generation pipeline
{ "login": "OlivierDehaene", "id": 23298448, "node_id": "MDQ6VXNlcjIzMjk4NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/23298448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OlivierDehaene", "html_url": "https://github.com/OlivierDehaene", "followers_url": "https://api.github.com/users/OlivierDehaene/followers", "following_url": "https://api.github.com/users/OlivierDehaene/following{/other_user}", "gists_url": "https://api.github.com/users/OlivierDehaene/gists{/gist_id}", "starred_url": "https://api.github.com/users/OlivierDehaene/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OlivierDehaene/subscriptions", "organizations_url": "https://api.github.com/users/OlivierDehaene/orgs", "repos_url": "https://api.github.com/users/OlivierDehaene/repos", "events_url": "https://api.github.com/users/OlivierDehaene/events{/privacy}", "received_events_url": "https://api.github.com/users/OlivierDehaene/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @mishig25, the inference widgets for models like TrOCR, Donut, [image captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) can now be created! 🥳 " ]
1,661
1,662
1,662
MEMBER
null
# What does this PR do? Add Image To Text Generation pipeline. The pipeline currently defaults to [nlpconnect/vit-gpt2-image-captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? This feature was asked for by @Narsil.
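For reference, usage of the new pipeline looks roughly like this (a sketch assuming the task string is `"image-to-text"` and the default checkpoint named above; the sample image and caption are illustrative):

```python
from transformers import pipeline

# Defaults to nlpconnect/vit-gpt2-image-captioning per the PR description.
captioner = pipeline("image-to-text")
result = captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
print(result)  # e.g. [{'generated_text': 'two birds are standing next to each other'}]
```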
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18821/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18821/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18821", "html_url": "https://github.com/huggingface/transformers/pull/18821", "diff_url": "https://github.com/huggingface/transformers/pull/18821.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18821.patch", "merged_at": 1662048434000 }
https://api.github.com/repos/huggingface/transformers/issues/18820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18820/comments
https://api.github.com/repos/huggingface/transformers/issues/18820/events
https://github.com/huggingface/transformers/pull/18820
1,355,970,324
PR_kwDOCUB6oc4-Dvd_
18,820
Disable nightly CI temporarily
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
COLLABORATOR
null
# What does this PR do? Disable nightly CI temporarily until the test suite can be run in under 12 hours.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18820", "html_url": "https://github.com/huggingface/transformers/pull/18820", "diff_url": "https://github.com/huggingface/transformers/pull/18820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18820.patch", "merged_at": 1661877189000 }
https://api.github.com/repos/huggingface/transformers/issues/18819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18819/comments
https://api.github.com/repos/huggingface/transformers/issues/18819/events
https://github.com/huggingface/transformers/issues/18819
1,355,946,097
I_kwDOCUB6oc5Q0hhx
18,819
ONNX test suite is slow - run in 5.5 hours
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" }, { "id": 2604155188, "node_id": "MDU6TGFiZWwyNjA0MTU1MTg4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks", "name": "Benchmarks", "color": "2DF372", "default": false, "description": "Issues related to Memory regressions in tests and scripts" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "Thanks for raising this - it's an issue we've also faced with `optimum`'s test suite. Let me take a look and see if it's possible to use the tiny models as you suggest\r\n\r\ncc @echarlaix @philschmid ", "Keep the issue alive :-)", "Thanks for the ping - on my TODO list this week!" ]
1,661
1,692
1,692
COLLABORATOR
null
### Who can help? @lewtun @LysandreJik As shown in [this job run](https://github.com/huggingface/transformers/runs/8074936385?check_suite_focus=true), the ONNX tests now run in 5.5 hours. From https://github.com/huggingface/transformers/blob/73c6273d481f9052098a2a5a5f001fa75daaace9/tests/onnx/test_onnx_v2.py#L182 we see that the tests use real model checkpoints. As ONNX graph compilation is known to be slow and the models are quite big, this makes the tests very slow. The whole scheduled CI test suite now runs in 14.5 hours, and we have 2 test suites to run each day, so it requires 29 hours and can't finish in one day. This causes the test suites and their reports to be delayed a lot. We are wondering if it is possible to use tiny models from [hf-internal-testing](https://huggingface.co/hf-internal-testing) for the ONNX tests.
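To illustrate the proposal, here is a hedged sketch of exporting a tiny random checkpoint with the `transformers.onnx` API of that era (exact signatures may differ between versions; the tiny model avoids both the large download and the slow graph tracing):

```python
from pathlib import Path

from transformers import AutoModel, AutoTokenizer
from transformers.onnx import FeaturesManager, export

# Tiny random weights instead of a full-size real checkpoint.
checkpoint = "hf-internal-testing/tiny-random-bert"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Look up the ONNX config for this architecture/feature combination.
model_kind, onnx_config_cls = FeaturesManager.check_supported_model_or_raise(
    model, feature="default"
)
onnx_config = onnx_config_cls(model.config)

onnx_inputs, onnx_outputs = export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=Path("tiny-bert.onnx"),
)
```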
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18819/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18818/comments
https://api.github.com/repos/huggingface/transformers/issues/18818/events
https://github.com/huggingface/transformers/pull/18818
1,355,928,925
PR_kwDOCUB6oc4-Dmcl
18,818
Pin maximum TF version
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh the CI fail seems unrelated 🤔 do you know potential causes?", "Unfortunately no. Let me find some time to take a look, but I think you are good to merge :-)", "Yes I think that's an unrelated error linked to a new cache being made as the `setup.py` is updated. It's unrelated to the PR, but we need to have a look at what's going on.\r\n\r\nLet's merge this PR!\r\n\r\nThanks for your contribution :)" ]
1,661
1,666
1,661
MEMBER
null
# What does this PR do? We now also depend on `tensorflow-text`, whose minor versions are typically released a few days after new `tensorflow` releases. Tests against the `tensorflow` release candidate show that `tensorflow-text`-based functions fail when the two libraries do not have the same version. This PR pins the maximum TF version so that our CI doesn't break with the upcoming `tensorflow` release. When the corresponding `tensorflow-text` library gets released, we should be able to unpin it again.
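The coupling described above can also be asserted at runtime; a minimal sketch of the invariant the pin protects (assuming both packages are importable):

```python
import tensorflow as tf
import tensorflow_text as tf_text

# tensorflow-text releases track tensorflow minor versions; mismatched pairs
# are what broke the tf-text-based functions mentioned in the PR description.
assert tf.__version__.split(".")[:2] == tf_text.__version__.split(".")[:2], (
    f"tensorflow {tf.__version__} and tensorflow-text {tf_text.__version__} "
    "should share the same minor version"
)
```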
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18818/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18818", "html_url": "https://github.com/huggingface/transformers/pull/18818", "diff_url": "https://github.com/huggingface/transformers/pull/18818.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18818.patch", "merged_at": 1661933273000 }
https://api.github.com/repos/huggingface/transformers/issues/18817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18817/comments
https://api.github.com/repos/huggingface/transformers/issues/18817/events
https://github.com/huggingface/transformers/issues/18817
1,355,875,673
I_kwDOCUB6oc5Q0QVZ
18,817
Identifying backend compatibility versions
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "### **Past CI - PyTorch 1.11 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n#### General\r\n| no. | error | status |\r\n|-:|:-|:-|\r\n| 32 | NameError: name 'kenlm' is not defined | not installed |\r\n| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | fixed in #18303 |\r\n| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | fixed in #18531 |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined | fixed in #19010 |\r\n| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | fixed in #18303 |\r\n| 3 | ImportError: | `detectron2` and `accelerate` not installed |\r\n| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | fixed in #18303 |\r\n| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | fixed in #18303 |\r\n\r\n\r\n#### Per model\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |\r\n| owlvit | 21 | RuntimeError: Expected all tensors to be on the same device, | 12 |\r\n| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |\r\n| bloom | 6 | OSError: gs555750 is not a valid git identifier (branch name | 6 |\r\n| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |\r\n| layoutlmv2 | 2 | ImportError: | 2 |", "### **Past CI - PyTorch 1.10 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n#### General\r\n| no. | error | status |\r\n|-:|:-|:-|\r\n| 32 | NameError: name 'kenlm' is not defined | see PT 11 |\r\n| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | see PT 11 |\r\n| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | see PT 11 |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined | see PT 11 |\r\n| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | see PT 11 |\r\n| 4 | RuntimeError: Index is supposed to be an empty tensor or a vector | `torch._C` issue -> works wth PT >= 11 Fixed in #19122 |\r\n| 3 | ImportError: | see PT 11 |\r\n| 2 | AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05 | `self.assertEqual` is too strict. Fixed in #19200\r\n| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | see PT 11 |\r\n| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | see PT 11 |\r\n\r\n\r\n\r\n#### Per model\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |\r\n| owlvit | 21 | RuntimeError: Expected all tensors to be on the same device, | 12 |\r\n| bloom | 8 | OSError: gs555750 is not a valid git identifier (branch name | 6 |\r\n| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |\r\n| longt5 | 4 | RuntimeError: Index is supposed to be an empty tensor or a v | 4 |\r\n| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |\r\n| layoutlmv2 | 2 | ImportError: | 2 |", "### **Past CI - PyTorch 1.9 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n[errors-pt-1-9.txt](https://github.com/huggingface/transformers/files/9678016/errors-pt-1-9.txt)\r\n\r\n#### General\r\n| no. | error | status |\r\n|-:|:-|:-|\r\n| 50 | AttributeError: module 'torch' has no attribute 'pi' | Need PT >= 1.10. But we can use np.pi. 
See #19201 |\r\n| 44 | TypeError: meshgrid() got an unexpected keyword argument 'indexing' | `Vilt` needs PT >= 1.10 |\r\n| 32 | NameError: name 'kenlm' is not defined | see PT 11 |\r\n| 18 | AttributeError: module 'torchaudio.functional' has no attribute 'melscale_fbanks' | Need torchaudio >= 0.10. See #19203 |\r\n| 15 | RuntimeError: CUDA error: an illegal memory access was encountered | LeViT re-run OK |\r\n| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | see PT 11 |\r\n| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | see PT 11 |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined | see PT 11 |\r\n| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | see PT 11 |\r\n| 3 | ImportError: | see PT 11 |\r\n| 2 | RuntimeError: \"LayerNormKernelImpl\" not implemented for 'BFloat16' | fixed in #19261 |\r\n| 2 | AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05 | See PT 10 |\r\n| 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 places (5.7006835930906163e-05 difference | diff acceptable |\r\n| 2 | RuntimeError: Index is supposed to be an empty tensor or a vector | torch._C issue -> works wth PT >= 11 Fixed in https://github.com/huggingface/transformers/pull/19122 |\r\n| 2 | RuntimeError: Expected node type 'onnx::Constant' for argument 'num_classes' of node 'one_hot', got | test already skipped in #19122 (due to another error)|\r\n| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | see PT 11 |\r\n| 2 | TypeError: Caught TypeError in replica 0 on device 0. | Vilt needs PT >= 1.10 (`meshgrid` error) |\r\n| 1 | RuntimeError: transform: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access wa | See #20859 (opened) |\r\n| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | see PT 11 |\r\n\r\n#### Per model\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| maskformer | 50 | AttributeError: module 'torch' has no attribute 'pi' | 50 |\r\n| vilt | 46 | TypeError: meshgrid() got an unexpected keyword argument 'in | 44 |\r\n| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |\r\n| owlvit | 21 | RuntimeError: Expected all tensors to be on the same device, | 12 |\r\n| mctct | 18 | AttributeError: module 'torchaudio.functional' has no attrib | 18 |\r\n| levit | 16 | RuntimeError: CUDA error: an illegal memory access was encou | 15 |\r\n| bloom | 10 | OSError: gs555750 is not a valid git identifier (branch name | 6 |\r\n| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |\r\n| longt5 | 4 | RuntimeError: Index is supposed to be an empty tensor or a v | 2 |\r\n| flava | 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 p | 2 |\r\n| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |\r\n| layoutlmv2 | 2 | ImportError: | 2 |", "### **Past CI - PyTorch 1.8 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n[errors-pt-1-8.txt](https://github.com/huggingface/transformers/files/9678007/errors-pt-1-8.txt)\r\n\r\n#### General\r\n| no. 
| error | status |\r\n|-:|:-|:-|\r\n| 570 | AttributeError: module 'torch.jit._state' has no attribute '_clear_class_state' | WIP |\r\n| 50 | AttributeError: module 'torch' has no attribute 'pi' | See PT 1.9 |\r\n| 44 | TypeError: conv1d(): argument 'padding' (position 5) must be tuple of ints, not str | WIP |\r\n| 44 | TypeError: meshgrid() got an unexpected keyword argument 'indexing' | See PT 1.9 |\r\n| 30 | NameError: name 'kenlm' is not defined | see PT 11 |\r\n| 26 | AttributeError: module 'torch' has no attribute 'permute' | WIP |\r\n| 18 | AttributeError: module 'torchaudio.functional' has no attribute 'melscale_fbanks' | See PT 1.9 |\r\n| 12 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 | see PT 1.11 |\r\n| 8 | RuntimeError: einsum() operand subscript must be in range [a, z] but found B for operand 0 | WIP |\r\n| 6 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for | see PT 1.11 |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined | see PT 1.11 |\r\n| 6 | TypeError: forward() missing 1 required positional argument: 'attention_mask' | see PT 1.11 |\r\n| 4 | TypeError: Caught TypeError in replica 0 on device 0. | see PT 1.10 |\r\n| 3 | ImportError: | See PT 1.11 |\r\n| 2 | RuntimeError: \"LayerNormKernelImpl\" not implemented for 'BFloat16' | See PT 1.9 |\r\n| 2 | RuntimeError: \"min_cuda\" not implemented for 'BFloat16' | WIP |\r\n| 2 | AssertionError: 1.9311904907226562e-05 != 1.9431114196777344e-05 | See PT 10 |\r\n| 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 places (5.7006835930906163e-05 difference | See PT 9 |\r\n| 2 | AssertionError: False is not true |\r\n| 2 | RuntimeError: Expected node type 'onnx::Constant' for argument 'num_classes' of node 'one_hot', got | See PT 1.9 |\r\n| 2 | AssertionError: torch.Size([1, 2]) != torch.Size([1, 32]) | see PT 11 |\r\n| 2 | TypeError: CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got tuple) for re | WIP |\r\n| 2 | TypeError: save_for_backward can only save variables, but argument 2 is of type bool | WIP |\r\n| 2 | AttributeError: module 'torchaudio.functional' has no attribute 'resample' | WIP |\r\n| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. | see PT 11 |\r\n| 1 | AssertionError: 2.9253265857696533 != 2.925307273864746 within 1e-05 delta (1.9311904907226562e-05 d | diff acceptable\r\n\r\n\r\n\r\n#### Per model\r\n| model | no. 
of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| mctct | 64 | TypeError: conv1d(): argument 'padding' (position 5) must be | 44 |\r\n| maskformer | 50 | AttributeError: module 'torch' has no attribute 'pi' | 50 |\r\n| vilt | 46 | TypeError: meshgrid() got an unexpected keyword argument 'in | 44 |\r\n| owlvit | 33 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| wav2vec2_with_lm | 30 | NameError: name 'kenlm' is not defined | 30 |\r\n| longt5 | 26 | AttributeError: module 'torch.jit._state' has no attribute ' | 24 |\r\n| perceiver | 26 | AttributeError: module 'torch' has no attribute 'permute' | 26 |\r\n| bloom | 18 | OSError: gs555750 is not a valid git identifier (branch name | 6 |\r\n| prophetnet | 18 | AttributeError: module 'torch.jit._state' has no attribute ' | 18 |\r\n| data2vec | 18 | AttributeError: module 'torch.jit._state' has no attribute ' | 18 |\r\n| hubert | 14 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| realm | 14 | RuntimeError: einsum() operand subscript must be in range [a | 8 |\r\n| wav2vec2 | 14 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| clip | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| marian | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| opt | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| blenderbot | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| pegasus | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| t5 | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| funnel | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| mvp | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| mbart | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| plbart | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| blenderbot_small | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| bart | 12 | AttributeError: module 'torch.jit._state' has no attribute ' | 12 |\r\n| swin | 8 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| sew | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| resnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| xlm_roberta_xl | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| splinter | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| dpt | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| xlm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| speech_to_text_2 | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| dpr | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| squeezebert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| vit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| mobilebert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| convnext | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| xlnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| glpn | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| segformer | 6 | AttributeError: module 
'torch.jit._state' has no attribute ' | 6 |\r\n| cpm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| nezha | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| bigbird_pegasus | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| megatron_bert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| trocr | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| rembert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| van | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| mobilevit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| gpt_neox | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| openai | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| albert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| nystromformer | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| distilbert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| gpt2 | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| mpnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| roberta | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| deit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| unispeech | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| flaubert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| codegen | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| wavlm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| xglm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| roformer | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| regnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| bert_generation | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| convbert | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| beit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| transfo_xl | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| electra | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| ctrl | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| canine | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| groupvit | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| gptj | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| gpt_neo | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| fnet | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| fsmt | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| m2m_100 | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| layoutlm | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| layoutlmv2 | 2 | ImportError: | 2 |\r\n| flava | 2 | AssertionError: -198.98219299316406 != -198.98225 within 4 p | 2 |\r\n| trajectory_transformer | 2 | TypeError: save_for_backward can only save variables, but ar | 2 |", "### **Past CI 
- TensorFlow 2.8 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n#### General\r\n| no. | error |\r\n|-:|:-|\r\n| 66 | RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found. |\r\n| 30 | NameError: name 'kenlm' is not defined |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined |\r\n| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |\r\n| 4 | ImportError: |\r\n| 2 | NameError: name 'MaskFormerModel' is not defined |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [Op:Equa |\r\n| 1 | ValueError: You called `set_weights(weights)` on layer \"tf_segformer_for_image_classification_8\" wit |\r\n\r\n\r\n\r\n#### Per model\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |\r\n| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |\r\n| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |\r\n| speech_to_text | 4 | ImportError: | 4 |\r\n| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |\r\n| rembert | 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 2 |\r\n| segformer | 1 | ValueError: You called `set_weights(weights)` on layer \"tf_s | 1 |", "### **Past CI - TensorFlow 2.7 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n#### General\r\n| no. | error |\r\n|-:|:-|\r\n| 66 | RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found. |\r\n| 30 | NameError: name 'kenlm' is not defined |\r\n| 6 | TypeError: Invalid keyword argument(s) in `compile()`: ({'jit_compile'},). Valid keyword arguments i |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined |\r\n| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |\r\n| 4 | ImportError: |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 2 | NameError: name 'MaskFormerModel' is not defined |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [Op:Equa |\r\n| 1 | ValueError: You called `set_weights(weights)` on layer \"tf_segformer_for_image_classification_8\" wit |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n\r\n\r\n\r\n#### Per model\r\n| model | no. 
of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |\r\n| t5 | 10 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 1 |\r\n| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |\r\n| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |\r\n| speech_to_text | 4 | ImportError: | 4 |\r\n| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |\r\n| gptj | 2 | TypeError: Invalid keyword argument(s) in `compile()`: ({'ji | 2 |\r\n| bart | 2 | TypeError: Invalid keyword argument(s) in `compile()`: ({'ji | 2 |\r\n| gpt2 | 2 | TypeError: Invalid keyword argument(s) in `compile()`: ({'ji | 2 |\r\n| rembert | 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 2 |\r\n| segformer | 1 | ValueError: You called `set_weights(weights)` on layer \"tf_s | 1 |", "### **Past CI - TensorFlow 2.6 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n#### General\r\n| no. | error |\r\n|-:|:-|\r\n| 66 | RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found. |\r\n| 30 | NameError: name 'kenlm' is not defined |\r\n| 10 | ValueError: in user code: |\r\n| 6 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_compile'} |\r\n| 6 | NameError: name 'GPT2Tokenizer' is not defined |\r\n| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |\r\n| 4 | ImportError: |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when t |\r\n| 2 | NameError: name 'MaskFormerModel' is not defined |\r\n| 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError: required broadcastable shapes [Op:Equa |\r\n| 1 | ValueError: You called `set_weights(weights)` on layer \"tf_segformer_for_image_classification_8\" wit |\r\n| 1 | ValueError: Unable to save function b'__inference_tf_speech2text_model_25_layer_call_and_return_cond |\r\n| 1 | tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), d |\r\n| 1 | ValueError: Unable to save function b'__inference_tf_speech2text_model_25_layer_call_and_return_cond |\r\n\r\n\r\n\r\n#### Per model\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |\r\n| t5 | 10 | ValueError: in user code: | 10 |\r\n| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |\r\n| opt | 6 | NameError: name 'GPT2Tokenizer' is not defined | 6 |\r\n| speech_to_text | 6 | ImportError: | 4 |\r\n| bart | 3 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_c | 2 |\r\n| wav2vec2 | 2 | NameError: name 'kenlm' is not defined | 2 |\r\n| gptj | 2 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_c | 2 |\r\n| gpt2 | 2 | TypeError: Invalid keyword argument(s) in `compile`: {'jit_c | 2 |\r\n| rembert | 2 | tensorflow.python.framework.errors_impl.InvalidArgumentError | 2 |\r\n| segformer | 1 | ValueError: You called `set_weights(weights)` on layer \"tf_s | 1 |", "### **Past CI - TensorFlow 2.5 (Patch release: v4.21.2 | b487096b0)**\r\n\r\n#### General\r\n| no. 
| error |\r\n|-:|:-|\r\n| 70 | RuntimeError: Failed to import transformers.models.albert.modeling_tf_albert because of the followin |\r\n| 28 | NameError: name 'kenlm' is not defined |\r\n| 18 | RuntimeError: Failed to import transformers.models.gpt2.modeling_tf_gpt2 because of the following er |\r\n| 4 | NameError: name 'MaskFormerForInstanceSegmentation' is not defined |\r\n| 2 | RuntimeError: Failed to import transformers.models.t5.modeling_tf_t5 because of the following error |\r\n| 2 | RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the |\r\n| 2 | RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following er |\r\n| 2 | NameError: name 'MaskFormerModel' is not defined |\r\n\r\n\r\n\r\n#### Per model\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| wav2vec2_with_lm | 28 | NameError: name 'kenlm' is not defined | 28 |\r\n| maskformer | 6 | NameError: name 'MaskFormerForInstanceSegmentation' is not d | 4 |\r\n| squeezebert | 4 | RuntimeError: Failed to import transformers.models.albert.mo | 4 |\r\n| xglm | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| bert_generation | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| byt5 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| bloom | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| perceiver | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| layoutlmv2 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| bort | 2 | RuntimeError: Failed to import transformers.models.bert.mode | 2 |\r\n| tapex | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| plbart | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| barthez | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| layoutxlm | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| nllb | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| canine | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| layoutlmv3 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| xlm_prophetnet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| luke | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| mbart50 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| realm | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| mluke | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| bertweet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| mvp | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| big_bird | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| phobert | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| fnet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| speech_to_text_2 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| prophetnet | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| herbert | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| fsmt | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| codegen | 2 | 
RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| retribert | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| m2m_100 | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| bartpho | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |\r\n| reformer | 2 | RuntimeError: Failed to import transformers.models.albert.mo | 2 |", "I was trying to fix the kenlm issue, but I see it's correctly installed [here](https://github.com/huggingface/transformers/blame/main/docker/transformers-all-latest-gpu/Dockerfile#L49) and has been for a while. \r\n\r\nI guess it is an image issue?", "> I was trying to fix the kenlm issue, but I see it's correctly installed [here](https://github.com/huggingface/transformers/blame/main/docker/transformers-all-latest-gpu/Dockerfile#L49) and has been for a while.\r\n> \r\n> I guess it is an image issue?\r\n\r\nHi @LysandreJik. In fact, Past CI uses `transformers-past-gpu/Dockerfile`:\r\nhttps://github.com/huggingface/transformers/blame/main/docker/transformers-past-gpu/Dockerfile\r\n\r\nIt's probably arguable whether we should (or should not) include `kenlm`. I don't remember whether I had issues installing it. Maybe I did for older versions, so I decided not to install it for any version (to avoid confusion).\r\n\r\nWe can try with it in the next launch.", "I think we can add it, we've had it in the main file for 8 months so it's unlikely to cause an issue. Looking forward to the next launch!" ]
1,661
1,671
null
MEMBER
null
We are currently working on identifying the backend versions with which we are compatible and with which we want to be compatible. These backends are PyTorch and TensorFlow. We will be considering Flax at a later point in time. The first step was to identify the number of failures in each PyTorch/TensorFlow version; this was done in https://github.com/huggingface/transformers/issues/18181. Total number of tests: 38,991. | Framework | No. Failures | Release date | Older than 2 years | | :--------------- | ---------- | ---------- | ---------- | | PyTorch 1.10 | 50 | Oct 21 2021 | No | | PyTorch 1.9 | 710 | Jun 15 2021 | No | | PyTorch 1.8 | 1301 | Mar 4 2021 | No | | PyTorch 1.7 | 1567 | Oct 27 2020 | No | | PyTorch 1.6 | 2342 | Jul 28 2020 | Yes | | PyTorch 1.5 | 3315 | Apr 21 2020 | Yes | | PyTorch 1.4 | 3949 | Jan 16 2020 | Yes | | TensorFlow 2.8 | 118 | Feb 2 2022 | No | | TensorFlow 2.7 | 122 | Nov 4 2021 | No | | TensorFlow 2.6 | 122 | Aug 11 2021 | No | | TensorFlow 2.5 | 128 | May 13 2021 | No | | TensorFlow 2.4 | 167 | Dec 14 2020 | No | We're proposing to drop versions more than two years old and to work towards providing support (support = 0 failing tests) for the versions we aim to keep. We will drop support for older versions once they pass the two-year mark. Here is the proposed plan moving forward: - [ ] Have a detailed breakdown of failures for the following versions: - [ ] Torch 1.7 - [ ] Torch 1.8 - [ ] Torch 1.9 - [ ] Torch 1.10 - [ ] Torch 1.11 - [ ] Torch 1.12 - [ ] TensorFlow 2.4 - [ ] TensorFlow 2.5 - [ ] TensorFlow 2.6 - [ ] TensorFlow 2.7 - [ ] TensorFlow 2.8 - [ ] TensorFlow 2.9 - [ ] Start with an initial compatibility document mentioning which models are supported in which versions - [ ] Open good first issues to improve compatibility for models not compatible with all versions, starting from the latest one and moving back in time. - [ ] As versions become supported, run tests on older versions to ensure no regression. Work by @ydshieh and @LysandreJik ---------- ### Some context and tips when working on Past CI 1. The Past CI runs against a specific commit/tag: - **Motivation**: To be able to run the tests against the **same** commit to see if a set of fixes improves the overall backward compatibility without introducing new issues. - The chosen commit could be changed (to a more recent one) over time, but it should never be `main`. - When working on fixes for Past CI, keep in mind that we should **check the source code in the commit that is chosen for that particular Past CI run**. The commit is given at the beginning of each report provided in the following comments. 2. For each report, there is an attached `errors.txt` where you can find more information to ease the fix process: - The file contains a list whose elements have the following content: - The line where an error occurs - The error message - The complete name of the failed test - The link to the job that ran that failed test - The errors in the reports sometimes don't contain enough information to decide on an action. You can use the corresponding links provided in `errors.txt` to see the full traceback on the job run pages. 3. One possible fix process looks like this: - For a framework and a particular version, go to the corresponding reporting table provided in the following comments. - Make sure you have a preferred way to navigate the source code in a specific commit. - Download/Open the corresponding `errors.txt`. - From the `General` table, take a row whose `status` is empty. 
Ideally, take the ones with a higher value in the `no.` column. - Search in `errors.txt` for the `error` in the picked row. You get information about the failed line, the failed test, and the job link. - Navigate to the failed line or failed test in your workspace (or in a browser) that checks out to the specific commit for the run. - Use the job link to go to the job run page if you need more information about the error. - Then you might come up with a solution :-), or decide a fix is not necessary, with good reasons. - Update the `status` column with a comment once a fix or a decision is made. 4. Some guides/hints for the fix: - 🔥 To install a specific framework version, `utils/past_ci_versions.py` can help! - ⚠️ The tests are run against a chosen commit, which may not contain some fixes in the `main` branch. (This is particularly confusing if you try to run the failed test without checking out to that commit.) - If a failed test (in the report) passes when you run it against the `main` branch with the target framework version, it's very likely a fix exists on `main` that applies to the target framework version too. - In this case, - either update `status` with `fixed in #XXXXX` (if you know clearly which PR fixes that error) - or `works for commits since **b487096**` - a commit sha (it's not always trivial to find out which PR fixed a particular error - especially when working with Past CI) - We decided to focus on the PyTorch and TensorFlow versions, and not to consider other 3rd-party libraries. Therefore, some packages are not installed, like `kenlm` or `detectron2`. We can simply update the `status` column with `XXX not installed`. - When an error comes from a C/C++ exception, and the same code and inputs work for newer framework versions, we can skip that failed test with a `@unittest.skipIf` (see the sketch after this section), and update the status like `torch._C issue -> works with PT >= 11 Fixed in #19122`. - PR [#19122](https://github.com/huggingface/transformers/pull/19122) is one such example. - If an error occurs in several framework versions, say PT 11 and PT 10, and a status is updated for the newer version (here PT 11), we can simply put `see PT 11` in the report's `status` column for the older versions. - Some old framework versions lack attributes or arguments introduced in newer versions. See [#19201](https://github.com/huggingface/transformers/pull/19201) and [#19203](https://github.com/huggingface/transformers/pull/19203) for what a fix would look like in such cases. If a similar warning (to the one in [#19203](https://github.com/huggingface/transformers/pull/19203)) already exists, we could update `status` with, for example, `Vilt needs PT >= 1.10`. - Adding such a warning is not a fix in a strict sense, but at least it provides some information. Together with the updated `status`, it keeps the information tracked.
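For illustration, the `@unittest.skipIf` pattern referenced above reduces to a version-gated decorator. A minimal sketch, assuming `packaging` is available; the test body and the 1.11 threshold are placeholders rather than the actual test from #19122:

```python
import unittest

import torch
from packaging import version


class ExamplePastCITest(unittest.TestCase):
    # Version-gated skip: the failure comes from a C/C++-level torch._C issue
    # in old PyTorch, so we skip instead of fixing (placeholder threshold).
    @unittest.skipIf(
        version.parse(torch.__version__) < version.parse("1.11"),
        "torch._C issue - works with PT >= 1.11",
    )
    def test_torchscript_export(self):
        scripted = torch.jit.script(torch.nn.Linear(4, 4))
        self.assertIsNotNone(scripted)


if __name__ == "__main__":
    unittest.main()
```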
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18817/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/18816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18816/comments
https://api.github.com/repos/huggingface/transformers/issues/18816/events
https://github.com/huggingface/transformers/issues/18816
1,355,756,345
I_kwDOCUB6oc5QzzM5
18,816
New update breaks T5, gpt2, opt models (probably all models actually) if bitsandbytes is installed
{ "login": "ViktorThink", "id": 35969959, "node_id": "MDQ6VXNlcjM1OTY5OTU5", "avatar_url": "https://avatars.githubusercontent.com/u/35969959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ViktorThink", "html_url": "https://github.com/ViktorThink", "followers_url": "https://api.github.com/users/ViktorThink/followers", "following_url": "https://api.github.com/users/ViktorThink/following{/other_user}", "gists_url": "https://api.github.com/users/ViktorThink/gists{/gist_id}", "starred_url": "https://api.github.com/users/ViktorThink/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ViktorThink/subscriptions", "organizations_url": "https://api.github.com/users/ViktorThink/orgs", "repos_url": "https://api.github.com/users/ViktorThink/repos", "events_url": "https://api.github.com/users/ViktorThink/events{/privacy}", "received_events_url": "https://api.github.com/users/ViktorThink/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @ViktorThink. Could you check whether one of the following versions works?\r\n\r\n`bitsandbytes==0.31.8` or `bitsandbytes==0.31.5`", "Oh, I see what the issue was now...\r\n\r\nI had bitsandbytes installed on an instance with no CUDA device, and that raised the error.\r\n\r\nThank you very much for the quick reply! Highly appreciated!", "Thanks for pointing out this issue @ViktorThink!\r\nThis should be fixed in https://github.com/huggingface/transformers/pull/18859, which was merged recently 💪 " ]
1,661
1,662
1,661
NONE
null
### System Info It seems like a new update in the repo is causing an error for all models tested, including all the T5, gpt2 and opt models. The error only occurs if bitsandbytes is installed, I tried an earlier version of bitesandbytes and same problem occured. I made a colab to showcase it: https://colab.research.google.com/drive/1TSMLP3oPkAb-sBL_9l9KmXtRpP314Axc?usp=sharing It must have been an update made in the past few hours, since code I used earlier today suddenly raised this error: ``` Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1030, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 36, in <module> from ...modeling_utils import PreTrainedModel File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/modeling_utils.py", line 88, in <module> from .utils.bitsandbytes import get_key_to_not_convert, replace_8bit_linear, set_module_8bit_tensor_to_device File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/bitsandbytes.py", line 10, in <module> import bitsandbytes as bnb File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 6, in <module> from .autograd._functions import ( File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py", line 4, in <module> import bitsandbytes.functional as F File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/functional.py", line 14, in <module> from .cextension import COMPILED_WITH_CUDA, lib File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 41, in <module> lib = CUDALibrary_Singleton.get_instance().lib File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 37, in get_instance cls._instance.initialize() File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 15, in initialize binary_name = evaluate_cuda_setup() File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 136, in evaluate_cuda_setup cc = get_compute_capability(cuda) File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py", line 112, in get_compute_capability return ccs[-1] IndexError: list index out of range The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 462, 
in from_pretrained model_class = _get_model_class(config, cls._model_mapping) File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 359, in _get_model_class supported_models = model_mapping[type(config)] File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 583, in __getitem__ return self._load_attr_from_module(model_type, model_name) File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 597, in _load_attr_from_module return getattribute_from_module(self._modules[module_name], attr) File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 553, in getattribute_from_module if hasattr(module, attr): File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1020, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1032, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback): list index out of range ``` ### Who can help? @younesbelkada @TimDettmers ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction !pip install git+https://github.com/huggingface/transformers.git !pip install bitsandbytes==0.32.1 from transformers import AutoModel model = AutoModel.from_pretrained("gpt2") ### Expected behavior It should load the model, but it never does.
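For reference, the crash above happens at import time inside bitsandbytes when no CUDA device is visible. A minimal user-side guard is sketched below; this illustrates the workaround idea only and is not the fix that landed in #18859:

```python
import torch

# bitsandbytes 0.32.x probed CUDA compute capabilities at import time and
# raised IndexError on machines without a visible GPU, so import it only
# when CUDA is actually available.
if torch.cuda.is_available():
    import bitsandbytes as bnb  # noqa: F401
else:
    print("No CUDA device found; skipping bitsandbytes (8-bit loading unavailable).")

from transformers import AutoModel

# Plain fp32 loading does not need bitsandbytes at all.
model = AutoModel.from_pretrained("gpt2")
```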
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18816/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18815/comments
https://api.github.com/repos/huggingface/transformers/issues/18815/events
https://github.com/huggingface/transformers/pull/18815
1,355,631,199
PR_kwDOCUB6oc4-CmQk
18,815
MSN (Masked Siamese Networks) for ViT
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge, after studying the [pretraining script of MSN](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py) thoroughly I am still unsure of how to put together a `ViTMSNForPretraining` similar to `ViTMAEForPreTraining`. There are multiple moving pieces that I think are best off residing inside a standalone pretraining script:\r\n\r\n* A target encoder [updated with EMA](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py#L373).\r\n* [Learnable prototypes](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py#L217) that are needed to compute the final MSN loss. \r\n* [Target sharpening](https://github.com/facebookresearch/msn/blob/main/src/msn_train.py#L347) amongst other things. \r\n\r\nBoth the EMA and sharpening components operate with their own schedules. \r\n\r\nGiven this, I think it's best to resort to a separate pre-training script and use this model for feature extraction and fine-tuning. \r\n\r\nThere's an [ongoing discussion](https://github.com/facebookresearch/msn/issues/7) around releasing the weights of the linear classification layers and fine-tuned models. So when that's available, we could directly support those via `ViTMSNForImageClassification`. Regardless, I am happy to add a `ViTMSNForImageClassification` for easy access. \r\n\r\nWhat do you think? ", "Thanks for your PR! It would be great to have the `ViTMSNForImageClassification` even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.\r\n\r\nFor pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?", "> For pretraining, if multiple new pieces are needed, maybe it could go in a research project at first, where you can add more modules?\r\n\r\nSounds good to me. \r\n\r\n> Thanks for your PR! It would be great to have the ViTMSNForImageClassification even if there are no released weights for image classification, so users can already fine-tune the main checkpoint if they want.\r\n\r\nSure, I will continue the work from here on then. Thank you! ", "@sgugger @NielsRogge @amyeroberts ready for review.", "@sgugger @NielsRogge @amyeroberts a friendly nudge on the PR. ", "@sgugger addressed your comments. After the weights are transferred to the right org, I will open a PR there adding README. ", "Hi @sayakpaul . First, thank you for this PR 🤗 .\r\n\r\nThe doctest for this model is currently failing, as \r\nhttps://github.com/huggingface/transformers/blob/7e84723fe4e9a232e5e27dc38aed373c0c7ab94a/src/transformers/models/vit_msn/modeling_vit_msn.py#L646\r\nthis outputs the predicted label, but there is no expected value provided.\r\n\r\nThe [config](https://huggingface.co/facebook/vit-msn-small/blob/main/config.json) has `LABEL_0` ... `LABEL_999` in `id2label`, but I feel it should be the actual labels for the COCO dataset. \r\n\r\nCould you take a look for this config, as well as the missing expected outputs for the doctest? Thank you!\r\n\r\nHere is the failing doctest job:\r\n\r\nhttps://github.com/huggingface/transformers/actions/runs/3109562462/jobs/5039877349", "> The config has LABEL_0 ... LABEL_999 in id2label, but I feel it should be the actual labels for the COCO dataset. \r\n\r\nThe model was trained on ImageNet-1k. \r\n\r\nI will add the expected outputs. Thanks for flagging it. " ]
1,661
1,663
1,663
MEMBER
null
# What does this PR do? Adds the [MSN](https://arxiv.org/abs/2204.07141) checkpoints for ViT. MSN shines in few-shot regimes, which benefits real-world use cases. Later we could add a pre-training script so that people can perform MSN pre-training on their own datasets. Closes #18758 ## Who can review? @sgugger @NielsRogge @amyeroberts ## TODO - [x] Add documentation - [x] Add the rest of the files for repo consistency - [ ] Host MSN weights on the Facebook org on HF Hub (@NielsRogge ?) - [ ] Change the checkpoint paths wherever needed
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18815", "html_url": "https://github.com/huggingface/transformers/pull/18815", "diff_url": "https://github.com/huggingface/transformers/pull/18815.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18815.patch", "merged_at": 1663845303000 }
https://api.github.com/repos/huggingface/transformers/issues/18814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18814/comments
https://api.github.com/repos/huggingface/transformers/issues/18814/events
https://github.com/huggingface/transformers/pull/18814
1,355,442,002
PR_kwDOCUB6oc4-B9Aw
18,814
Add support for Japanese GPT-NeoX-based model by ABEJA, Inc.
{ "login": "SO0529", "id": 67080255, "node_id": "MDQ6VXNlcjY3MDgwMjU1", "avatar_url": "https://avatars.githubusercontent.com/u/67080255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SO0529", "html_url": "https://github.com/SO0529", "followers_url": "https://api.github.com/users/SO0529/followers", "following_url": "https://api.github.com/users/SO0529/following{/other_user}", "gists_url": "https://api.github.com/users/SO0529/gists{/gist_id}", "starred_url": "https://api.github.com/users/SO0529/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SO0529/subscriptions", "organizations_url": "https://api.github.com/users/SO0529/orgs", "repos_url": "https://api.github.com/users/SO0529/repos", "events_url": "https://api.github.com/users/SO0529/events{/privacy}", "received_events_url": "https://api.github.com/users/SO0529/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Impressive PR, @SO0529!\r\n\r\nI'd like either @younesbelkada or @ArthurZucker (or both :smile:) to have a look at your PR; they are both well versed in Japanese and have reviewed models in the past.\r\n\r\nThey're both on leave until the 9th of September; is it okay with you if we review this PR in about 1.5 weeks' time?\r\n\r\nThanks for your understanding!", "@LysandreJik \r\nThank you for the quick comment. Of course we can wait! We look forward to having you review this PR! ", "Hey @SO0529 ! We just came back, gonna take a look at this with @younesbelkada asap 😄 ", "@ArthurZucker @younesbelkada Thank you in advance for taking the time to review this PR!:smiley:", "Thanks for your valuable comments! I'll correct them one by one:muscle:", "Awesome, I will review again once you are done with that 😉 ", "Sorry, I found some mistakes; I'll re-push once I've fixed them.", "@younesbelkada @ArthurZucker \r\nI think I've addressed all your comments as below. I'd be glad if you could review this PR again:smile:\r\n\r\n> - We might need a bit more documentation about RowParallelLinear as it is a new module that is not present in the original GPT-NeoX implementation. I also think that the name can be changed, as there is no model parallelism involved (we just do an F.linear). Feel free to have a look at the BLOOM implementation to see [how we got rid of the output_bias variable](https://github.com/huggingface/transformers/blob/9faa9f9dacf8c818ab2513da3ef92ce66f39515d/src/transformers/models/bloom/modeling_bloom.py#L383) there. But I am not sure yet how to adapt this to your model, since the argument skip_bias_add might be important (for BLOOM this argument was set to False on all models).\r\n- I removed both `RowParallelLinear` and `ColumnParallelLinear` from this model.\r\n\r\n> - A small nit on the Attention block that can be easily fixed with the small change that I proposed, but I feel that the error initially came from using F.linear on the RowParallelLinear and ColumnParallelLinear modules (accelerate does not support torch.nn.functional functions, so that is why it might be related).\r\n- I used `nn.Linear` instead of `F.linear`.\r\n\r\n> - If some modules are entirely copied from the original GPT-NeoX implementation, you can just add a # Copied from... statement at the top of the class definition for better tracking 💪\r\n- I put a comment at the top of each class definition.\r\n\r\n> - Also not sure if we have to keep the bias_dropout_fn, as I experienced a very small throughput enhancement that we can neglect for better code readability. Feel free to have a look at [https://github.com/huggingface/transformers/blob/9faa9f9dacf8c818ab2513da3ef92ce66f39515d/src/transformers/models/bloom/modeling_bloom.py#L130](https://github.com/huggingface/transformers/pull/here) as we went through the same dilemma when integrating BLOOM from Megatron-DeepSpeed\r\n- I kept the `bias_dropout_add` function in order to use the bias param, but it has been changed from an unnecessarily hard-to-read style to a simple implementation.\r\n\r\nIn addition to the above, I added a simple `test_generate` function.\r\n\r\nAgain, thank you for taking the time to review this PR! ", "By the way, `build_and_test` and `ci/circleci: run_tests_torch` look to be failing on the model `tests/models/pegasus/test_modeling_pegasus.py`; how do I address this error?:worried:", "Hey! Awesome work, we will review again today!\r\nThe Pegasus test should not really be related to you, I will have a look 😊", "@younesbelkada \r\nThank you for the quick review!\r\nI modified the two items below following your comments.\r\n- I added documentation for `bias_dropout_add`.\r\n- I changed the coding style regarding `nn.ModuleList`.", "@sgugger \r\nThank you for taking the time to review this PR!\r\nI think I've addressed all your comments. Please take a look at the corrections.", "@sgugger \r\nThank you for taking the time again and again! I've applied your last comment:fire:\r\nWe are looking forward to it being merged!" ]
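For context, once the fused/JIT variants are dropped, the `bias_dropout_add` helper discussed above reduces to a few lines. A minimal sketch in the spirit of the simplified BLOOM version; the exact signature merged in this PR may differ:

```python
import torch
import torch.nn.functional as F


def bias_dropout_add(
    x: torch.Tensor, bias: torch.Tensor, residual: torch.Tensor, prob: float, training: bool
) -> torch.Tensor:
    # Add the separately kept bias, apply dropout, then add the residual
    # connection. Keeping the bias out of the matmul is what makes the
    # "skip bias add" pattern from Megatron-style training possible.
    out = F.dropout(x + bias, p=prob, training=training)
    return residual + out
```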
1,661
1,663
1,663
CONTRIBUTOR
null
# What does this PR do? This PR adds a new GPT NeoX Japanese Model and a new Tokenizer. The specific features are, - Trained for Japanese Dataset with specific preprocess. - Used [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) to train with Pipe. In addition, we removed bias parameters from the transformer blocks following [PaLM from Google](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html). - Applied Japanese special sub-word tokenizer to accommodate the distinctive structure of the Japanese. Japanese has a relatively large vocabulary and there is no separation between words. Furthermore, the language is a combination of hiragana, katakana, and kanji, and variants such as "1" and "①" are often used. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Thank you in advance to review this PR! - GPT model and tokenizer: @patrickvonplaten, @LysandreJik - Documentation: @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18814/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18814/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18814", "html_url": "https://github.com/huggingface/transformers/pull/18814", "diff_url": "https://github.com/huggingface/transformers/pull/18814.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18814.patch", "merged_at": 1663165060000 }
https://api.github.com/repos/huggingface/transformers/issues/18813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18813/comments
https://api.github.com/repos/huggingface/transformers/issues/18813/events
https://github.com/huggingface/transformers/issues/18813
1,355,437,443
I_kwDOCUB6oc5QylWD
18,813
Feature to highlight or color code the text from the NER output of token classification having offsets using python
{ "login": "pratikchhapolika", "id": 11159549, "node_id": "MDQ6VXNlcjExMTU5NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pratikchhapolika", "html_url": "https://github.com/pratikchhapolika", "followers_url": "https://api.github.com/users/pratikchhapolika/followers", "following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}", "gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}", "starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions", "organizations_url": "https://api.github.com/users/pratikchhapolika/orgs", "repos_url": "https://api.github.com/users/pratikchhapolika/repos", "events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}", "received_events_url": "https://api.github.com/users/pratikchhapolika/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!", "> Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n> \r\n> Thanks!\r\n\r\nIt's a feature request", "Ah, sorry, I misunderstood! In that case, I would say that this is unfortunately out of the scope of the repository. I would look into other utilities to provide color highlighting in outputs", "> Ah, sorry, I misunderstood! In that case, I would say that this is unfortunately out of the scope of the repository. I would look into other utilities to provide color highlighting in outputs\r\n\r\nWill it be possible to integrate `Displacy`?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
NONE
null
### Feature request I have fine-tuned a `Hugging Face` `token classification` model for an `NER task`. I use `pipeline` from Hugging Face to do prediction on test text data. I tag the data in `BIOL` format: `B stands for Beginning, I stands for Including, O means no entity, L means Last` **Example:** `Joh J Mathew` will be tagged as `B_PERSON` `I_PERSON` `L_PERSON` **Here is what the output looks like:** model = AutoModelForTokenClassification.from_pretrained("model_x") tokenizer = AutoTokenizer.from_pretrained("model_x") token_classifier = pipeline("token-classification", model=model, aggregation_strategy="max",tokenizer=tokenizer) text=("""'IOWA DRIVER LICENSE 1 SAMPLE 2 MARK LIMITED-TERM 8 123 NORTH STREET APT 201 DES MOINES, IA 50301-1234 Onom d DL No. 123XX6789 4a iss 1107/2016 4b exp 01/12/2021 15 Sex M 16 Hgt 5\'-08" 18 Eyes BRO 9a End NONE 9 Class C 12 Rest NONE Mark Sample DONOR MED ALERT: Y HEARING IMP: Y MED ADV DIR: Y 3 OOB 01/12/1967 5 DD 12345678901234567890123 NIVIA AL NA LANG ---- QUE EROL DE USA 01/12/67""") for ent in token_classifier(text): print(ent) {'entity_group': 'B_LAST_NAME', 'score': 0.9999994, 'word': 'SAMPLE', 'start': 23, 'end': 29} {'entity_group': 'B_FIRST_NAME', 'score': 0.99999905, 'word': '', 'start': 32, 'end': 33} {'entity_group': 'L_FIRST_NAME', 'score': 0.9999949, 'word': 'MARK', 'start': 32, 'end': 36} {'entity_group': 'B_ADDRESS', 'score': 0.9999989, 'word': '123', 'start': 52, 'end': 55} {'entity_group': 'I_ADDRESS', 'score': 0.99999917, 'word': 'NORTHSTREETAPT201DESMOINES,IA', 'start': 56, 'end': 91} {'entity_group': 'I_DRIVER_LICENSE_NUMBER', 'score': 0.9999995, 'word': '123XX6789', 'start': 118, 'end': 127} {'entity_group': 'L_ISSUE_DATE', 'score': 0.99999964, 'word': '1107/2016', 'start': 135, 'end': 144} {'entity_group': 'I_EXPIRY_DATE', 'score': 0.99999964, 'word': '01/12/2021', 'start': 152, 'end': 162} {'entity_group': 'B_PERSON_NAME', 'score': 0.99999905, 'word': 'Mark', 'start': 234, 'end': 238} {'entity_group': 'I_PERSON_NAME', 'score': 0.9999993, 'word': 'Sample', 'start': 239, 'end': 245} {'entity_group': 'L_DATE_OF_BIRTH', 'score': 0.99999976, 'word': '01/12/1967', 'start': 301, 'end': 311} So, given the offset values `entity_group`, `word`, `start`, `end`, how can I highlight the original text with the `entity_group` so that it is easy to visualize? **Final Output** [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/Ef8WB.png `Is there any Python library that I can use to do it?` ### Motivation Makes it easy to visualise the NER output ### Your contribution NA
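Not a transformers feature, but the `start`/`end` offsets in the pipeline output are enough to build the highlighting yourself. A minimal sketch with ANSI terminal colors follows; the color map is an arbitrary choice, and for the HTML rendering shown in the screenshot, spaCy's `displacy.render(..., style="ent", manual=True)` accepts the same kind of offset dictionaries:

```python
# Color each predicted span using the start/end offsets from the
# token-classification pipeline output.
COLORS = {"B_FIRST_NAME": "\033[92m", "B_LAST_NAME": "\033[91m"}
DEFAULT, RESET = "\033[93m", "\033[0m"


def highlight(text, entities):
    pieces, cursor = [], 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        color = COLORS.get(ent["entity_group"], DEFAULT)
        pieces.append(text[cursor:ent["start"]])
        pieces.append(color + text[ent["start"]:ent["end"]] + RESET)
        cursor = ent["end"]
    pieces.append(text[cursor:])
    return "".join(pieces)


print(highlight(
    "SAMPLE MARK 01/12/1967",
    [
        {"entity_group": "B_LAST_NAME", "start": 0, "end": 6},
        {"entity_group": "B_FIRST_NAME", "start": 7, "end": 11},
    ],
))
```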
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18813/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18812/comments
https://api.github.com/repos/huggingface/transformers/issues/18812/events
https://github.com/huggingface/transformers/pull/18812
1,355,430,282
PR_kwDOCUB6oc4-B6c2
18,812
Add TF implementation of LongT5
{ "login": "stancld", "id": 46073029, "node_id": "MDQ6VXNlcjQ2MDczMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stancld", "html_url": "https://github.com/stancld", "followers_url": "https://api.github.com/users/stancld/followers", "following_url": "https://api.github.com/users/stancld/following{/other_user}", "gists_url": "https://api.github.com/users/stancld/gists{/gist_id}", "starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stancld/subscriptions", "organizations_url": "https://api.github.com/users/stancld/orgs", "repos_url": "https://api.github.com/users/stancld/repos", "events_url": "https://api.github.com/users/stancld/events{/privacy}", "received_events_url": "https://api.github.com/users/stancld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18812). All of your documentation changes will be reflected on that endpoint.", "Hi @patrickvonplaten and @gante,\r\nFYI I've been gradually fixing some PT-TF discrepancies -- I should have some spare time again next weekend, so hopefully it will be ready for review then :]", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@stancld should I reopen the PR? :)", "Hi @gante, yes, I'd reopen it.\r\n\r\nI apologize for being so slow here, but I've been pretty busy lately. I'll try to finish this.", "No worries @stancld, take your time 🤗 And thank you for working on it!", "Hi @gante, I managed to fix some bugs. There are still some minor discrepancies between the PT and TF implementations. Would you mind having a first look to see if you spot any obvious differences, please? :]\r\n\r\nOtherwise, TF-only tests seem to be passing 🐰 \r\n\r\n(Btw, CI is passing, but the tests are failing locally, so I'm not really sure :D )", "@stancld Will have a look 👍 Can I have a copy of the error(s) you see locally? (I'm assuming on the slow tests)", "Also cc @ArthurZucker here", "> @stancld Will have a look 👍 Can I have a copy of the error(s) you see locally? (I'm assuming on the slow tests)\r\n\r\nSorry for the late reply. The `PT-TF` equivalence tests fail for me; basically, they report that the difference between outputs is too high.", "Hey @stancld ! Thanks for the addition! There are a few approaches we can take here. Sometimes the tolerance is a bit too high and part of the hidden states don't match but the final output does; in that case, we can lower the tolerance (maybe to around `4e-2`); otherwise, I will have a look! ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18812). All of your documentation changes will be reflected on that endpoint.", "Almost there I think :-) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
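For readers following along, the PT-TF equivalence check under discussion reduces to bounding the maximum elementwise difference between the two frameworks' outputs on identical inputs. A stripped-down sketch; the toy tensors stand in for real LongT5 outputs, and `4e-2` is just the tolerance floated above:

```python
import numpy as np
import tensorflow as tf
import torch


def max_abs_diff(pt_output: torch.Tensor, tf_output: tf.Tensor) -> float:
    # The quantity the equivalence test bounds against a tolerance.
    return float(np.max(np.abs(pt_output.detach().cpu().numpy() - tf_output.numpy())))


# In the real test, both tensors come from the PT and TF forward passes on
# the same inputs; toy tensors keep this sketch self-contained.
pt_logits = torch.ones(2, 3)
tf_logits = tf.ones((2, 3))
assert max_abs_diff(pt_logits, tf_logits) <= 4e-2
```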
1,661
1,675
1,675
CONTRIBUTOR
null
Fixes #18063 Add: - [ ] Fix PT-TF equivalence (Local) - [ ] Fix PT-TF equivalence (TGlobal) - [ ] Run all slow tests - [ ] Prepare TF checkpoints - [long-t5-local-base](https://huggingface.co/Stancld/long-t5-local-base) - [long-t5-local-large](https://huggingface.co/Stancld/long-t5-local-large) - [long-t5-tglobal-base](https://huggingface.co/Stancld/long-t5-tglobal-base) - [long-t5-tglobal-large](https://huggingface.co/Stancld/long-t5-tglobal-large) - long-t5-tglobal-xl https://github.com/huggingface/transformers/issues/19965 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18812", "html_url": "https://github.com/huggingface/transformers/pull/18812", "diff_url": "https://github.com/huggingface/transformers/pull/18812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18812.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18811/comments
https://api.github.com/repos/huggingface/transformers/issues/18811/events
https://github.com/huggingface/transformers/issues/18811
1,355,407,975
I_kwDOCUB6oc5QyeJn
18,811
Inconsistencies between `nn.functional.interpolate` and `tf.image.resize`
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Note that `align_corners=None` does give the same result as `tf.image.resize`, to an absolute tolerance of 1e-6 or so:\r\n\r\n```python\r\nupsampled_logits_pt = torch.nn.functional.interpolate(\r\n    dummy_logits_pt, size=interp_shape, mode=\"bilinear\", align_corners=None\r\n)\r\n```\r\n", "I faced a similar problem.\r\n\r\nWith `torch.nn.functional.interpolate`:\r\n- On `align_corners=True`, the best option is to use `tf.compat.v1.image.resize`\r\n- On `align_corners=False`, `tf.image.resize` does the trick.\r\n\r\nHere is a [colab notebook](https://colab.research.google.com/gist/ariG23498/39a20bd536ffaedd145310e2b1c4a1b6/scratchpad.ipynb) that details the solution that I proposed.\r\n\r\n@amyeroberts and @gante were against using `tf.compat.v1.image.resize` for obvious reasons. @amyeroberts did come up with a solution, which is documented in this [comment](https://github.com/huggingface/transformers/pull/18020#discussion_r953674162). I hope this provides some value to this thread.\r\n", "Thanks @hollance and @ariG23498 for chiming in. \r\n\r\nWhen there are one or two interpolation ops, the small differences are fine, but when they are done several times (as mentioned earlier), these differences compound, which creates mismatches in the final outputs. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Pinging this to keep it open - we've implemented TF versions of Torch ops like `AdaptivePool`, so this might be something to explore as well. I hope a performant solution can be implemented with native ops (with or without XLA compilation) without needing to write our own CUDA, though!", "> Pinging this to keep it open - we've implemented TF versions of Torch ops like AdaptivePool, so this might be something to explore as well. \r\n\r\nCould you point me to something relevant? Would love to see the new implementations. ", "Hi @sayakpaul, I'm extremely sorry! I thought we'd implemented it in `data2vec2` already, but checking the code it seems like we still have the old version there. Here's a [much, much more performant version](https://gist.github.com/Rocketknight1/efc47242914788def0144b341b1ad638).", "The new version will also allow XLA compilation, unlike the original sparse implementation", "Oh yeah, I remember this one. I need to update the data2vec code with your latest implementation. Reminded me of that. \r\n\r\nThanks, Matt! ", "Although, checking out that notebook, it seems like PyTorch's `interpolate` and TF's `resize` are using the same algorithm, and the differences are mostly numerical; when I switch the dtype to `float64`, the max absolute difference is about 1e-7.\r\n\r\nI think this will make it extremely hard to bring the accuracies any closer - we would have to rewrite the internals of the algorithm so that they're not just mathematically equivalent, but lose precision in the same places as well. I think we're stuck with the issue, unfortunately!", "Fair enough. The tiny differences just add up when there are multiple such interpolations. Otherwise, it's not a big deal. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Ping.\n\nI don't think it is resolved yet.", "I doubt it will ever be resolved apples-to-apples, considering https://github.com/huggingface/transformers/issues/18811#issuecomment-1262563941. \r\n\r\nWe need to be aware of this when comparing predictions of models (PT vs. TF) with stacks of interpolations, especially focusing on what tolerances we're using. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,669
1,669
MEMBER
null
### System Info Upsampling of intermediate feature maps is common for computer vision models. In PyTorch, it's usually implemented using [`nn.functional.interpolate`](https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html). For TensorFlow, it's usually [`tf.image.resize`](https://www.tensorflow.org/api_docs/python/tf/image/resize). But there is an inconsistency between what these two methods yield ([Colab Notebook](https://colab.research.google.com/gist/sayakpaul/be24f152d91d0f1cbe95d5cea9ae8b14/scratchpad.ipynb)). Sometimes the differences in their outputs are small enough to ignore ([ViT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit), for example). But when interpolation is repeated several times in a model ([MobileViT](https://github.com/huggingface/transformers/tree/main/src/transformers/models/mobilevit), for example), these small differences can add up and shake the final outputs of a model quite a bit. More details [here](https://github.com/huggingface/transformers/pull/18555#issuecomment-1229703811). @hollance wrote an [amazing blog post](https://machinethink.net/blog/coreml-upsampling/) discussing this issue. ### Who can help? @amyeroberts @gante @Rocketknight1 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The Colab Notebook mentioned in the description. ### Expected behavior We should work on a TF utility that yields the same output as `nn.functional.interpolate`.
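A minimal sketch of the kind of comparison described in the issue (shapes, sizes, and the tolerance here are illustrative, not taken from the linked notebook):

```python
import numpy as np
import tensorflow as tf
import torch

# Random feature map: NHWC for TF, NCHW for PyTorch.
x = np.random.randn(1, 8, 8, 3).astype(np.float32)

pt_out = torch.nn.functional.interpolate(
    torch.from_numpy(x).permute(0, 3, 1, 2),  # NHWC -> NCHW
    size=(17, 17),
    mode="bilinear",
    align_corners=False,
)
tf_out = tf.image.resize(x, (17, 17), method="bilinear")

# The two results agree only approximately; stacking several such ops
# lets these small numerical differences compound.
diff = np.abs(pt_out.permute(0, 2, 3, 1).numpy() - tf_out.numpy()).max()
print(f"max abs difference: {diff:.2e}")
```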
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18811/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18810/comments
https://api.github.com/repos/huggingface/transformers/issues/18810/events
https://github.com/huggingface/transformers/issues/18810
1,355,360,023
I_kwDOCUB6oc5QyScX
18,810
SSLError
{ "login": "LaddieTJC", "id": 103995451, "node_id": "U_kgDOBjLYOw", "avatar_url": "https://avatars.githubusercontent.com/u/103995451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LaddieTJC", "html_url": "https://github.com/LaddieTJC", "followers_url": "https://api.github.com/users/LaddieTJC/followers", "following_url": "https://api.github.com/users/LaddieTJC/following{/other_user}", "gists_url": "https://api.github.com/users/LaddieTJC/gists{/gist_id}", "starred_url": "https://api.github.com/users/LaddieTJC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LaddieTJC/subscriptions", "organizations_url": "https://api.github.com/users/LaddieTJC/orgs", "repos_url": "https://api.github.com/users/LaddieTJC/repos", "events_url": "https://api.github.com/users/LaddieTJC/events{/privacy}", "received_events_url": "https://api.github.com/users/LaddieTJC/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "It seems there is an issue with your SSL module and `requests`, not with `transformers`; I would head to Stack Overflow or another forum focused on these issues, as we're unfortunately unlikely to be able to help you out here.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
NONE
null
Version: Python 3.8 transformers 4.21.0 I am trying to use DistilBertTokenizer using the following line: `tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased", do_lower_case=True)` but an SSLError arises from the line above: `requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /distilbert-base-uncased/resolve/main/vocab.txt (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))` Did anyone manage to solve this? I tried some methods online but still got the same error.
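A possible workaround sketch, assuming the tokenizer files can be fetched once from an environment with a working SSL stack (another machine, or a browser) and copied over; loading from a local directory makes no HTTPS request:

```python
from transformers import DistilBertTokenizer

# Assumes ./distilbert-base-uncased/ contains the files downloaded elsewhere
# (vocab.txt, tokenizer_config.json, ...).
tokenizer = DistilBertTokenizer.from_pretrained(
    "./distilbert-base-uncased", do_lower_case=True
)
```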
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18810/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18809/comments
https://api.github.com/repos/huggingface/transformers/issues/18809/events
https://github.com/huggingface/transformers/issues/18809
1,354,911,525
I_kwDOCUB6oc5Qwk8l
18,809
Changing a single example for BLOOM 176-B affects forward pass for other examples in a batch
{ "login": "mayank31398", "id": 32954280, "node_id": "MDQ6VXNlcjMyOTU0Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/32954280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayank31398", "html_url": "https://github.com/mayank31398", "followers_url": "https://api.github.com/users/mayank31398/followers", "following_url": "https://api.github.com/users/mayank31398/following{/other_user}", "gists_url": "https://api.github.com/users/mayank31398/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayank31398/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayank31398/subscriptions", "organizations_url": "https://api.github.com/users/mayank31398/orgs", "repos_url": "https://api.github.com/users/mayank31398/repos", "events_url": "https://api.github.com/users/mayank31398/events{/privacy}", "received_events_url": "https://api.github.com/users/mayank31398/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey! It's a bit hard to run a testing env with bloom; can you share a reproducible script with a smaller model? \r\n\r\nThis looks like some instabilities from torch.bfloat16, and I'm willing to bet that those values come from there (both 3.28 occurrences are exactly the same, so it seems like a rounding error to me; we can perhaps check that those values are consecutive values in bfloat16, i.e. there's no value between 3.28 and 3.29). What I think might be happening is you're adding `pad` as you increase the length of the labels and those pad values change the behaviour of previous values. I don't think we have much control over this as this relies on `torch` operators usually.\r\n\r\nAlso if you can run on `main` that'd be great, typically https://github.com/huggingface/transformers/pull/18344 hasn't been incorporated yet in a release and I think it fixed a bunch of instabilities.", "Thanks @thomasw21 for taking a look at this. I will try to reproduce this with a smaller model (say GPT-2) and get back on this. I will also try the main branch.", "Also, since there are no batch-norm ops in BLOOM, I don't really understand why this should happen. Also, since the pads have been given an attention mask = 0, shouldn't the output be the same?\r\nMaybe I am understanding this incorrectly.", "hi @mayank31398 !\r\nThanks for pointing out this issue 💪 \r\nIf I wrap up what I have understood from your issue, when doing batched generation, changing the value of one of the labels changes the value of the loss function. If I understood correctly the labels are not used when inferring there, so the problem should occur when computing the loss (*i.e.,* the input text is always fixed, right?).\r\nI tried your script on the `main` branch using `gpt2` as below:\r\n```\r\nimport torch\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\nmodel_name = \"gpt2\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name,\r\n device_map=\"auto\",\r\n torch_dtype=torch.bfloat16,\r\n)\r\n\r\n# lm_logits = torch.randn((4, 11, 250880), dtype=torch.bfloat16)\r\n\r\ndef compute_gen_loss(lm_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:\r\n batch_size = labels.shape[0]\r\n shift_logits = lm_logits[..., :-1, :].contiguous()\r\n shift_labels = labels[..., 1:].contiguous()\r\n\r\n loss_fct = torch.nn.CrossEntropyLoss(reduction=\"none\")\r\n loss = loss_fct(\r\n shift_logits.view(-1, shift_logits.size(-1)),\r\n shift_labels.view(-1)\r\n )\r\n loss = loss.reshape(batch_size, -1)\r\n loss = loss.sum(dim=-1) / (shift_labels != -100).sum(dim=-1)\r\n return loss\r\n\r\ndef pad_ids(arrays, padding, max_length=-1):\r\n if (max_length < 0):\r\n max_length = max(list(map(len, arrays)))\r\n\r\n arrays = [[padding] * (max_length - len(array)) +\r\n array for array in arrays]\r\n\r\n return arrays\r\n\r\n\r\ndef forward(text: list, labels: str, conditional: bool = True):\r\n input_tokens = tokenizer(text).input_ids\r\n label_tokens = tokenizer(labels).input_ids\r\n\r\n input_ids = [x + y for (x, y) in zip(input_tokens, label_tokens)]\r\n attention_mask = [(len(x) + len(y)) * [1]\r\n for (x, y) in zip(input_tokens, label_tokens)]\r\n if (conditional):\r\n labels = [[-100] * len(x) + y for (x, y)\r\n in zip(input_tokens, label_tokens)]\r\n else:\r\n labels = input_ids\r\n\r\n pad = 3\r\n input_ids = pad_ids(input_ids, pad)\r\n attention_mask = pad_ids(attention_mask, 0)\r\n # labels need to be on output device\r\n labels = pad_ids(labels, 
-100)\r\n\r\n input_ids = torch.tensor(input_ids)\r\n attention_mask = torch.tensor(attention_mask)\r\n labels = torch.tensor(labels)\r\n lm_logits = model(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask\r\n ).logits\r\n\r\n print(compute_gen_loss(lm_logits, labels).cpu().tolist())\r\n\r\ntext = [\r\n \"DeepSpeed\",\r\n \"DeepSpeed is a\",\r\n \"DeepSpeed is a machine\",\r\n \"DeepSpeed is a machine learning framework\",\r\n]\r\nlabels = [\r\n \" is awesome.\",\r\n \" good person.\",\r\n \" that can wipe out the planet.\",\r\n \" for generating memes.\",\r\n]\r\nforward(text, labels)\r\n\r\nlabels[0] = \" is awesome. really awesome\"\r\nforward(text, labels)\r\n\r\nlabels[0] = \" is awesome. really awesome. Try it.\"\r\nforward(text, labels)\r\n\r\nlabels[0] = \" is awesome. really awesome. Try it. You'll be surprised\"\r\nforward(text, labels)\r\n\r\nlabels[0] = \" is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed.\"\r\nforward(text, labels)\r\n\r\nlabels[0] = \" is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed. Oh no the values are bugging out now.\"\r\nforward(text, labels)\r\n``` \r\nand getting \r\n```\r\n[10.3125, 7.0, 3.609375, 7.65625]\r\n[8.25, 7.0, 3.609375, 7.65625]\r\n[6.84375, 7.0, 3.609375, 7.65625]\r\n[3.78125, 7.09375, 6.9375, 8.5625]\r\n[4.34375, 9.5, 8.6875, 10.75]\r\n[4.53125, 9.6875, 9.0, 12.125]\r\n```\r\n\r\nI suspect that logits may be flaky when using half-precision models, therefore I second what @thomasw21 \r\n suspected ;) !", "Hey, first of all: sorry for the late reply.\r\nThanks for trying out my example with gpt2 @younesbelkada \r\nAny way to get around this then?\r\nI guess computing logits in bf16 might not be the best we can do?", "Okay, I think the gpt2 test isn't instability. Essentially it's the absolute positional embeddings that are screwing with you: as you increase the label size, you move things to the right and add padding to the left, which is why you see big shifts in the loss.\r\n\r\nI do think that the bloom test is instability. Typically `3.28125` and `3.296875` are consecutive.\r\n\r\n```\r\n>>> import torch\r\n>>> torch.set_printoptions(precision=10)\r\n>>> torch.frombuffer(bytes(np.array([83,64], np.int8)), dtype=torch.bfloat16)\r\ntensor([3.2968750000], dtype=torch.bfloat16)\r\n>>> torch.frombuffer(bytes(np.array([82,64], np.int8)), dtype=torch.bfloat16) # replace 83 with 82\r\ntensor([3.2812500000], dtype=torch.bfloat16)\r\n\r\n>>> torch.frombuffer(bytes(np.array([-94,64], np.int8)), dtype=torch.bfloat16)\r\ntensor([5.0625000000], dtype=torch.bfloat16)\r\n>>> torch.frombuffer(bytes(np.array([-93,64], np.int8)), dtype=torch.bfloat16)\r\ntensor([5.0937500000], dtype=torch.bfloat16)\r\n```\r\n\r\nSo as you said, you can try computing the logits in fp32, which will increase precision (but will be slower). There's a bit of a workaround needed, as you have to cast the embedding layers to fp32 and such.", "Everything makes sense in your explanation @thomasw21 ! Missed the absolute positional embedding part. Thanks for explaining it 💪 ", "I guess this is not a fixable problem then, right?\r\nI think even in BLOOM, ALiBi might be screwing with the attention values, right?\r\nSo, even if we have padded, the result will change.\r\nThanks for the clarification @thomasw21 .\r\n\r\nI think we can close this?", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
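A standalone sketch of the "consecutive bfloat16 values" check from the discussion above, without the raw-byte construction (the two values come from the reported losses):

```python
import torch

torch.set_printoptions(precision=10)

a = torch.tensor(3.28125, dtype=torch.bfloat16)
b = torch.tensor(3.296875, dtype=torch.bfloat16)

# The midpoint is not representable in bfloat16: it rounds to one of the two
# neighbours, i.e. there is no bfloat16 value strictly between a and b.
mid = torch.tensor((3.28125 + 3.296875) / 2, dtype=torch.bfloat16)
print(a, b, mid)
print((mid == a).item() or (mid == b).item())  # True
```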
1,661
1,665
1,665
CONTRIBUTOR
null
### System Info - `transformers` version: 4.21.2 - Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @thomasw21, @younesbelkada This issue is for unexpected BLOOM outputs. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I wrote this script to get the conditional NLL for the labels given the context. I tried different batches with only the first example changing and the rest of the examples fixed in the batch. However, after a certain point, changing the first example affects the NLL for the other examples. This is not supposed to happen. ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "bigscience/bloom" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", max_memory={0: '0GIB', 1: '51GIB', 2: '51GIB', 3: '51GIB', 4: '51GIB', 5: '51GIB', 6: '51GIB', 7: '51GIB'}, torch_dtype=torch.bfloat16, ) model.eval() def compute_gen_loss(lm_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor: batch_size = labels.shape[0] shift_logits = lm_logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() loss_fct = torch.nn.CrossEntropyLoss(reduction="none") loss = loss_fct( shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1) ) loss = loss.reshape(batch_size, -1) loss = loss.sum(dim=-1) / (shift_labels != -100).sum(dim=-1) return loss def pad_ids(arrays, padding, max_length=-1): if (max_length < 0): max_length = max(list(map(len, arrays))) arrays = [[padding] * (max_length - len(array)) + array for array in arrays] return arrays def forward(text: list, labels: str, conditional: bool = True): input_tokens = tokenizer(text).input_ids label_tokens = tokenizer(labels).input_ids input_ids = [x + y for (x, y) in zip(input_tokens, label_tokens)] attention_mask = [(len(x) + len(y)) * [1] for (x, y) in zip(input_tokens, label_tokens)] if (conditional): labels = [[-100] * len(x) + y for (x, y) in zip(input_tokens, label_tokens)] else: labels = input_ids pad = 3 input_ids = pad_ids(input_ids, pad) attention_mask = pad_ids(attention_mask, 0) # labels need to be on output device labels = pad_ids(labels, -100) input_ids = torch.tensor(input_ids) attention_mask = torch.tensor(attention_mask) labels = torch.tensor(labels) lm_logits = model( input_ids=input_ids, attention_mask=attention_mask ).logits print(compute_gen_loss(lm_logits, labels).cpu().tolist()) text = [ "DeepSpeed", "DeepSpeed is a", "DeepSpeed is a machine", "DeepSpeed is a machine learning framework", ] labels = [ " is awesome.", " good person.", " that can wipe out the planet.", " for generating memes.", ] forward(text, labels) labels[0] = " is awesome. really awesome" forward(text, labels) labels[0] = " is awesome. really awesome. Try it." forward(text, labels) labels[0] = " is awesome. really awesome. Try it. You'll be surprised" forward(text, labels) labels[0] = " is awesome. really awesome. Try it. You'll be surprised. 
BLOOM was trained using DeepSpeed." forward(text, labels) labels[0] = " is awesome. really awesome. Try it. You'll be surprised. BLOOM was trained using DeepSpeed. Oh no the values are bugging out now." forward(text, labels) ``` ```shell [4.8125, 5.1875, 3.296875, 5.09375] [5.625, 5.1875, 3.296875, 5.09375] [4.375, 5.1875, 3.296875, 5.09375] [4.0625, 5.1875, 3.28125, 5.09375] [3.953125, 5.1875, 3.28125, 5.0625] [4.25, 5.1875, 3.296875, 5.09375] ``` The value drops from 3.29 to 3.28 in column 2 when only the example for column 0 is changed. Even column 3 changes in the last case. Only column 0 is supposed to change here. ### Expected behavior ```shell [4.8125, 5.1875, 3.296875, 5.09375] [5.625, 5.1875, 3.296875, 5.09375] [4.375, 5.1875, 3.296875, 5.09375] [4.0625, 5.1875, 3.296875, 5.09375] [3.953125, 5.1875, 3.296875, 5.09375] [4.25, 5.1875, 3.296875, 5.09375] ```
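Following the fp32 suggestion from the discussion, one low-effort mitigation sketch (reusing the names from the script above): upcast the logits before the loss, so at least the cross-entropy runs in full precision. This does not remove bf16 rounding already baked into the logits; casting the embedding/output layers to fp32, as suggested in the comments, would go further.

```python
# Inside forward() from the repro above: upcast before computing the loss.
lm_logits = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
).logits

print(compute_gen_loss(lm_logits.float(), labels).cpu().tolist())
```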
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18809/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18808/comments
https://api.github.com/repos/huggingface/transformers/issues/18808/events
https://github.com/huggingface/transformers/pull/18808
1,354,249,381
PR_kwDOCUB6oc4998nn
18,808
[Update README] Add SegFormer and ViLT links
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Very cool! We've got a finetune guide for semantic segmentation in the works at #18640 right now, with hopefully VQA to come soon." ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? As we now also have inference widgets for semantic segmentation & visual question answering, it makes sense to add them to the main README.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18808", "html_url": "https://github.com/huggingface/transformers/pull/18808", "diff_url": "https://github.com/huggingface/transformers/pull/18808.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18808.patch", "merged_at": 1661791568000 }
https://api.github.com/repos/huggingface/transformers/issues/18807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18807/comments
https://api.github.com/repos/huggingface/transformers/issues/18807/events
https://github.com/huggingface/transformers/pull/18807
1,354,228,483
PR_kwDOCUB6oc4994Es
18,807
Support dynamic batch sizes for torchscript trace
{ "login": "MenglingD", "id": 9418558, "node_id": "MDQ6VXNlcjk0MTg1NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9418558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MenglingD", "html_url": "https://github.com/MenglingD", "followers_url": "https://api.github.com/users/MenglingD/followers", "following_url": "https://api.github.com/users/MenglingD/following{/other_user}", "gists_url": "https://api.github.com/users/MenglingD/gists{/gist_id}", "starred_url": "https://api.github.com/users/MenglingD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MenglingD/subscriptions", "organizations_url": "https://api.github.com/users/MenglingD/orgs", "repos_url": "https://api.github.com/users/MenglingD/repos", "events_url": "https://api.github.com/users/MenglingD/events{/privacy}", "received_events_url": "https://api.github.com/users/MenglingD/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThanks for your PR. Note that, when contributing, you need to run `make fixup` from the root of the repo, which will fix the code style & check the quality of the code. Normally here, it will complain that you need to run `make fix-copies`, which ensures that other models that rely on Swin's implementation also get updated (in this case, Donut).\r\n\r\n Thanks! ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18807). All of your documentation changes will be reflected on that endpoint.", "Hi, @MenglingD .\r\n\r\nIn order to make the CircleCI tests run, could you follow [this instruction](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) to refresh your CircleCI token, and let's see if it fixes the issue of the tests not running. \r\n\r\nThank you!" ]
1,661
1,661
1,661
NONE
null
# What does this PR do? Fixup for supporting dynamic batch for Swin-Transformer. Fixes #18806 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @novice03 @sgugger @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18807", "html_url": "https://github.com/huggingface/transformers/pull/18807", "diff_url": "https://github.com/huggingface/transformers/pull/18807.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18807.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18806/comments
https://api.github.com/repos/huggingface/transformers/issues/18806/events
https://github.com/huggingface/transformers/issues/18806
1,354,218,700
I_kwDOCUB6oc5Qt7zM
18,806
Swin is not traced correctly for dynamic batch sizes.
{ "login": "MenglingD", "id": 9418558, "node_id": "MDQ6VXNlcjk0MTg1NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9418558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MenglingD", "html_url": "https://github.com/MenglingD", "followers_url": "https://api.github.com/users/MenglingD/followers", "following_url": "https://api.github.com/users/MenglingD/following{/other_user}", "gists_url": "https://api.github.com/users/MenglingD/gists{/gist_id}", "starred_url": "https://api.github.com/users/MenglingD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MenglingD/subscriptions", "organizations_url": "https://api.github.com/users/MenglingD/orgs", "repos_url": "https://api.github.com/users/MenglingD/repos", "events_url": "https://api.github.com/users/MenglingD/events{/privacy}", "received_events_url": "https://api.github.com/users/MenglingD/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Wondering why this wasn't caught by the torchscript tests, cc @michaelbenayoun ", "The batch dimension is forcibly cast to an integer, which means Swin can't support dynamic batch sizes [modeling_swin.py#L218](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py#L218):\r\n\r\n```python3\r\ndef window_reverse(windows, window_size, height, width):\r\n \"\"\"\r\n Merges windows to produce higher resolution features.\r\n \"\"\"\r\n batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))\r\n windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)\r\n windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(batch_size, height, width, -1)\r\n return windows\r\n```\r\n\r\nAnd the following modification has the same semantics but supports dynamic batches:\r\n\r\n```python3\r\ndef window_reverse(windows, window_size, height, width):\r\n \"\"\"\r\n Merges windows to produce higher resolution features.\r\n \"\"\"\r\n channels = int(windows.shape[-1])\r\n windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, channels)\r\n windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, height, width, channels)\r\n return windows\r\n```\r\n\r\nP.S.: I opened a pull request for this before (#18807), but I can't run `make fixup` as my Python version is too old (3.6.8), so I closed that PR. I may need to trouble you to fix it.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @MenglingD,\r\nI am not even able to trace the model except if I disable `SwinLayer.set_shift_and_window_size` [here](https://github.com/huggingface/transformers/blob/v4.21-release/src/transformers/models/swin/modeling_swin.py#L649).\r\nBut when disabling it, I am able to get it to work with the change you recommended.\r\n\r\nI think we can apply this fix, but as long as `SwinLayer.set_shift_and_window_size` is not tracing friendly, it won't fix a thing.", "> Hi @MenglingD, I am not even able to trace the model except if I disable `SwinLayer.set_shift_and_window_size` [here](https://github.com/huggingface/transformers/blob/v4.21-release/src/transformers/models/swin/modeling_swin.py#L649). But when disabling it, I am able to get it to work with the change you recommended.\r\n> \r\n> I think we can apply this fix, but as long as `SwinLayer.set_shift_and_window_size` is not tracing friendly, it won't fix a thing.\r\n\r\nOK, thanks.\r\n\r\nI wonder why `SwinLayer.set_shift_and_window_size` is not tracing friendly. `input_dimensions` is fixed at the deployment phase, and that is OK for `jit.trace`, which will choose one branch of the if-else statement. I am interested in this trace error; could you please provide more details about it?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
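As a sanity check on the proposed change, a standalone comparison of the two `window_reverse` variants in eager mode (Swin-tiny-like shapes assumed: a 56×56 feature map, window size 7, 96 channels):

```python
import math

import torch


def window_reverse_old(windows, window_size, height, width):
    # Bakes the batch size in as a Python int, which breaks jit.trace.
    batch_size = math.floor(windows.shape[0] / (height * width / window_size / window_size))
    windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1)
    return windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(batch_size, height, width, -1)


def window_reverse_new(windows, window_size, height, width):
    # Keeps the batch dimension symbolic via -1 instead.
    channels = int(windows.shape[-1])
    windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, channels)
    return windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, height, width, channels)


# Identical outputs for several batch sizes in eager mode.
for batch_size in (1, 2, 4):
    w = torch.randn(batch_size * (56 // 7) * (56 // 7), 7, 7, 96)
    assert torch.equal(window_reverse_old(w, 7, 56, 56), window_reverse_new(w, 7, 56, 56))
```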
1,661
1,668
1,668
NONE
null
### System Info transformers: v4.21.2 system: centos python version: 3.6 ### Who can help? @sgugger @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python3 import torch from models import build_model from config import _C, _update_config_from_file config = _C.clone() _update_config_from_file(config, "configs/swin/swin_tiny_patch4_window7_224.yaml") model = build_model(config).cuda().eval() input1 = torch.randn((1, 3, 224, 224)).to("cuda") input2 = torch.randn((2, 3, 224, 224)).to("cuda") jit_model = torch.jit.trace(model, input1) assert((model(input1) - jit_model(input1)).abs().sum() == 0) assert((model(input2) - jit_model(input2)).abs().sum() == 0) ``` ### Expected behavior Expected nothing, but got `AssertionError`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18806/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18805/comments
https://api.github.com/repos/huggingface/transformers/issues/18805/events
https://github.com/huggingface/transformers/pull/18805
1,354,055,280
PR_kwDOCUB6oc499SJ-
18,805
Fix luke docstring
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? The output logits of `LukeForEntitySpanClassification` are of shape `(batch_size, entity_length, config.num_labels)` instead of `(batch_size, config.num_labels)`, which is the shape for `LukeForEntityClassification`. https://github.com/huggingface/transformers/blob/8b67f20935e48b26c5803cf31e0e89b9cfaa22ab/tests/models/luke/test_modeling_luke.py#L402-L404 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
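To illustrate the corrected shape, a small check along the lines of the model's documented example (checkpoint name taken from the LUKE docs):

```python
import torch
from transformers import LukeForEntitySpanClassification, LukeTokenizer

model_name = "studio-ousia/luke-large-finetuned-conll-2003"
tokenizer = LukeTokenizer.from_pretrained(model_name)
model = LukeForEntitySpanClassification.from_pretrained(model_name)

text = "Beyoncé lives in Los Angeles"
entity_spans = [(0, 7), (17, 28)]  # character spans of "Beyoncé" and "Los Angeles"

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One logit vector per provided span, not one per example:
print(outputs.logits.shape)  # (batch_size=1, entity_length=2, config.num_labels)
```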
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18805/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18805", "html_url": "https://github.com/huggingface/transformers/pull/18805", "diff_url": "https://github.com/huggingface/transformers/pull/18805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18805.patch", "merged_at": 1661855446000 }
https://api.github.com/repos/huggingface/transformers/issues/18804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18804/comments
https://api.github.com/repos/huggingface/transformers/issues/18804/events
https://github.com/huggingface/transformers/pull/18804
1,354,053,484
PR_kwDOCUB6oc499Rxd
18,804
Fix mock in `test_cached_files_are_used_when_internet_is_down`
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you, @Wauplin . As my knowledge of `Mock` is also mocked, here is my noob question:\r\n\r\nThe `response` object here\r\n```\r\nserver_message = response.json().get(\"error\", None)\r\n```\r\nis the `response_mock` in\r\n```\r\nresponse_mock.json.return_value = {}\r\n```\r\n?\r\n\r\nAnd if we don't set `return_value` as done in this PR, is `response.json()[\"error\"]` also a mocked object? (Is it `response_mock` itself, or another one?)\r\n", "Hi @ydshieh, I'll try to explain the PR a bit.\r\n\r\nIn the tests, we are defining a Mock object `response_mock`. In Python, Mocks are objects on which you can access any attribute, and doing so returns a new mock object. Since object methods are also attributes, they are mocked as well.\r\nHere is a short example to understand it better:\r\n\r\n```py\r\n>>> from unittest.mock import Mock\r\n>>> my_mock = Mock()\r\n\r\n# the mock we defined\r\n>>> my_mock \r\n<Mock id='140556566369952'>\r\n\r\n# `foo` is also a mock, with a different id\r\n>>> my_mock.foo \r\n<Mock name='mock.foo' id='140556566370816'>\r\n\r\n# `foo` can be called as a function and returns a new mock with a different id\r\n>>> my_mock.foo() \r\n<Mock name='mock.foo()' id='140556566424112'>\r\n\r\n# `.foo()` is a mock so you can call anything from it\r\n>>> my_mock.foo().bar \r\n<Mock name='mock.foo().bar' id='140556530612016'>\r\n\r\n# now we set a custom return value for `.foo()`\r\n>>> my_mock.foo.return_value = 4 \r\n\r\n# `.foo` is still a mock\r\n>>> my_mock.foo\r\n<Mock name='mock.foo' id='140556566370816'>\r\n\r\n# `.foo()` is now a \"normal\" value, here the integer 4\r\n>>> my_mock.foo() \r\n4\r\n\r\n# 4 is not a mock, so we cannot call `.bar` on it\r\n>>> my_mock.foo().bar\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nAttributeError: 'int' object has no attribute 'bar'\r\n```\r\n\r\nThe goal of a mock is that almost any manipulation done on it will not fail. It will continue to pass mocked values, which in the end you can test. There are a few tweaks you can do on a mock, like raising an error (which is done with the `side_effect` in the implemented test) or returning a special value (like in this PR).\r\n\r\n---\r\nTo come back to your initial question, when \"requests.request\" is patched, it means that if `requests.request` is called in this context, the real request will not be made; instead the `response_mock` will be returned. This is done so that the actual call is never made.\r\n\r\nSo yes, when in `huggingface_hub` there is a `response.json().get(\"error\", None)`, it is actually `response_mock.json().get(\"error\", None)`. Since a return value is set for `response_mock.json()`, this is equivalent to doing `{}.get(\"error\", None)` (which is None).\r\n\r\n```py\r\n with mock.patch(\"requests.request\", return_value=response_mock) as mock_head:\r\n _ = BertConfig.from_pretrained(\"hf-internal-testing/tiny-random-bert\")\r\n```\r\n\r\nAnd finally, to be complete: mock objects are also nice because they count how many calls have been made to them. So at the end of the test, the `mock_head.assert_called()` means that we expect the mock to have been called at least once; otherwise the test would fail.\r\n\r\nHope this helps you understand what mocks are doing :)", "@ydshieh I answered your question in the previous comment but posted it while it was half-written by mistake. 
Now the explanation is complete :)\r\nPlease let me know if you have other questions. Otherwise I'll let you merge it." ]
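As a small complement to the explanation above, a runnable sketch of the `side_effect` tweak it mentions (raising instead of returning):

```python
from unittest.mock import Mock

response_mock = Mock()
response_mock.raise_for_status.side_effect = ValueError("server down")

try:
    response_mock.raise_for_status()
except ValueError as err:
    print(err)  # server down

# Mocks also record how many times they were called.
print(response_mock.raise_for_status.call_count)  # 1
```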
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? Fix the CI that is currently breaking because of the [0.9.1 patch release](https://github.com/huggingface/huggingface_hub/releases/tag/v0.9.1) of `huggingface_hub`. The problem is that we look at the response from the server when an HTTPError occurs. In the tests, the response is mocked, which makes `response.json()` a mock instead of a dictionary. I now set it to `{}`, which means an empty response from the server. See [slack thread](https://huggingface.slack.com/archives/C01NE71C4F7/p1661526937492729) (internal link) for more context. # Expected result The CI should now pass correctly.
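The test setup presumably looks roughly like this after the fix (the mocked fields here are an educated guess, not copied from the test file):

```python
from unittest import mock

from requests.exceptions import HTTPError
from transformers import BertConfig

# First call populates the local cache over the real network.
_ = BertConfig.from_pretrained("hf-internal-testing/tiny-random-bert")

# Mocked "internet is down" response; json() now returns a dict, so
# `response.json().get("error", None)` in huggingface_hub gets `{}` instead
# of a Mock object.
response_mock = mock.Mock()
response_mock.status_code = 500
response_mock.headers = {}
response_mock.raise_for_status.side_effect = HTTPError
response_mock.json.return_value = {}

with mock.patch("requests.request", return_value=response_mock) as mock_head:
    # Must fall back to the cached files instead of failing.
    _ = BertConfig.from_pretrained("hf-internal-testing/tiny-random-bert")

mock_head.assert_called()
```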
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18804", "html_url": "https://github.com/huggingface/transformers/pull/18804", "diff_url": "https://github.com/huggingface/transformers/pull/18804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18804.patch", "merged_at": 1661781368000 }
https://api.github.com/repos/huggingface/transformers/issues/18803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18803/comments
https://api.github.com/repos/huggingface/transformers/issues/18803/events
https://github.com/huggingface/transformers/pull/18803
1,353,971,320
PR_kwDOCUB6oc498_0W
18,803
[Swin, Swinv2] Fix attn_mask dtype
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? This PR fixes the dtype of the `attn_mask`, making mixed precision training possible. Fixes #17481
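A hedged sketch of the failure mode and the kind of cast involved (shapes and variable names are illustrative; the actual diff in the modeling file may differ):

```python
import torch

# Under fp16/bf16 autocast the attention scores are half precision, while the
# shifted-window mask is built with torch.zeros and stays float32.
scores = torch.randn(2, 4, 49, 49, dtype=torch.float16)  # attention scores
attn_mask = torch.zeros(49, 49)                          # float32 by default

# The fix: move the mask to the activations' dtype before adding it, so the
# sum stays in half precision instead of being silently promoted to float32.
attn_mask = attn_mask.to(scores.dtype)
scores = scores + attn_mask
print(scores.dtype)  # torch.float16
```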
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18803/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18803", "html_url": "https://github.com/huggingface/transformers/pull/18803", "diff_url": "https://github.com/huggingface/transformers/pull/18803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18803.patch", "merged_at": 1661855494000 }
https://api.github.com/repos/huggingface/transformers/issues/18802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18802/comments
https://api.github.com/repos/huggingface/transformers/issues/18802/events
https://github.com/huggingface/transformers/issues/18802
1,353,931,331
I_kwDOCUB6oc5Qs1pD
18,802
`load_tf_weights` doesn't handle the weights added to the TF models at the top level
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "Related to #18149.", "cc @patrickvonplaten as we might need to change the core method `load_tf_weights`." ]
1,661
1,662
1,662
COLLABORATOR
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.9.11 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @gante ### Reproduction (TF)MarianMTModel has the weight `final_logits_bias` added at the top level (i.e. not under any layer) https://github.com/huggingface/transformers/blob/5f06a09b9f3f05b4860f11bbbe22861923b49d81/src/transformers/models/marian/modeling_tf_marian.py#L1287 However, the method `load_tf_weights` only handles weights under some layers https://github.com/huggingface/transformers/blob/5f06a09b9f3f05b4860f11bbbe22861923b49d81/src/transformers/modeling_tf_utils.py#L850 This causes problems when we load TF checkpoints for `TFMarianMTModel`, i.e. `final_logits_bias` is not loaded. ```python from transformers import MarianMTModel, TFMarianMTModel model_name = "Helsinki-NLP/opus-mt-en-ROMANCE" pt_model = MarianMTModel.from_pretrained(model_name) tf_model_from_pt = TFMarianMTModel.from_pretrained(model_name, from_pt=True) tf_model = TFMarianMTModel.from_pretrained(model_name, from_pt=False) # Only has `TFMarianMainLayer` in `layers` print(tf_model.layers) print(pt_model.final_logits_bias.numpy()) print(tf_model_from_pt.final_logits_bias.numpy()) print(tf_model.final_logits_bias.numpy()) ``` Outputs: ```bash [<transformers.models.marian.modeling_tf_marian.TFMarianMainLayer object at 0x000001F00ECE9940>] [[11.757146 -1.7759448 -7.3816853 ... -1.6559223 -1.6663467 0. ]] [[11.757146 -1.7759448 -7.3816853 ... -1.6559223 -1.6663467 0. ]] [[0. 0. 0. ... 0. 0. 0.]] ``` ### Expected behavior `load_tf_weights` should be able to load weights like `final_logits_bias`, and the TF checkpoint should be loaded correctly.
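Until `load_tf_weights` handles top-level variables, a hedged stopgap (reusing `pt_model` and `tf_model` from the snippet above) is to copy the missing weight over manually:

```python
# Hypothetical workaround: overwrite the all-zero `final_logits_bias` in the
# TF model with the values from the PyTorch checkpoint.
tf_model.final_logits_bias.assign(pt_model.final_logits_bias.numpy())
print(tf_model.final_logits_bias.numpy())  # now matches pt_model
```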
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18802/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18801/comments
https://api.github.com/repos/huggingface/transformers/issues/18801/events
https://github.com/huggingface/transformers/pull/18801
1,353,914,414
PR_kwDOCUB6oc498zow
18,801
Add security warning about the from_pretrained() method
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Does the [malware scanner](https://huggingface.co/docs/hub/security-malware) catch malicious code injection for all Hub repos (public and private)?\r\n\r\nIt doesn't \"catch malicious code injection\" per se, it extracts the list of module-function pairs that can be called when unpickling. We still haven't implemented anything on top of that.\r\n\r\nTo answer your question, we're only scanning public repositories atm.", "_The documentation is not available anymore as the PR was closed or merged._", "Should a mention of the malware scanner also be added to the sentence? Right now it reads like using the Hub is a security issue whereas it's really `torch.load` that has a security issue; and the Hub enables verifying that the code is malware-free with both the malware scanner and signed commits.\r\n\r\nI'd rather focus on showing how using the Hub is likely a smaller security risk than using an external tool like GDrive where no such verification can be done.", "> Should a mention of the malware scanner also be added to the sentence? Right now it reads like using the Hub is a security issue whereas it's really `torch.load` that has a security issue; and the Hub enables verifying that the code is malware-free with both the malware scanner and signed commits.\r\n> \r\n> I'd rather focus on showing how using the Hub is likely a smaller security risk than using an external tool like GDrive where no such verification can be done.\r\n\r\nGood idea! Added a sentence about the scanner and reworded the text in [8098c4b](https://github.com/huggingface/transformers/pull/18801/commits/8098c4bede79f2e3f95308fbd47fd5b326f502e3)", "I'm not disagreeing that the HF hub is a smaller risk compared to other things, just pointing out that the malware scanner currently lists my model as safe (and due to the mechanics of torch.save, it will always be evadable). And signed commits are super useful, but they don't mitigate this particular problem, as the signatory would be me, and therefore the signature valid.", "> due to the mechanics of torch.save, it will always be evadable\r\n\r\n@yk would you mind expanding on that ?", "> @yk would you mind expanding on that ?\r\n\r\ntorch.save uses pickle, which in turn allows for arbitrary code execution. If the malware scanner detects loaded modules, I can simply un-load them after I've used them. If the malware scanner looks for the presence of certain instructions, strings, etc. I can always evade it somehow, that's just the nature of turing-completeness. I could probably even DDOS the malware scanner by just running an infinite loop. I'm happy to show more, and I'm going to, but I was waiting to release that info publicly before going through the responsible disclosure process here. DM me in case you want an early insight.", "> If the malware scanner detects loaded modules, I can simply un-load them after I've used them.\r\n\r\nI get the list of module-function pairs directly from the pickle opcodes ([`pickletools.genops`](https://docs.python.org/3/library/pickletools.html#pickletools.genops)), without executing anything. We went through some pretty [sophisticated exploits](https://ctftime.org/writeup/16723) and from what I saw when replicating there is always a trace of the module/function, e.g. 
if you wanted to run `eval` but alias it, you would still see the original `eval` reference.\r\n\r\n> I could probably even DDOS the malware scanner by just running an infinite loop.\r\n\r\nSince I'm going through the instruction list in the pickle file, there would be no DDOS possible via code execution. You could always generate a massive pickle file to bloat the scanner, but then you'd run into issues like uploading files to the hub before our scanner even goes through them + we can easily add heuristics to mark these files as inherently unsafe.\r\n\r\n> If the malware scanner looks for the presence of certain instructions, strings, etc. I can always evade it somehow, that's just the nature of turing-completeness.\r\n\r\nI'd like to see how you do that, happy to have the early insight, but I can wait for the public release :)\r\n\r\nNote that you cannot serialize functions or lambdas in a pickle file, you can only execute functions that are in scope at deserialization time.\r\n\r\ncc @adrinjalali\r\n\r\nEDIT: thank you for expanding !", "It seems there are actually ways to DDOS the scanner, we currently wouldn't catch the sploits [referenced here](https://github.com/moreati/pickle-fuzz#denial-of-service) (meaning we'd proceed to deserialization).", "> I get the list of module-function pairs directly from the pickle opcodes\r\n\r\nfair point, then I interpreted your previous statement in the wrong way. yea that could actually work to mitigate some of these things. At this time, my model here is still marked as safe, though: https://huggingface.co/ykilcher/totally-harmless-model/tree/main", "> At this time, my model here is still marked as safe, though: https://huggingface.co/ykilcher/totally-harmless-model/tree/main\r\n\r\nThat's normal, at the time we haven't implemented any checks per se, we just extract the data from the pickles on the hub.\r\n\r\n(coming soon)", "> Should a mention of the malware scanner also be added to the sentence? Right now it reads like using the Hub is a security issue whereas it's really torch.load that has a security issue; and the Hub enables verifying that the code is malware-free with both the malware scanner and signed commits.\r\n\r\n@LysandreJik saving and loading pickle files is only insecure if the author is not _trusted_, which is the case for the HF hub. The hub is a place where people share arbitrary pickle files, so it becomes a hub issue. It wouldn't be an issue if people kept using pickles from sources they trust. And the scanner doesn't really detect all vulnerabilities, and can't. For instance, when joblib does an `eval` on a given `n_jobs` parameter, there you could do whatever you want. In terms of signatures, we check signatures, but we don't check who the people behind the accounts are (and I don't think we should).\r\n\r\nSo I do think people should be wary when they load pickles from the hub.\r\n\r\nNote that AFAIK in pretty much all major communities (like pytorch, sklearn, etc) people know about this and are working on having better solutions. But for now, users should be aware of the risks.", "BTW @McPatate as discussed it would be awesome if we could write a post (or even some documentation) about what we currently know about pickle safety, and the potential next steps we are working on.\r\n\r\nAnd maybe @yk you'd be interested in taking a look? 
Would be awesome to have your insights :)", "@julien-c sure, I'm happy to contribute what I know", "hi @yk the team (@McPatate in particular) wrote this document https://huggingface.co/docs/hub/security-pickle the doc's source is at https://github.com/huggingface/hub-docs/pull/294 – would love to get your feedback (including on the proposed solutions) Thanks! " ]
1,661
1,662
1,661
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds a warning to the docs about the `from_pretrained()` method being susceptible to malicious code injection. This is similar to other warnings provided in the [PyTorch](https://pytorch.org/docs/stable/generated/torch.load.html#torch.load) and [Python](https://docs.python.org/3/library/pickle.html#module-pickle) docs. For now I've put this in the autoclass tutorial, but can also put it in the API docs if we agree this makes sense. Questions: * Does the same issue apply to TensorFlow models? * Does the [malware scanner](https://huggingface.co/docs/hub/security-malware) catch malicious code injection for _all_ Hub repos (public and private)? h/t to @yk who pointed this out to me. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> cc @sgugger @McPatate
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18801/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18801/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18801", "html_url": "https://github.com/huggingface/transformers/pull/18801", "diff_url": "https://github.com/huggingface/transformers/pull/18801.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18801.patch", "merged_at": 1661975321000 }
https://api.github.com/repos/huggingface/transformers/issues/18800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18800/comments
https://api.github.com/repos/huggingface/transformers/issues/18800/events
https://github.com/huggingface/transformers/pull/18800
1,353,882,528
PR_kwDOCUB6oc498s1w
18,800
Fix gradient checkpointing tests for `encoder-decoder` models
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry about that! Thanks for fixing @ydshieh " ]
1,661
1,662
1,661
COLLABORATOR
null
# What does this PR do? The recently added tests (#18697) didn't send the model to the correct device and caused some tests to fail. This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18800", "html_url": "https://github.com/huggingface/transformers/pull/18800", "diff_url": "https://github.com/huggingface/transformers/pull/18800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18800.patch", "merged_at": 1661791590000 }
https://api.github.com/repos/huggingface/transformers/issues/18799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18799/comments
https://api.github.com/repos/huggingface/transformers/issues/18799/events
https://github.com/huggingface/transformers/issues/18799
1,353,835,104
I_kwDOCUB6oc5QseJg
18,799
torchmetrics support
{ "login": "izapolsk", "id": 21039333, "node_id": "MDQ6VXNlcjIxMDM5MzMz", "avatar_url": "https://avatars.githubusercontent.com/u/21039333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/izapolsk", "html_url": "https://github.com/izapolsk", "followers_url": "https://api.github.com/users/izapolsk/followers", "following_url": "https://api.github.com/users/izapolsk/following{/other_user}", "gists_url": "https://api.github.com/users/izapolsk/gists{/gist_id}", "starred_url": "https://api.github.com/users/izapolsk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/izapolsk/subscriptions", "organizations_url": "https://api.github.com/users/izapolsk/orgs", "repos_url": "https://api.github.com/users/izapolsk/repos", "events_url": "https://api.github.com/users/izapolsk/events{/privacy}", "received_events_url": "https://api.github.com/users/izapolsk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sgugger, could you please share your opinion on above feature request ? ^^", "At this stage we're not planning to add this no. You can always subclass the Trainer to put your own evaluation loop in it however.", "> At this stage we're not planning to add this no. You can always subclass the Trainer to put your own evaluation loop in it however.\r\n\r\nthis is exactly what I did. I just thought that it could be useful for others because I saw some other reports in GH about OOM in eval.", "Note that you can use `eval_accumulation_steps` to avoid taking too much space on the GPU. There is no need to use torchmetrics for that." ]
1,661
1,678
1,665
CONTRIBUTOR
null
### Feature request In the eval loop, HF transformers collects and keeps in memory predictions and targets up to the end of eval and then computes metrics passed via callback. When transformers is used to fine-tune a model with a big dataset and many labels (in my case up to 220k labels), predictions and targets take a huge amount of RAM. It's especially noticeable with DDP and many GPUs per node. It would be great to add [torchmetrics](https://torchmetrics.readthedocs.io/en/stable/index.html) support where metrics can be computed for every eval step and then averaged at the end. With torchmetrics it's also possible to compute metrics on GPU in a distributed way, which makes eval significantly faster. ### Motivation With the current metrics callback I can't run eval with > 120k labels. I tried to sparsify and/or cut data. However, with a growing number of labels it became inefficient anyway. Eventually I had to update the trainer to compute torchmetrics for every eval step on GPU, which fully solved this issue in my case. So, it would be great to have this feature out-of-the-box. ### Your contribution I can create a PR with appropriate changes if you also view this feature as useful.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18799/timeline
completed
null
null
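Editor's note on the record above: the request amounts to replacing "accumulate all predictions, then compute" with running metric state. A minimal sketch of that pattern with torchmetrics (API as in recent torchmetrics versions); the `eval_batches` iterable and class count are placeholders, not part of the original report:

```python
import torch
import torchmetrics

num_classes = 220_000  # placeholder matching the scale described above
metric = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes).to("cuda")

with torch.no_grad():
    for logits, targets in eval_batches:  # hypothetical eval dataloader
        metric.update(logits, targets)    # updates running counts only

print(metric.compute())  # aggregated once at the end; no stored predictions
metric.reset()
```

Because the metric keeps only running counts on the GPU (and syncs across DDP ranks at `compute()`), memory stays flat regardless of dataset size — the maintainer's `eval_accumulation_steps` suggestion is the built-in alternative.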
https://api.github.com/repos/huggingface/transformers/issues/18798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18798/comments
https://api.github.com/repos/huggingface/transformers/issues/18798/events
https://github.com/huggingface/transformers/issues/18798
1,353,803,473
I_kwDOCUB6oc5QsWbR
18,798
Support mixed precision FP16 in TF Segformer | Nan loss
{ "login": "joihn", "id": 11663917, "node_id": "MDQ6VXNlcjExNjYzOTE3", "avatar_url": "https://avatars.githubusercontent.com/u/11663917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joihn", "html_url": "https://github.com/joihn", "followers_url": "https://api.github.com/users/joihn/followers", "following_url": "https://api.github.com/users/joihn/following{/other_user}", "gists_url": "https://api.github.com/users/joihn/gists{/gist_id}", "starred_url": "https://api.github.com/users/joihn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joihn/subscriptions", "organizations_url": "https://api.github.com/users/joihn/orgs", "repos_url": "https://api.github.com/users/joihn/repos", "events_url": "https://api.github.com/users/joihn/events{/privacy}", "received_events_url": "https://api.github.com/users/joihn/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Thanks! Are you casting the softmaxed outputs or the logits to FP32 before loss computation as recommended [here](https://www.tensorflow.org/guide/mixed_precision#building_the_model)? \r\n\r\nNit: You didn't provide any training code. I guess it's always better to provide a self-contained Colab Notebook or a notebook that anyone can spin up to debug the issue further. I also acknowledge that Colab doesn't always provide a GPU that has support for Tensor cores but I hope you got the point. ", "thanks for the quick answer, \r\nEven though my commit above doesn't include it, yes I tried casting the logits output to FP32, ans sadly it didn't solve it :/ \r\n\r\nGood point about the small code example for debugging, I will do one today :) ", "Then we need to inspect the layer-wise activations and the weight distributions that are probably impacted by the casting. Maybe try using the TensorBoard callback and inspect if that's the case. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,665
1,665
CONTRIBUTOR
null
### Description I'm trying to bring mixed precision training (FP16) support to [TF Segformer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/segformer/modeling_tf_segformer.py). I had to do a very small modification to the initial code for supporting FP16 ([my commit](https://github.com/huggingface/transformers/commit/41d2bd145111a743a577f15955bac58a74871b33)). Every time, the net converges fine for the first 10-15 epochs, and then the loss suddenly goes to NaN. Any idea? I'm using `policy = mixed_precision.Policy("mixed_float16")` ([TF doc here](https://www.tensorflow.org/guide/mixed_precision)) ### System Tensorflow version 2.8.2 Cuda 11.6 Nvidia titan X ### Who can help? @sayakpaul @NielsRogge ### Reproduction 1) Modify 1 line of code as in ([my commit](https://github.com/huggingface/transformers/commit/41d2bd145111a743a577f15955bac58a74871b33)). 2) Launch a training with mixed precision FP16 3) wait 10~15 epochs ### Expected behavior The net should train fine with FP16 mixed precision
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18798/timeline
completed
null
null
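Editor's note on the record above: the TF mixed-precision guide linked in the thread recommends keeping the model body in `mixed_float16` but forcing the final logits (and hence the loss) back to float32 to avoid FP16 overflow. A minimal sketch of that pattern; the toy layers are illustrative, not Segformer's actual head:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)  # runs in float16
x = tf.keras.layers.GlobalAveragePooling2D()(x)
logits = tf.keras.layers.Dense(10)(x)
# Cast back to float32 before the loss, per the mixed-precision guide
outputs = tf.keras.layers.Activation("linear", dtype="float32")(logits)
model = tf.keras.Model(inputs, outputs)
```

Under `model.fit`, Keras additionally wraps the optimizer in a loss-scale optimizer automatically; NaNs that appear after 10-15 epochs despite this usually point to an intermediate activation overflowing, which is what the TensorBoard inspection suggested in the thread is meant to find.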
https://api.github.com/repos/huggingface/transformers/issues/18797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18797/comments
https://api.github.com/repos/huggingface/transformers/issues/18797/events
https://github.com/huggingface/transformers/issues/18797
1,353,502,168
I_kwDOCUB6oc5QrM3Y
18,797
CLIPTextModel gives invalid output for zeroed attention mask
{ "login": "jonatanklosko", "id": 17034772, "node_id": "MDQ6VXNlcjE3MDM0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/17034772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatanklosko", "html_url": "https://github.com/jonatanklosko", "followers_url": "https://api.github.com/users/jonatanklosko/followers", "following_url": "https://api.github.com/users/jonatanklosko/following{/other_user}", "gists_url": "https://api.github.com/users/jonatanklosko/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonatanklosko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatanklosko/subscriptions", "organizations_url": "https://api.github.com/users/jonatanklosko/orgs", "repos_url": "https://api.github.com/users/jonatanklosko/repos", "events_url": "https://api.github.com/users/jonatanklosko/events{/privacy}", "received_events_url": "https://api.github.com/users/jonatanklosko/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I've also experienced this issue, and anecdotally this inconsistency seems to impact the quality of stable diffusion outputs from https://github.com/huggingface/diffusers. More specifically, when trying to port the stable diffusion pipeline to another framework (e.g. like Flax where the implementation is not present) using the provided stable diffusion weights, the images which rely on PT's CLIPTextModel are mostly incoherent.\r\n\r\nI am assuming Stable Diffusion was trained using the PT CLIPTextModel, and thus results rely on this inconsistent/invalid text embedding? ", "Thanks a lot for the issue @jonatanklosko !\r\n\r\nThis indeed seems a bit strange, I see two solutions here\r\n\r\n- We could combine causal mask and attention mask and see what happens\r\n- Or instead of using additive masks, we could replace the masked values with large negative numbers.\r\n\r\nhowever, is it really a bug ? as long as the values for masked positions are much lower than non-masked tokens, those tokens will still be ignored. Do you have an example where you see the masked positions are not ignored ?\r\n\r\n@seanmor5 good point! \r\n>I've also experienced this issue, and anecdotally this inconsistency seems to impact the quality of stable diffusion outputs from https://github.com/huggingface/diffusers\r\n\r\nI'm not sure if this impacts the quality of diffusers, for example, as discussed in this [issue](https://github.com/huggingface/diffusers/issues/233), we have verified that the results are 1:1 with the original repo.\r\n\r\n>I am assuming Stable Diffusion was trained using the PT CLIPTextModel, and thus results rely on this inconsistent/invalid text embedding?\r\n\r\nYes, it was trained using `CLIPTextModel`, but both this training and the actual pre-trained CLIP model never used attention mask. They always pad the sequence to max_len 77 and use causal mask. This is how we recommend to use the stable diffusion model. I know this is not ideal but that's how it was trained. cf https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/encoders/modules.py#L155\r\n\r\nAlso cc @patrickvonplaten , wdyt ?", ">e.g. like Flax where the implementation is not present\r\n\r\nIt's coming soon https://github.com/patil-suraj/stable-diffusion-jax/", "Also quite interested in the use case of masking all text tokens - when would this make sense?", "@patil-suraj consider the following weights\r\n\r\n```python\r\nimport torch\r\nfrom torch.nn.functional import softmax\r\n\r\nsoftmax(torch.tensor([1.0, 0.0]))\r\n#=> tensor([0.7311, 0.2689])\r\n```\r\n\r\nif we add large negative values, then the actual weights are neglectable and softmax outputs equal values:\r\n\r\n```python\r\nsoftmax(torch.tensor([1.0 - 1e30, 0.0 - 1e30]))\r\n#=> tensor([0.5000, 0.5000])\r\n```\r\n\r\nhowever if we also add large negative values from causal mask this makes the values disproportional:\r\n\r\n```python\r\nsoftmax(torch.tensor([1.0 - 1e30, 0.0 - 1e30 - 1e30]))\r\n#=> tensor([1., 0.])\r\n```\r\n\r\nIn the flax implementation masks are combined and negative values are added just once (though apparently the output still differs, because the flax version uses `-1e4` as the negative value). But thinking about this, shouldn't all weights be 0 when masking all tokens? 
If so, modifying the input to softmax is not enough.\r\n\r\n@patrickvonplaten\r\n\r\n> Also quite interested in the use case of masking all text tokens - when would this make sense?\r\n\r\nThis came up with unconditional input to stable diffusers, I believe [this part](https://github.com/huggingface/diffusers/blob/16172c1c7ef6ec721bfe4d0787313519157749a1/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L92-L94). @seanmor5 please correct me if I'm wrong :)", "Interesting! Thanks for the pointer @jonatanklosko - I think though that even when doing:\r\n\r\nThe tokenizer should return tokens that should be attended to. E.g. taking the line here: https://github.com/huggingface/diffusers/blob/16172c1c7ef6ec721bfe4d0787313519157749a1/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L92-L94\r\n\r\nand doing:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\ntok = AutoTokenizer.from_pretrained(\"openai/clip-vit-large-patch14\")\r\n\r\nuncond_input = tok(\r\n [\"\"] * 2, padding=\"max_length\", max_length=77, return_tensors=\"pt\"\r\n)\r\nprint(uncond_input)\r\n```\r\nI get:\r\n\r\n```\r\n{'input_ids': tensor([[49406, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407],\r\n [49406, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407, 49407,\r\n 49407, 49407, 49407, 49407, 49407, 49407, 49407]]), 'attention_mask': tensor([[1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0],\r\n [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0]])}\r\n```\r\n\r\nwhich shows that at least two tokens are attended to (so > 0 positions of attention_mask are equal to 1) which should avoid the problem described above. What would be the use case where all values in `attention_mask` are 0?", "@patrickvonplaten Ahh! This was my mistake, when doing some initial testing with the stable diffusion weights in our framework I noticed after a step or 2 the latent results started to diverge from the PT stable diffusion implementation. I noticed some slight differences between the results of the text encoder from PT and our implementation (within some reasonable amount of precision) and ended up writing an invalid test case where all of the attention masks were 0. 
The source of our issue probably lies elsewhere then, sorry for the confusion!", "No worries! Better be safe than sorry! :-) ", "So I think we can close this :)\r\n\r\nThe additive mask before softmax is a trick that works under the assumption that the mask has at least a single 1. So it can be said that for a zeroed mask the output of most models is just not well defined, and it's fine given the use cases so far." ]
1,661
1,662
1,662
CONTRIBUTOR
null
### System Info - `transformers` version: 4.21.2 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.33 - Python version: 3.8.6 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.12.0+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu) - Jax version: 0.3.10 - JaxLib version: 0.3.10 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj ### Reproduction ```python from transformers import CLIPTextModel import torch model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32") inputs = { "input_ids": torch.tensor([[49406, 320, 1125, 539, 320, 1929, 49407]]), "attention_mask": torch.tensor([[0, 0, 0, 0, 0, 0, 0]]) } outputs = model(**inputs) ``` Given the zeroed attention mask, the attention weights should be all equal here: https://github.com/huggingface/transformers/blob/21f6f58721dd9154357576be6de54eefef1f1818/src/transformers/models/clip/modeling_clip.py#L246 However, causal and attention masks are added separately ([here](https://github.com/huggingface/transformers/blob/21f6f58721dd9154357576be6de54eefef1f1818/src/transformers/models/clip/modeling_clip.py#L228-L246)), so in this case, before going through softmax, certain values are twice as small as the other ones (to be more precise, some values are -min_float and others are -inf). Consequently, softmax outputs probabilities that match the causal mask. This is also the case for `TFCLIPTextModel`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18797/timeline
completed
null
null
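Editor's note on the record above: the core of the report is that adding two additive masks stacks two large negative offsets on doubly-masked positions. A small demonstration in float32, with a clamp-based combination shown as one possible fix (not necessarily the one `transformers` adopted):

```python
import torch

min_val = torch.finfo(torch.float32).min
scores = torch.tensor([1.0, 0.0])
causal = torch.tensor([0.0, min_val])    # causal mask blocks position 1
attn = torch.tensor([min_val, min_val])  # attention mask blocks everything

# Adding both masks leaves position 1 near 2 * min_val (overflowing to -inf)
# and position 0 at min_val, so softmax reproduces the causal mask:
print(torch.softmax(scores + causal + attn, dim=-1))  # -> tensor([1., 0.])

# Combining the masks once keeps every masked position at the same floor:
combined = torch.clamp(causal + attn, min=min_val)
print(torch.softmax(scores + combined, dim=-1))       # -> tensor([0.5, 0.5])
```

As the closing comment notes, the additive trick is only well defined when at least one position remains unmasked; with a fully zeroed mask even the combined version just yields uniform weights rather than a meaningful attention pattern.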
https://api.github.com/repos/huggingface/transformers/issues/18796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18796/comments
https://api.github.com/repos/huggingface/transformers/issues/18796/events
https://github.com/huggingface/transformers/pull/18796
1,353,479,052
PR_kwDOCUB6oc497XOf
18,796
Fix docstring for BartForSequenceClassification
{ "login": "skbollam", "id": 31428025, "node_id": "MDQ6VXNlcjMxNDI4MDI1", "avatar_url": "https://avatars.githubusercontent.com/u/31428025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skbollam", "html_url": "https://github.com/skbollam", "followers_url": "https://api.github.com/users/skbollam/followers", "following_url": "https://api.github.com/users/skbollam/following{/other_user}", "gists_url": "https://api.github.com/users/skbollam/gists{/gist_id}", "starred_url": "https://api.github.com/users/skbollam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skbollam/subscriptions", "organizations_url": "https://api.github.com/users/skbollam/orgs", "repos_url": "https://api.github.com/users/skbollam/repos", "events_url": "https://api.github.com/users/skbollam/events{/privacy}", "received_events_url": "https://api.github.com/users/skbollam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes docstring for BartForSequenceClassification, which uses the last time step of the last hidden state for classification. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18796", "html_url": "https://github.com/huggingface/transformers/pull/18796", "diff_url": "https://github.com/huggingface/transformers/pull/18796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18796.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18795/comments
https://api.github.com/repos/huggingface/transformers/issues/18795/events
https://github.com/huggingface/transformers/pull/18795
1,353,436,870
PR_kwDOCUB6oc497PgS
18,795
Add docstring for BartForCausalLM
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? Adds docstring for BartForCausalLM ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18795", "html_url": "https://github.com/huggingface/transformers/pull/18795", "diff_url": "https://github.com/huggingface/transformers/pull/18795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18795.patch", "merged_at": 1661854743000 }
https://api.github.com/repos/huggingface/transformers/issues/18794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18794/comments
https://api.github.com/repos/huggingface/transformers/issues/18794/events
https://github.com/huggingface/transformers/pull/18794
1,353,429,183
PR_kwDOCUB6oc497OFP
18,794
fix the description of token used for Bart classification
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@patrickvonplaten - does this look fine to you?", "@ArthurZucker @gante could you take a look here? ", "> @ArthurZucker @JoaoLages could you take a look here?\r\n\r\nMaybe you wanted to call João Gante and not me, but here are my 2 cents anyway 😄 : the description that @ekagra-ranjan wrote is correct but the previous description was not incorrect either. \r\nTaking the last EOS token embedding from the model's output is a type of pooling. For example, [in BERT we take the first token instead, that corresponds to the CLS token](https://github.com/huggingface/transformers/blob/7a8118947f3c6a802a9f63dc22c394961d38860f/src/transformers/models/bert/modeling_bert.py#L653) and we also have [the same description for `BertForSequenceClassification`](https://github.com/huggingface/transformers/blob/7a8118947f3c6a802a9f63dc22c394961d38860f/src/transformers/models/bert/modeling_bert.py#L1509). \r\nThe previous description is simpler, more general and it is not incorrect. Not against having more descriptive docstrings, but then it would make sense to review all the `(...)ForSequenceClassification` classes, not only BART.\r\n", "Sorry @JoaoLages :sweat_smile: ! You are right indeed", "@JoaoLages I see your point and I guess you are right! Thanks for sharing your thoughts." ]
1,661
1,664
1,664
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the description of token used for Bart classification. It uses last EOS token and not a special pooled output. https://github.com/huggingface/transformers/blob/7a8118947f3c6a802a9f63dc22c394961d38860f/src/transformers/models/bart/modeling_bart.py#L1520-L1527 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18794/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18794", "html_url": "https://github.com/huggingface/transformers/pull/18794", "diff_url": "https://github.com/huggingface/transformers/pull/18794.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18794.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18793/comments
https://api.github.com/repos/huggingface/transformers/issues/18793/events
https://github.com/huggingface/transformers/issues/18793
1,353,419,341
I_kwDOCUB6oc5Qq4pN
18,793
[BUG] Getting different sentence embeddings when using model on CPU and GPU
{ "login": "kbkartik", "id": 79010023, "node_id": "MDQ6VXNlcjc5MDEwMDIz", "avatar_url": "https://avatars.githubusercontent.com/u/79010023?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kbkartik", "html_url": "https://github.com/kbkartik", "followers_url": "https://api.github.com/users/kbkartik/followers", "following_url": "https://api.github.com/users/kbkartik/following{/other_user}", "gists_url": "https://api.github.com/users/kbkartik/gists{/gist_id}", "starred_url": "https://api.github.com/users/kbkartik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kbkartik/subscriptions", "organizations_url": "https://api.github.com/users/kbkartik/orgs", "repos_url": "https://api.github.com/users/kbkartik/repos", "events_url": "https://api.github.com/users/kbkartik/events{/privacy}", "received_events_url": "https://api.github.com/users/kbkartik/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "To fix this issue, we need to set the seed values which I forgot." ]
1,661
1,664
1,664
NONE
null
### System Info - `transformers` version: 4.21.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @LysandreJik @sg ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import RobertaConfig, RobertaModel, RobertaTokenizer import torch import numpy as np device = ("cuda" if torch.cuda.is_available() else "cpu") # Initializing tokenizer tokenizer = RobertaTokenizer.from_pretrained("roberta-base") # Initializing a RoBERTa configuration configuration = RobertaConfig() configuration.vocab_size = tokenizer.vocab_size # Initializing a model from the configuration model = RobertaModel(configuration) model = model.to(device) model = model.eval() with torch.no_grad(): tokenized_task = tokenizer('random_sentence_check_v000', return_tensors="pt") outputs = model(**tokenized_task.to(device)) embedding = outputs.pooler_output.squeeze(0).cpu().numpy().tolist() ``` ### Expected behavior I should get the same sentence embedding on both CPU and GPU from a pretrained model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18793/timeline
completed
null
null
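Editor's note on the record above: the closing comment attributes the mismatch to missing seeds. The model is built from a fresh `RobertaConfig`, so its weights come from the RNG, and CPU and GPU runs draw differently unless seeded. A minimal sketch of seeding everything (transformers also ships a `set_seed` helper that wraps essentially these calls):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU RNG; model weights are created here
    torch.cuda.manual_seed_all(seed)  # all GPU RNGs

seed_everything(42)
# RobertaModel(configuration) now starts from reproducible random weights
```

Even with identical weights, tiny numerical differences in the forward pass can remain between CPU and GPU backends, so exact bitwise equality across devices is not guaranteed — only reproducibility per device.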
https://api.github.com/repos/huggingface/transformers/issues/18792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18792/comments
https://api.github.com/repos/huggingface/transformers/issues/18792/events
https://github.com/huggingface/transformers/issues/18792
1,353,408,293
I_kwDOCUB6oc5Qq18l
18,792
UnimplementedError: The Conv2D op currently does not support grouped convolutions on the CPU.
{ "login": "innat", "id": 17668390, "node_id": "MDQ6VXNlcjE3NjY4Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/innat", "html_url": "https://github.com/innat", "followers_url": "https://api.github.com/users/innat/followers", "following_url": "https://api.github.com/users/innat/following{/other_user}", "gists_url": "https://api.github.com/users/innat/gists{/gist_id}", "starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/innat/subscriptions", "organizations_url": "https://api.github.com/users/innat/orgs", "repos_url": "https://api.github.com/users/innat/repos", "events_url": "https://api.github.com/users/innat/events{/privacy}", "received_events_url": "https://api.github.com/users/innat/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @gante ", "Hey @innat 👋 That exception doesn't come from `transformers`, but from TensorFlow itself. As you mentioned, using a newer TF version does not result in an error. \r\n\r\nI'm afraid we won't be able to help you here :)", "@gante \r\nSorry I think I didn't explain well. Here is short and exact summary. \r\n\r\n> I ran transformer on Colab TPU and Kaggle TPU with TensorFlow 2.4.1. The script that I shared above works in Colab TPU but throws error in Kaggle TPU. \r\n\r\nPlease let me know if you had trouble to run the script.", "@gante Here are the prepared notebook. It would be easy to test.\r\n\r\n- [kaggle-tpu](https://www.kaggle.com/code/ipythonx/transformer-issue-github-18792/)\r\n- [colab-tpu](https://drive.google.com/file/d/1qAo4PsZ8DekZqFzF5Oq2E3nkJ3aJWFuM/view?usp=sharing)", "Grouped convolution on CPU were added in Tensorflow 2.5, so 2.4 won't work. That's why colab with Tensorflow 2.9 isn't affected. Have you tried updating Tensorflow on kaggle with `!pip install -U tensorflow`?", "I set accelerator **TPU** to both colab and kaggle environments. I'm using `TF 2.4.1` on both environment. With this set up, convn-next model successfully built on colab tpu (with `tf 2.4.1`) but didn't work on kaggle tpu (with `tf 2.4.1`). \r\n\r\nPlease note, I'm **NOT** using `TF 2.9.1`, either in colab or kaggle environments. Have you run the the code that I shared above. I think it's straightforward to get on the same page.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nThe last comemnt came from the original OP. This bot message doesn't make sense. Instead of such alert, it should tag `stat:awaiting:transformers`.", "Hi @innat -- as I've mentioned above this is an issue for the Kaggle and/or the TensorFlow team, there is nothing the `transformers` team can do. \r\n\r\nWe don't have the power to go back in time and add code to a repository that isn't ours, nor to update Kaggle's TPU runtimes." ]
1,661
1,664
1,664
NONE
null
# Info ``` Colab TPU v2 Kaggle TPU v3 TensorFlow: 2.4.1 Transformer: 4.22.0.dev0 ``` # Who can help? @Rocketknight1 @NielsRogge @sgugger @amyeroberts # Information - [ ] The official example scripts - [X] My own modified scripts # Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) # Reproduction Please, get the file from [HERE](https://drive.google.com/file/d/1qAo4PsZ8DekZqFzF5Oq2E3nkJ3aJWFuM/view?usp=sharing). A notebook script, just plug-n-play. **What to do** 1. Run the script in Colab with TPU. 2. Run the script in Kaggle with TPU. You may not need to change anything, just throw the file onto these platforms and run all. # Expected behavior **What was I doing** With the given script above, I was trying to run a vision transformer model on Kaggle TPU (with TF 2.4.1 by default). And I got ```python 2 prime_input = tf.keras.Input(shape=(*IMAGE_SIZE, 3)) 3 mode_inputs = tf.keras.layers.Permute(dims=(3, 1, 2))(prime_input) ----> 4 backbone = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224") 5 backbone.trainable = False .... 171 def call(self, hidden_states, training=False): 172 input = hidden_states --> 173 x = self.dwconv(hidden_states) 174 x = self.layernorm(x) 175 x = self.pwconv1(x) ``` > UnimplementedError: The Conv2D op currently does not support grouped convolutions on the CPU. A grouped convolution was attempted to be run because the input depth of 96 does not match the filter input depth of 1 A known tf issue, discussed also [here](https://github.com/tensorflow/tensorflow/issues/29005). But this issue didn't appear when I ran the same script on the Colab TPU (with `tf 2.4.1`) system. The model built successfully. As I am currently using transformers on the Kaggle platform, I need to make it work. The given script above is just about model construction code. Any pointer on what's going on here? Please note again, Kaggle TPU v3 and Colab TPU v2. Not sure if it has something to do with this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18792/timeline
completed
null
null
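Editor's note on the record above: per the maintainer comment, CPU support for grouped `Conv2D` landed in TF 2.5, which is presumably why ConvNeXt's depthwise-style grouped convolution trips TF 2.4 when the op falls back to the host CPU. A minimal reproduction of the failing op, with shapes chosen to match the 96-channel error message:

```python
import tensorflow as tf

x = tf.random.normal([1, 56, 56, 96])
# groups=96 makes this a depthwise-style grouped convolution
conv = tf.keras.layers.Conv2D(filters=96, kernel_size=7, padding="same", groups=96)

y = conv(x)  # TF >= 2.5: works on CPU; TF 2.4 on CPU: UnimplementedError as above
```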
https://api.github.com/repos/huggingface/transformers/issues/18791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18791/comments
https://api.github.com/repos/huggingface/transformers/issues/18791/events
https://github.com/huggingface/transformers/pull/18791
1,353,381,907
PR_kwDOCUB6oc497Frt
18,791
Fix decode_input_ids to bare T5Model and improve doc
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the fixes! ", "@patrickvonplaten Thanks for the review! Applied your suggestions. " ]
1,661
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? * Fix 1: use the tokenizer to obtain the labels as tensors. `docs/source/en/model_doc/t5.mdx` * Fix 2: `src/transformers/models/t5/` * Present case: T5 prepends the decoder_input_ids with the pad token. This preprocessing is handled internally by `T5ForConditionalGeneration` by shifting the labels to the right. * Issue: This preprocessing needs to be done manually when using the bare T5Model. This is missing from the example which uses the bare T5Model. * Proposed Fix: Added a preprocessing step in the example so that the input matches what T5 expects at its decoder. The PR reuses the `_shift_right()` method, which is internal to T5. Please let me know if we can rename `_shift_right()` to `shift_right()` or if there is a better way to handle this. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @sgugger @patrickvonplaten @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18791", "html_url": "https://github.com/huggingface/transformers/pull/18791", "diff_url": "https://github.com/huggingface/transformers/pull/18791.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18791.patch", "merged_at": 1662466346000 }
https://api.github.com/repos/huggingface/transformers/issues/18790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18790/comments
https://api.github.com/repos/huggingface/transformers/issues/18790/events
https://github.com/huggingface/transformers/issues/18790
1,353,354,522
I_kwDOCUB6oc5Qqo0a
18,790
circular import issue when importing with `transformers` and `happytransformer`
{ "login": "maifeeulasad", "id": 29339330, "node_id": "MDQ6VXNlcjI5MzM5MzMw", "avatar_url": "https://avatars.githubusercontent.com/u/29339330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maifeeulasad", "html_url": "https://github.com/maifeeulasad", "followers_url": "https://api.github.com/users/maifeeulasad/followers", "following_url": "https://api.github.com/users/maifeeulasad/following{/other_user}", "gists_url": "https://api.github.com/users/maifeeulasad/gists{/gist_id}", "starred_url": "https://api.github.com/users/maifeeulasad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maifeeulasad/subscriptions", "organizations_url": "https://api.github.com/users/maifeeulasad/orgs", "repos_url": "https://api.github.com/users/maifeeulasad/repos", "events_url": "https://api.github.com/users/maifeeulasad/events{/privacy}", "received_events_url": "https://api.github.com/users/maifeeulasad/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Got resolved after restarting the environment. Weird.", "Hi, I had a similar issue by simply running \"from transformers import GPT2Tokenizer\"\r\nThis is the error I got:\r\n`ImportError: cannot import name 'GPT2Tokenizer' from partially initialized module 'transformers' (most likely due to a circular import)`", "same for me but when I do the same import code in the shell (meaning python command then 1 by 1) it works... no idea how or why or wtf is happening", "after alot of digging around seems like my issue is caused by the file being named tokenize.py if I rename it to tokenize_data everything works..." ]
1,661
1,692
1,661
NONE
null
### System Info Libraries: ``` transformers 4.21.2 happytransformer 2.4.1 huggingface-hub 0.9.1 torch 1.12.1 tensorflow 2.9.1 ``` Env: ``` python 3.10 Windows 10.0.19044.1889 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Here is the entry file(`code.py`): ``` from ping import onichan from pong import araara if __name__ == '__main__': pass ``` A module called `ping.py`: ``` from happytransformer import HappyTextToText, TTSettings def onichan(): print('onichan') ``` Another module called `pong.py`: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM def araara(): print('araara') ``` And I have a file named `__init__.py` to initialize the module. Directory structure: ``` │---code.py │---ping.py │---pong.py │---__init__.py ``` ### Expected behavior Not getting this error, I would say. Here is the complete trace: ``` C:\Users\ShibaInu\Desktop\err>python code.py 2022-08-28 18:51:13.374326: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2022-08-28 18:51:13.374507: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "C:\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1002, in _get_module return importlib.import_module("." + module_name, self.__name__) File "C:\Python310\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "C:\Python310\lib\site-packages\transformers\pipelines\__init__.py", line 37, in <module> from .audio_classification import AudioClassificationPipeline File "C:\Python310\lib\site-packages\transformers\pipelines\audio_classification.py", line 20, in <module> from .base import PIPELINE_INIT_ARGS, Pipeline File "C:\Python310\lib\site-packages\transformers\pipelines\base.py", line 34, in <module> from ..modelcard import ModelCard File "C:\Python310\lib\site-packages\transformers\modelcard.py", line 44, in <module> from .training_args import ParallelMode File "C:\Python310\lib\site-packages\transformers\training_args.py", line 26, in <module> from .trainer_utils import ( File "C:\Python310\lib\site-packages\transformers\trainer_utils.py", line 47, in <module> import tensorflow as tf File "C:\Python310\lib\site-packages\tensorflow\__init__.py", line 37, in <module> from tensorflow.python.tools import module_util as _module_util File "C:\Python310\lib\site-packages\tensorflow\python\__init__.py", line 42, in <module> from tensorflow.python import data File "C:\Python310\lib\site-packages\tensorflow\python\data\__init__.py", line 21, in <module> from tensorflow.python.data import experimental File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\__init__.py", line 95, in <module> from tensorflow.python.data.experimental import service File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py", line 387, in <module> from tensorflow.python.data.experimental.ops.data_service_ops import distribute File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py", line 22, in <module> from tensorflow.python.data.experimental.ops import compression_ops File "C:\Python310\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py", line 16, in <module> from tensorflow.python.data.util import structure File "C:\Python310\lib\site-packages\tensorflow\python\data\util\structure.py", line 22, in <module> from tensorflow.python.data.util import nest File "C:\Python310\lib\site-packages\tensorflow\python\data\util\nest.py", line 36, in <module> from tensorflow.python.framework import sparse_tensor as _sparse_tensor File "C:\Python310\lib\site-packages\tensorflow\python\framework\sparse_tensor.py", line 24, in <module> from tensorflow.python.framework import constant_op File "C:\Python310\lib\site-packages\tensorflow\python\framework\constant_op.py", line 25, in <module> from tensorflow.python.eager import execute File "C:\Python310\lib\site-packages\tensorflow\python\eager\execute.py", line 24, in <module> from tensorflow.python.framework import ops File "C:\Python310\lib\site-packages\tensorflow\python\framework\ops.py", line 23, in <module> from absl import app File "C:\Python310\lib\site-packages\absl\app.py", line 31, in <module> import pdb File "C:\Python310\lib\pdb.py", line 77, in <module> import code File "C:\Users\ShibaInu\Desktop\err\code.py", line 1, in <module> from ping import onichan ImportError: cannot import name 'onichan' from partially initialized module 'ping' (most likely due to a circular import) (C:\Users\ShibaInu\Desktop\err\ping.py) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\ShibaInu\Desktop\err\code.py", line 1, in <module> from ping import onichan File "C:\Users\ShibaInu\Desktop\err\ping.py", line 1, in <module> from happytransformer import HappyTextToText, TTSettings File "C:\Python310\lib\site-packages\happytransformer\__init__.py", line 1, in <module> from happytransformer.happy_question_answering import HappyQuestionAnswering File "C:\Python310\lib\site-packages\happytransformer\happy_question_answering.py", line 7, in <module> from transformers import QuestionAnsweringPipeline, AutoModelForQuestionAnswering File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist File "C:\Python310\lib\site-packages\transformers\utils\import_utils.py", line 992, in __getattr__ module = self._get_module(self._class_to_module[name]) File "C:\Python310\lib\site-packages\transformers\utils\import_utils.py", line 1004, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'onichan' from partially initialized module 'ping' (most likely due to a circular import) (C:\Users\ShibaInu\Desktop\err\ping.py) ```
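The traceback makes the root cause visible: deep inside TensorFlow's import chain, `absl` imports `pdb`, `pdb` does `import code`, and Python resolves that to the user's own `code.py` instead of the standard-library `code` module, which re-enters the half-initialized script. A quick way to check for this kind of stdlib shadowing (the module names below are just the ones mentioned in this thread):

```python
import importlib.util

# If an origin points into your project directory rather than the Python
# installation, a local file (e.g. code.py or tokenize.py) is shadowing a
# standard-library module, and renaming it resolves the "circular import".
for name in ("code", "tokenize"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "not found")
```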
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18790/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18789/comments
https://api.github.com/repos/huggingface/transformers/issues/18789/events
https://github.com/huggingface/transformers/pull/18789
1,353,251,894
PR_kwDOCUB6oc496syr
18,789
Fix the error that causes a shape mismatch to occur
{ "login": "novioleo", "id": 10055562, "node_id": "MDQ6VXNlcjEwMDU1NTYy", "avatar_url": "https://avatars.githubusercontent.com/u/10055562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/novioleo", "html_url": "https://github.com/novioleo", "followers_url": "https://api.github.com/users/novioleo/followers", "following_url": "https://api.github.com/users/novioleo/following{/other_user}", "gists_url": "https://api.github.com/users/novioleo/gists{/gist_id}", "starred_url": "https://api.github.com/users/novioleo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/novioleo/subscriptions", "organizations_url": "https://api.github.com/users/novioleo/orgs", "repos_url": "https://api.github.com/users/novioleo/repos", "events_url": "https://api.github.com/users/novioleo/events{/privacy}", "received_events_url": "https://api.github.com/users/novioleo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18789). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,661
1,666
1,666
NONE
null
# What does this PR do? When tracing the model with PyTorch, the feature map sizes become inconsistent, so the export fails. The reason is that `+=` is used in the size calculation code, and the object being added to happens to be a shallow copy, which causes `input_shape` to change as a side effect. Declaring a new variable instead of updating in place solves the problem. # Test script ```python3 from einops import repeat from transformers import BertTokenizerFast,LayoutXLMTokenizerFast,LayoutLMv2ForSequenceClassification import torch ## dummy inputs dummy_input_ids = torch.LongTensor(torch.randint(low=0, high=1000, size=(2,256)))#.cuda() box = [[48, 84, 73, 128]] * 256 dummy_bboxes = repeat(torch.LongTensor(box).unsqueeze(0), '1 b s-> 2 b s') dummy_attention_mask = torch.LongTensor(torch.randint( low=0, high=1024, size=(2, 256) ))#.cuda() dummy_imgs = torch.randn(2, 3, 448, 448)#.cuda() dummy_token_ids = torch.zeros_like(dummy_attention_mask) dummy_inputs = [ dummy_input_ids, dummy_bboxes, dummy_imgs, dummy_attention_mask, dummy_token_ids ] with torch.no_grad(): model = LayoutLMv2ForSequenceClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", torchscript=True,num_labels=30522) model.eval() traced_model = torch.jit.trace(func=model, strict=False, example_inputs=dummy_inputs) torch.jit.save(traced_model,'temp.pt') model = torch.jit.load('temp.pt') model.eval() with torch.no_grad(): result = model(*dummy_inputs) print(result) ``` **If the script is run before the code is modified, the feature map size will be inconsistent.** # Who can review? @LysandreJik
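A small, self-contained illustration of the aliasing problem described above (the variable names are made up for the example); `+=` on a tensor mutates it in place, so any other name bound to the same tensor changes too:

```python
import torch

input_shape = torch.tensor([2, 256])
visual_shape = input_shape        # "shallow copy": both names share one tensor

visual_shape += 1                 # in-place add mutates input_shape as well
print(input_shape)                # tensor([  3, 257]) -- silently corrupted

# Safe variant: bind a new variable instead of updating in place
input_shape = torch.tensor([2, 256])
visual_shape = input_shape + 1    # allocates a fresh tensor
print(input_shape)                # tensor([  2, 256]) -- unchanged
```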
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18789", "html_url": "https://github.com/huggingface/transformers/pull/18789", "diff_url": "https://github.com/huggingface/transformers/pull/18789.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18789.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18788/comments
https://api.github.com/repos/huggingface/transformers/issues/18788/events
https://github.com/huggingface/transformers/pull/18788
1,353,112,208
PR_kwDOCUB6oc496RHb
18,788
Improve Text Generation doc
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you @sgugger for the review. I have applied your suggestions." ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> * Add the relevant args explicitly in the beam search decoding example in generation utils * GPT2 has an EOS token but no PAD token, so the example sets the pad token explicitly ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @sgugger
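For context, a hedged sketch of the pattern the doc change points at: since GPT2 ships without a PAD token, generation examples typically pass the EOS token id as `pad_token_id`:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("It might be possible to", return_tensors="pt")
# GPT2 has no PAD token, so reuse the EOS id for padding during beam search
outputs = model.generate(
    **inputs, num_beams=5, max_length=20, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```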
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18788/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18788", "html_url": "https://github.com/huggingface/transformers/pull/18788", "diff_url": "https://github.com/huggingface/transformers/pull/18788.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18788.patch", "merged_at": 1661970630000 }
https://api.github.com/repos/huggingface/transformers/issues/18787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18787/comments
https://api.github.com/repos/huggingface/transformers/issues/18787/events
https://github.com/huggingface/transformers/pull/18787
1,353,110,158
PR_kwDOCUB6oc496Qu4
18,787
Improve GPT2 doc
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the typos and dimensions of args in doc of GPT2. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18787/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18787", "html_url": "https://github.com/huggingface/transformers/pull/18787", "diff_url": "https://github.com/huggingface/transformers/pull/18787.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18787.patch", "merged_at": 1661966799000 }
https://api.github.com/repos/huggingface/transformers/issues/18786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18786/comments
https://api.github.com/repos/huggingface/transformers/issues/18786/events
https://github.com/huggingface/transformers/pull/18786
1,353,097,777
PR_kwDOCUB6oc496ObL
18,786
Reflect max_new_tokens in `Seq2SeqTrainer`
{ "login": "kumapo", "id": 70637, "node_id": "MDQ6VXNlcjcwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kumapo", "html_url": "https://github.com/kumapo", "followers_url": "https://api.github.com/users/kumapo/followers", "following_url": "https://api.github.com/users/kumapo/following{/other_user}", "gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kumapo/subscriptions", "organizations_url": "https://api.github.com/users/kumapo/orgs", "repos_url": "https://api.github.com/users/kumapo/repos", "events_url": "https://api.github.com/users/kumapo/events{/privacy}", "received_events_url": "https://api.github.com/users/kumapo/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh, could you take a look at this when you have some time please? Thanks a lot!", "Hi, @kumapo \r\n\r\nI believe it also requires a change in\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py#L129\r\n\r\nright?", "@ydshieh, yes. at same time I believe `Seq2SeqTrainer.evaluate()` needs the same change.\r\n", "@sgugger, thank you for your feedback. I've updated the PR.", "It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?", "@LysandreJik, I've done all steps to refresh circleci permission.\r\nbut it seems that nothing happens with tests. let me know if I missed something to be known.", "Can you try pushing an empty commit on your branch to re-trigger the tests?\r\n```\r\ngit commit -m \"Trigger CI\" --allow-empty\r\n```", "To pass the test, you can run \r\n\r\n```bash\r\nmake style\r\n```\r\nand commit the change." ]
1,661
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? In most cases, VisionEncoderDecoderModel's `max_length` is set implicitly, which leads to a problem when the model generates predictions given `max_new_tokens`. This PR makes `max_new_tokens` be handled as expected in `Seq2SeqTrainer.prediction_step()` in that case. Fixes #18785 P.S. I can reproduce the issue with `huggingface/transformers`, but with this PR and the same reproduction code, no exception is raised. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
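In spirit, the fix amounts to only falling back to the config's `max_length` when the caller supplied neither length limit, so `generate()` never receives both. A hypothetical helper mirroring that logic (not the exact patch):

```python
def prepare_gen_kwargs(gen_kwargs: dict, config_max_length: int) -> dict:
    """Inject the config's max_length only when no length limit was given."""
    if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
        gen_kwargs["max_length"] = config_max_length
    return gen_kwargs

print(prepare_gen_kwargs({"max_new_tokens": 16}, config_max_length=20))
# {'max_new_tokens': 16} -- max_length is not injected, so generate() sees one limit
```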
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18786/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18786", "html_url": "https://github.com/huggingface/transformers/pull/18786", "diff_url": "https://github.com/huggingface/transformers/pull/18786.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18786.patch", "merged_at": 1662037958000 }
https://api.github.com/repos/huggingface/transformers/issues/18785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18785/comments
https://api.github.com/repos/huggingface/transformers/issues/18785/events
https://github.com/huggingface/transformers/issues/18785
1,353,093,401
I_kwDOCUB6oc5QppEZ
18,785
ValueError raised when max_new_tokens is given to `Seq2SeqTrainer.predict()`
{ "login": "kumapo", "id": 70637, "node_id": "MDQ6VXNlcjcwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/70637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kumapo", "html_url": "https://github.com/kumapo", "followers_url": "https://api.github.com/users/kumapo/followers", "following_url": "https://api.github.com/users/kumapo/following{/other_user}", "gists_url": "https://api.github.com/users/kumapo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kumapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kumapo/subscriptions", "organizations_url": "https://api.github.com/users/kumapo/orgs", "repos_url": "https://api.github.com/users/kumapo/repos", "events_url": "https://api.github.com/users/kumapo/events{/privacy}", "received_events_url": "https://api.github.com/users/kumapo/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "thank you all for kind supports!" ]
1,661
1,662
1,662
CONTRIBUTOR
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.6.4 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: `yes` - Using distributed or parallel set-up in script?: `no` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```Python3 model = transformers.VisionEncoderDecoderModel.from_encoder_decoder_pretrained( "google/vit-base-patch16-224-in21k", "bert-base-uncased" ) tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased") feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k") model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id eval_ds = datasets.load_dataset( "kumapo/stair_captions_dataset_script", "2014", data_dir="../input/coco-2014-val", split="validation", streaming=True ) # do some preprocessing eval_ds with map() .. training_args = transformers.Seq2SeqTrainingArguments( predict_with_generate=True, fp16=False, output_dir="output/", report_to="none", ) trainer = transformers.Seq2SeqTrainer( model=model, tokenizer=tokenizer, args=training_args, data_collator=transformers.default_data_collator ) _ = trainer.predict(eval_ds, max_new_tokens=16) ``` then, `ValueError: Both max_new_tokens and max_length have been set but they serve the same purpose` raised: ``` ValueError Traceback (most recent call last) /tmp/ipykernel_23/2318841552.py in <module> 61 data_collator=transformers.default_data_collator, 62 ) ---> 63 _ = trainer.predict(eval_ds, max_new_tokens=16) /opt/conda/lib/python3.7/site-packages/transformers/trainer_seq2seq.py in predict(self, test_dataset, ignore_keys, metric_key_prefix, **gen_kwargs) 135 self._gen_kwargs = gen_kwargs 136 --> 137 return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) 138 139 def prediction_step( /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in predict(self, test_dataset, ignore_keys, metric_key_prefix) 2844 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop 2845 output = eval_loop( -> 2846 test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix 2847 ) 2848 total_batch_size = self.args.eval_batch_size * self.args.world_size /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 2947 2948 # Prediction step -> 2949 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) 2950 inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None 2951 /opt/conda/lib/python3.7/site-packages/transformers/trainer_seq2seq.py in prediction_step(self, model, inputs, prediction_loss_only, ignore_keys) 201 generated_tokens = self.model.generate( 202 generation_inputs, --> 203 **gen_kwargs, 204 ) 205 # in case the batch is shorter than max length, the output should be padded /opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) 28 return cast(F, decorate_context) 29 /opt/conda/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs) 1237 elif max_length is not None and max_new_tokens is not None: 1238 raise ValueError( -> 1239 "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a" 1240 " limit to the generated output length. Remove one of those arguments. Please refer to the" 1241 " documentation for more information. " ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation) ``` ### Expected behavior nothing raised.
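Before the fix in #18786 landed, a simple workaround sketch (untested here) is to express the budget through `max_length`, which the trainer always forwards, instead of `max_new_tokens`:

```python
# Note the semantic difference: max_length bounds the whole output sequence,
# while max_new_tokens bounds only the freshly generated part.
predictions = trainer.predict(eval_ds, max_length=16)
```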
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18785/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18784/comments
https://api.github.com/repos/huggingface/transformers/issues/18784/events
https://github.com/huggingface/transformers/pull/18784
1,353,042,423
PR_kwDOCUB6oc496EWV
18,784
Improve GPT2 doc
{ "login": "ekagra-ranjan", "id": 3116519, "node_id": "MDQ6VXNlcjMxMTY1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekagra-ranjan", "html_url": "https://github.com/ekagra-ranjan", "followers_url": "https://api.github.com/users/ekagra-ranjan/followers", "following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}", "gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions", "organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs", "repos_url": "https://api.github.com/users/ekagra-ranjan/repos", "events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}", "received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Superseeded by #18787 " ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the typos and dimensions of args in doc of GPT2. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18784/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18784", "html_url": "https://github.com/huggingface/transformers/pull/18784", "diff_url": "https://github.com/huggingface/transformers/pull/18784.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18784.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18783/comments
https://api.github.com/repos/huggingface/transformers/issues/18783/events
https://github.com/huggingface/transformers/pull/18783
1,352,967,441
PR_kwDOCUB6oc4952mY
18,783
Fix broken DeepSpeed documentation link
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
MEMBER
null
# What does this PR do? Fix a broken DeepSpeed documentation link. The current `<a>` anchor is not working.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18783/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18783", "html_url": "https://github.com/huggingface/transformers/pull/18783", "diff_url": "https://github.com/huggingface/transformers/pull/18783.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18783.patch", "merged_at": 1661740340000 }
https://api.github.com/repos/huggingface/transformers/issues/18782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18782/comments
https://api.github.com/repos/huggingface/transformers/issues/18782/events
https://github.com/huggingface/transformers/issues/18782
1,352,963,931
I_kwDOCUB6oc5QpJdb
18,782
Memory increment and release when loading model via PretrainedModel.from_pretrained
{ "login": "tobyych", "id": 44737479, "node_id": "MDQ6VXNlcjQ0NzM3NDc5", "avatar_url": "https://avatars.githubusercontent.com/u/44737479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tobyych", "html_url": "https://github.com/tobyych", "followers_url": "https://api.github.com/users/tobyych/followers", "following_url": "https://api.github.com/users/tobyych/following{/other_user}", "gists_url": "https://api.github.com/users/tobyych/gists{/gist_id}", "starred_url": "https://api.github.com/users/tobyych/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tobyych/subscriptions", "organizations_url": "https://api.github.com/users/tobyych/orgs", "repos_url": "https://api.github.com/users/tobyych/repos", "events_url": "https://api.github.com/users/tobyych/events{/privacy}", "received_events_url": "https://api.github.com/users/tobyych/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "@ydshieh has been looking into memory leaks as well recently and might have some insights for you!", "Hi @tobyych \r\n\r\nCould you also try to add `gc.collect()` after `del model` in both loading methods, and see what you get (memory usage) after `gc.collect()` is done. You have to import `gc`.", "Hi @ydshieh,\r\n\r\nTried to add `gc.collect()` after `del model`.\r\n\r\nFor single `hf_load`,\r\n```\r\nLine # Mem usage Increment Occurrences Line Contents\r\n=============================================================\r\n 14 245.5 MiB 245.5 MiB 1 @profile\r\n 15 def hf_load():\r\n 16 # bert-base-uncased: 421MB on disk\r\n 17 1103.9 MiB 858.4 MiB 1 model = AutoModelForMaskedLM.from_pretrained(\"bert-base-uncased\")\r\n 18 686.2 MiB -417.7 MiB 1 del model\r\n 19 266.5 MiB -419.7 MiB 1 gc.collect()\r\n```\r\n\r\nFor single `direct_load`,\r\n```\r\nLine # Mem usage Increment Occurrences Line Contents\r\n=============================================================\r\n 32 240.8 MiB 240.8 MiB 1 @profile\r\n 33 def direct_load():\r\n 34 661.5 MiB 420.7 MiB 1 model = torch.load('/home/toby/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f', map_location='cpu')\r\n 35 241.9 MiB -419.6 MiB 1 del model\r\n 36 241.9 MiB 0.0 MiB 1 gc.collect()\r\n```\r\n\r\nFor multiple `hf_load`,\r\n```\r\nLine # Mem usage Increment Occurrences Line Contents\r\n=============================================================\r\n 22 241.4 MiB 241.4 MiB 1 @profile\r\n 23 def multiple_hf_load():\r\n 24 263.1 MiB 21.6 MiB 1 hf_load()\r\n 25 921.1 MiB 658.1 MiB 1 hf_load()\r\n 26 263.3 MiB -657.9 MiB 1 hf_load()\r\n 27 263.3 MiB 0.0 MiB 1 hf_load()\r\n 28 263.3 MiB 0.0 MiB 1 hf_load()\r\n 29 263.3 MiB 0.0 MiB 1 hf_load()\r\n```\r\n\r\nFor multiple `direct_load`,\r\n```\r\nLine # Mem usage Increment Occurrences Line Contents\r\n=============================================================\r\n 38 240.8 MiB 240.8 MiB 1 @profile\r\n 39 def multiple_direct_load():\r\n 40 241.9 MiB 1.1 MiB 1 direct_load()\r\n 41 242.0 MiB 0.1 MiB 1 direct_load()\r\n 42 242.0 MiB 0.0 MiB 1 direct_load()\r\n 43 242.0 MiB 0.0 MiB 1 direct_load()\r\n 44 242.0 MiB 0.0 MiB 1 direct_load()\r\n 45 248.0 MiB 6.0 MiB 1 direct_load()\r\n```\r\n\r\nWith the explicit garbage collection, single `hf_load` could release the remaining memory that wasn't released previously. However, still not sure why it used double memory compared to `direct_load`.\r\n\r\nAs for multiple `hf_load`, the results looked much better after adding the explicit garbage collection as the memory used after several loads dropped to 263.3MB instead of the ~1000MB seen previously.\r\n\r\nAny clue what weren't collected by the GC previously?", "Hi, @tobyych \r\n\r\nGlad to know `gc.collect()` works.\r\n\r\nI don't know what weren't collected by the GC previously. In general, (I believe) it's not easy to know exactly what `gc` has done at at which timing. If you are able to investigate and share your finding, it would be great.\r\n\r\nI will try to check the `double memory` part.", "@ydshieh, I tried to inspect the objects that were collected by the GC between `del model` and `gc.collect()` using the code below, where I aggregated the sizes of Python objects by their modules. 
I filtered those from the `transformers` package and might be a starting point for your side to see which part of code might be related.\r\n\r\n```python\r\ndef hf_load():\r\n # bert-base-uncased: 421MB on disk\r\n model = AutoModelForMaskedLM.from_pretrained(\"bert-base-uncased\")\r\n del model\r\n d = defaultdict(int)\r\n for o in gc.get_objects():\r\n try:\r\n d[o.__module__] += sys.getsizeof(o)\r\n except:\r\n d['others'] += sys.getsizeof(o)\r\n for k, v in d.items():\r\n if type(k) is str and k.startswith(\"transformers\"):\r\n print(k, v)\r\n gc.collect()\r\n```\r\n\r\n```\r\ntransformers.modeling_utils 19120\r\ntransformers.utils.doc 1224\r\ntransformers.utils.versions 408\r\ntransformers.utils.logging 6664\r\ntransformers.utils.import_utils 21552\r\ntransformers.utils.generic 10363\r\ntransformers.utils.hub 7928\r\ntransformers.utils 136\r\ntransformers.dependency_versions_check 136\r\ntransformers.utils.dummy_speech_objects 2400\r\ntransformers.utils.dummy_tensorflow_text_objects 1200\r\ntransformers.utils.dummy_sentencepiece_and_speech_objects 1200\r\ntransformers.utils.dummy_timm_objects 4008\r\ntransformers.utils.dummy_scatter_objects 6136\r\ntransformers.utils.dummy_tf_objects 390408\r\ntransformers.utils.dummy_flax_objects 187200\r\ntransformers.dynamic_module_utils 1224\r\ntransformers.tokenization_utils_base 21309\r\ntransformers.tokenization_utils 6480\r\ntransformers.models.t5.tokenization_t5 3104\r\ntransformers.convert_slow_tokenizer 42888\r\ntransformers.tokenization_utils_fast 4328\r\ntransformers.models.t5.tokenization_t5_fast 1744\r\ntransformers.configuration_utils 6640\r\ntransformers.models.auto.configuration_auto 7184\r\ntransformers.models.auto.auto_factory 22544\r\ntransformers.models.auto.modeling_auto 27936\r\ntransformers.onnx.utils 1656\r\ntransformers.onnx.config 9288\r\ntransformers.models.bert.configuration_bert 2232\r\ntransformers.models.albert.configuration_albert 2400\r\ntransformers.models.bart.configuration_bart 3216\r\ntransformers.models.big_bird.configuration_big_bird 2400\r\ntransformers.models.roberta.configuration_roberta 2400\r\ntransformers.models.camembert.configuration_camembert 2264\r\ntransformers.models.convbert.configuration_convbert 2400\r\ntransformers.models.data2vec.configuration_data2vec_text 2400\r\ntransformers.models.deberta.configuration_deberta 2672\r\ntransformers.models.deberta_v2.configuration_deberta_v2 2672\r\ntransformers.models.distilbert.configuration_distilbert 2400\r\ntransformers.models.electra.configuration_electra 2400\r\ntransformers.models.xlm.configuration_xlm 2400\r\ntransformers.models.flaubert.configuration_flaubert 2400\r\ntransformers.models.fnet.configuration_fnet 1200\r\ntransformers.models.funnel.configuration_funnel 1744\r\ntransformers.models.ibert.configuration_ibert 2400\r\ntransformers.models.layoutlm.configuration_layoutlm 2672\r\ntransformers.models.longformer.configuration_longformer 1200\r\ntransformers.models.luke.configuration_luke 1200\r\ntransformers.models.mbart.configuration_mbart 3216\r\ntransformers.models.megatron_bert.configuration_megatron_bert 1200\r\ntransformers.models.mobilebert.configuration_mobilebert 2400\r\ntransformers.models.mpnet.configuration_mpnet 1200\r\ntransformers.models.mvp.configuration_mvp 1200\r\ntransformers.models.nezha.configuration_nezha 1200\r\ntransformers.models.nystromformer.configuration_nystromformer 1200\r\ntransformers.feature_extraction_utils 5256\r\ntransformers.models.perceiver.configuration_perceiver 
2672\r\ntransformers.models.qdqbert.configuration_qdqbert 1200\r\ntransformers.models.reformer.configuration_reformer 1200\r\ntransformers.models.rembert.configuration_rembert 1200\r\ntransformers.models.roformer.configuration_roformer 2400\r\ntransformers.models.squeezebert.configuration_squeezebert 2400\r\ntransformers.models.tapas.configuration_tapas 1200\r\ntransformers.models.wav2vec2.configuration_wav2vec2 1336\r\ntransformers.models.xlm_roberta.configuration_xlm_roberta 2264\r\ntransformers.models.xlm_roberta_xl.configuration_xlm_roberta_xl 2400\r\ntransformers.models.yoso.configuration_yoso 1200\r\ntransformers.activations 14496\r\ntransformers.modeling_outputs 45024\r\ntransformers.deepspeed 3896\r\ntransformers.generation_beam_constraints 9944\r\ntransformers.generation_beam_search 6568\r\ntransformers.generation_logits_process 25728\r\ntransformers.generation_stopping_criteria 6752\r\ntransformers.pytorch_utils 2288\r\ntransformers.generation_utils 17464\r\ntransformers.models.bert.modeling_bert 41152\r\n```", "@tobyych \r\n\r\nI opened a PR #18832 which could solve this issue. Notice that this is not a real memory issue however. `GC` usually makes its own decision for when to collect. But it's not bad if we can release some memory earlier. ", "Thanks @ydshieh!" ]
1,661
1,662
1,662
NONE
null
Issue --- I was trying to understand the memory usage when loading a Hugging Face model. I found that when loading the model via `AutoModelForMaskedLM.from_pretrained("bert-base-uncased")`, the resulting increment in memory was (1) larger than the cached BERT model on disk (859MB vs. 421MB) and (2) when deleting the variable, not all of the allocated memory got released. On the other hand, if I just do `torch.load("[path to cached model]")`, the memory allocation and release matched and the number was very close to that on disk. May I know why there was such a difference in behaviour? Code to reproduce the issue --- ```python import torch from transformers import AutoModelForMaskedLM from memory_profiler import profile @profile def hf_load(): # bert-base-uncased: 421MB on disk model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased") del model @profile def direct_load(): model = torch.load('/home/toby/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f', map_location='cpu') del model ``` Profile --- `hf_load`: ``` Line # Mem usage Increment Occurrences Line Contents ============================================================= 13 241.9 MiB 241.9 MiB 1 @profile 14 def hf_load(): 15 # bert-base-uncased: 421MB on disk 16 1100.4 MiB 858.5 MiB 1 model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased") 17 683.0 MiB -417.4 MiB 1 del model ``` `direct_load`: ``` Line # Mem usage Increment Occurrences Line Contents ============================================================= 19 240.6 MiB 240.6 MiB 1 @profile 20 def direct_load(): 21 661.4 MiB 420.8 MiB 1 model = torch.load('/home/toby/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f', map_location='cpu') 22 241.7 MiB -419.7 MiB 1 del model ``` To supplement, I also observed that when running `hf_load` above multiple times, the memory usage was rather unintuitive. ``` Line # Mem usage Increment Occurrences Line Contents ============================================================= 19 239.7 MiB 239.7 MiB 1 @profile 20 def multiple_hf_load(): 21 681.0 MiB 441.3 MiB 1 hf_load() 22 1008.8 MiB 327.9 MiB 1 hf_load() 23 919.4 MiB -89.4 MiB 1 hf_load() 24 993.6 MiB 74.2 MiB 1 hf_load() 25 995.9 MiB 2.2 MiB 1 hf_load() 26 992.9 MiB -3.0 MiB 1 hf_load() ``` ``` Line # Mem usage Increment Occurrences Line Contents ============================================================= 34 240.8 MiB 240.8 MiB 1 @profile 35 def multiple_direct_load(): 36 242.0 MiB 1.1 MiB 1 direct_load() 37 241.9 MiB -0.1 MiB 1 direct_load() 38 241.9 MiB 0.0 MiB 1 direct_load() 39 241.9 MiB 0.0 MiB 1 direct_load() 40 241.9 MiB 0.0 MiB 1 direct_load() 41 241.9 MiB 0.0 MiB 1 direct_load() ``` It increased the first two times but did not keep increasing from the third time onwards. I wonder how this could be explained. P.S. Also attached is the case for `direct_load` above; no increment was observed. Supplementary information --- OS: 5.10.60.1-microsoft-standard-WSL2, 4.15.0-1113-azure #126~16.04.1-Ubuntu Python: 3.8.12 PyTorch: 1.11.0 Transformers: 4.21.2 @LysandreJik
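For reproducibility, the profiles above come from `memory_profiler`'s `@profile` decorator; a minimal way to run such a measurement, assuming the snippet is saved as `profile_load.py` (a hypothetical file name):

```python
# profile_load.py -- run with:  python -m memory_profiler profile_load.py
import gc

from memory_profiler import profile
from transformers import AutoModelForMaskedLM


@profile
def hf_load():
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    del model
    gc.collect()  # release objects that reference cycles were still holding


if __name__ == "__main__":
    hf_load()
```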
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18782/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18781/comments
https://api.github.com/repos/huggingface/transformers/issues/18781/events
https://github.com/huggingface/transformers/pull/18781
1,352,809,331
PR_kwDOCUB6oc495UxC
18,781
Add inference section to task guides
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the suggestions! I reworked the `sequence_classification` task a bit, and if we like the changes, then I can apply them to the other tasks. Main changes are:\r\n\r\n- Encourage users to login with their HF accounts so they can push their finetuned models. Along these lines, I've added `push_to_hub` and the `PushToHubCallback`.\r\n- Added a section to include a `compute_metrics` function so users can evaluate their models during training.\r\n\r\nI think adding these two will help the task guides be more complete :)" ]
1,661
1,669
1,669
MEMBER
null
Currently, the task guides only show how to finetune a model, but they don't directly connect the dots to how you can use that model for inference. For completeness, this PR adds a section to the task guides showing how to use a model for inference after finetuning. This gives users a better overview of the model lifecycle. In doing so, when we update `task_summary.mdx`, we can focus less on the practical steps of how to use a model for inference (we can add links to the task guides) and instead discuss the more theoretical aspects of these tasks, as intended by the Conceptual Guide section.
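As an illustration of the kind of snippet such an inference section might contain, here is a minimal sketch using the `pipeline` API; the checkpoint name is a placeholder for a user's finetuned model, not an actual repository:

```python
from transformers import pipeline

# "my-username/my-finetuned-model" is a placeholder for a checkpoint
# pushed to the Hub at the end of one of the task guides.
classifier = pipeline("text-classification", model="my-username/my-finetuned-model")
print(classifier("This movie was surprisingly good!"))
```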
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18781/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18781", "html_url": "https://github.com/huggingface/transformers/pull/18781", "diff_url": "https://github.com/huggingface/transformers/pull/18781.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18781.patch", "merged_at": 1669053982000 }
https://api.github.com/repos/huggingface/transformers/issues/18780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18780/comments
https://api.github.com/repos/huggingface/transformers/issues/18780/events
https://github.com/huggingface/transformers/issues/18780
1,352,712,845
I_kwDOCUB6oc5QoMKN
18,780
TAPAS model usage issue
{ "login": "deep-mining-swang", "id": 17224712, "node_id": "MDQ6VXNlcjE3MjI0NzEy", "avatar_url": "https://avatars.githubusercontent.com/u/17224712?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deep-mining-swang", "html_url": "https://github.com/deep-mining-swang", "followers_url": "https://api.github.com/users/deep-mining-swang/followers", "following_url": "https://api.github.com/users/deep-mining-swang/following{/other_user}", "gists_url": "https://api.github.com/users/deep-mining-swang/gists{/gist_id}", "starred_url": "https://api.github.com/users/deep-mining-swang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deep-mining-swang/subscriptions", "organizations_url": "https://api.github.com/users/deep-mining-swang/orgs", "repos_url": "https://api.github.com/users/deep-mining-swang/repos", "events_url": "https://api.github.com/users/deep-mining-swang/events{/privacy}", "received_events_url": "https://api.github.com/users/deep-mining-swang/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This is likely an error due to a mismatch of versions between your Torch and TorchScatter installations", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing this as we've answered the question, feel free to re-open if you still have this issue" ]
1,661
1,664
1,664
NONE
null
### System Info
Hi, I have some issues when I am using TAPAS.

`from transformers import TapasConfig, TapasForQuestionAnswering` reports the following error:

RuntimeError: Failed to import transformers.models.tapas.modeling_tapas because of the following error (look up to see its traceback): module 'distutils' has no attribute 'version'

After I do `pip install setuptools==59.5.0`, it just returns the following error: Segmentation fault (core dumped)

Here is my env:
[transformers](https://pypi.python.org/pypi/transformers)==4.20.1
[evaluate](https://pypi.python.org/pypi/evaluate)==0.1.2
[bert-score](https://pypi.python.org/pypi/bert-score)==0.3.11
[datasets](https://pypi.python.org/pypi/datasets)==2.3.2
[accelerate](https://pypi.python.org/pypi/accelerate)
[deepspeed](https://pypi.python.org/pypi/deepspeed)==0.6.5
[wordninja](https://pypi.python.org/pypi/wordninja)==2.0.0
[sacrebleu](https://pypi.python.org/pypi/sacrebleu)==2.1.0
[fasttext](https://pypi.python.org/pypi/fasttext)==0.9.2
[nltk](https://pypi.python.org/pypi/nltk)==3.7
[scikit-learn](https://pypi.python.org/pypi/scikit-learn)==1.0.2

### Who can help?
@NielsRogge

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
from transformers import TapasConfig, TapasForQuestionAnswering

### Expected behavior
Segmentation fault (core dumped)
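A minimal sketch for checking the version mismatch diagnosed in the comments above; it assumes both `torch` and `torch-scatter` are installed, and that a mismatched `torch-scatter` build can crash on import:

```python
import torch

# torch-scatter ships binaries built per torch/CUDA combination; a
# segmentation fault is a common symptom of mixing incompatible builds.
print("torch:", torch.__version__)
print("CUDA:", torch.version.cuda)

import torch_scatter  # a mismatched build may crash on this import

print("torch-scatter:", torch_scatter.__version__)
```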
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18780/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18779/comments
https://api.github.com/repos/huggingface/transformers/issues/18779/events
https://github.com/huggingface/transformers/pull/18779
1,352,646,763
PR_kwDOCUB6oc494x1q
18,779
fix a typo in auto feature extraction for videomae
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661
1,661
1,661
CONTRIBUTOR
null
# What does this PR do?

Fixes https://github.com/huggingface/transformers/issues/18778

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
@NielsRogge @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18779/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18779", "html_url": "https://github.com/huggingface/transformers/pull/18779", "diff_url": "https://github.com/huggingface/transformers/pull/18779.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18779.patch", "merged_at": 1661765093000 }
https://api.github.com/repos/huggingface/transformers/issues/18778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18778/comments
https://api.github.com/repos/huggingface/transformers/issues/18778/events
https://github.com/huggingface/transformers/issues/18778
1,352,645,412
I_kwDOCUB6oc5Qn7sk
18,778
Incorrect auto feature extractor for videomae
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Tried to fix it with a PR: https://github.com/huggingface/transformers/pull/18779" ]
1,661
1,661
1,661
CONTRIBUTOR
null
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Windows-10-10.0.19043-SP0
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?
@NielsRogge @sgugger

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
1. Install dev dependencies:
```bash
cd transformers
pip install -e ".[dev]"
```
2. Init new model:
```bash
transformers-cli add-new-model-like
>>What is the model you would like to duplicate? videomae
```

### Expected behavior
It should ask me: `Will your new model use the same processing class as videomae (VideoMAEFeatureExtractor)?`

Instead, it asks: `Will your new model use the same processing class as videomae (ViTFeatureExtractor)?`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18778/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18777/comments
https://api.github.com/repos/huggingface/transformers/issues/18777/events
https://github.com/huggingface/transformers/pull/18777
1,352,544,129
PR_kwDOCUB6oc494byI
18,777
Cache results of is_torch_tpu_available()
{ "login": "comaniac", "id": 8262694, "node_id": "MDQ6VXNlcjgyNjI2OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8262694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/comaniac", "html_url": "https://github.com/comaniac", "followers_url": "https://api.github.com/users/comaniac/followers", "following_url": "https://api.github.com/users/comaniac/following{/other_user}", "gists_url": "https://api.github.com/users/comaniac/gists{/gist_id}", "starred_url": "https://api.github.com/users/comaniac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/comaniac/subscriptions", "organizations_url": "https://api.github.com/users/comaniac/orgs", "repos_url": "https://api.github.com/users/comaniac/repos", "events_url": "https://api.github.com/users/comaniac/events{/privacy}", "received_events_url": "https://api.github.com/users/comaniac/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Not sure the reason of CI failure. It seems not relevant to this PR.", "@LysandreJik thanks for the review and sure we could wait for @sgugger.\r\nMeanwhile, do I need to do anything to fix the CI failure?", "Thanks for bearing with us. The test failures are spurious and unrelated, so we can merge this." ]
1,661
1,662
1,662
CONTRIBUTOR
null
# What does this PR do?

`xm.xla_device()` (called by `is_torch_tpu_available()`) hangs when called multiple times while no XLA devices are available, and this results in the Trainer hanging. Since `torch_xla` is currently used as long as it is installed in the active Python environment, I encountered this issue even when I only wanted to run the Trainer with PyTorch on GPU. The detailed reason for the `torch_xla` behavior is still under investigation (see https://github.com/pytorch/xla/issues/3939). To work around this issue, this PR adds `lru_cache` to `is_torch_tpu_available()`, so that `xm.xla_device()` is guaranteed to be called only once when no XLA device is available.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@muellerzr @sgugger
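A minimal sketch of the caching idea described in this PR (not the exact patch): wrapping the availability check in `functools.lru_cache` guarantees the potentially hanging device probe runs at most once per process:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def is_torch_tpu_available():
    # The expensive (and potentially hanging) probe runs only on the first
    # call; every later call returns the cached boolean instantly.
    try:
        import torch_xla.core.xla_model as xm

        return xm.xla_device() is not None
    except (ImportError, RuntimeError):
        return False
```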
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18777/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18777", "html_url": "https://github.com/huggingface/transformers/pull/18777", "diff_url": "https://github.com/huggingface/transformers/pull/18777.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18777.patch", "merged_at": 1662047134000 }
https://api.github.com/repos/huggingface/transformers/issues/18776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18776/comments
https://api.github.com/repos/huggingface/transformers/issues/18776/events
https://github.com/huggingface/transformers/issues/18776
1,352,463,738
I_kwDOCUB6oc5QnPV6
18,776
TF: Can't create sharded XGLM model
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "cc @ArthurZucker ", "Hey! Little update on this : the problem comes from the previously introduced \"hack\" : \r\n```python \r\n return tf.Variable(emb, trainable=False, name=\"model.embed_positions.weights\")\r\n```\r\nThis appears [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/xglm/modeling_tf_xglm.py#L86). This hack can also be seen in [BART](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_tf_bart.py#L1036-L1038) . \r\n\r\nIn order to have as little breaking changes as possible, I think we can add the followiing : \r\n\r\n```python \r\nif \"model.\" in layer.name : # potentially all models that have the hack will have model. something\" \r\n param_dset = shard_file.create_dataset(\r\n \".\".join(layer.name.split(\".\")[1:]), layer.numpy().shape, dtype=layer.numpy().dtype\r\n )\r\n```\r\n\r\nI think we have to keep the \".\" separation for coherence. \r\nWill see if I can open a PR on that soon \r\n" ]
1,661
1,665
1,665
MEMBER
null
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
- Python version: 3.8.13
- Huggingface_hub version: 0.9.0
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (gpu)
- Jax version: 0.3.5
- JaxLib version: 0.3.5
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help?
@ArthurZucker

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
Running this CLI command
```
CUDA_VISIBLE_DEVICES="" TOKENIZERS_PARALLELISM=false NVIDIA_TF32_OVERRIDE=0 transformers-cli pt-to-tf --model-name facebook/xglm-2.9B --new-weights --max-error 3e-3
```
Gets you the following exception (in the sharding code)
```
Traceback (most recent call last):
  File "/home/joao/hf/bin/transformers-cli", line 8, in <module>
    sys.exit(main())
  File "/home/joao/transformers/src/transformers/commands/transformers_cli.py", line 55, in main
    service.run()
  File "/home/joao/transformers/src/transformers/commands/pt_to_tf.py", line 309, in run
    tf_from_pt_model.save_pretrained(self._local_dir)
  File "/home/joao/transformers/src/transformers/modeling_tf_utils.py", line 2020, in save_pretrained
    param_dset = shard_file.create_dataset(
  File "/home/joao/hf/lib/python3.8/site-packages/h5py/_hl/group.py", line 161, in create_dataset
    dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
  File "/home/joao/hf/lib/python3.8/site-packages/h5py/_hl/dataset.py", line 156, in make_new_dset
    dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5d.pyx", line 84, in h5py.h5d.create
TypeError: expected bytes, str found
```

### Expected behavior
Successful sharding :D
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18776/timeline
completed
null
null