Dataset record: one row from a dump of `huggingface/transformers` GitHub issues. Fields: `url`, `repository_url`, `labels_url`, `comments_url`, `events_url`, `html_url`, `id`, `node_id`, `number`, `title`, `user`, `labels`, `state`, `locked`, `assignee`, `assignees`, `comments`, `created_at`, `updated_at`, `closed_at`, `author_association`, `active_lock_reason`, `body`, `reactions`, `timeline_url`, `state_reason`, `draft`, `pull_request`.
**Issue #22794 — LLaMA FastTokenizer does not add `eos_token_id` at the end.**

- html_url: https://github.com/huggingface/transformers/issues/22794
- url: https://api.github.com/repos/huggingface/transformers/issues/22794
- repository_url: https://api.github.com/repos/huggingface/transformers
- labels_url: https://api.github.com/repos/huggingface/transformers/issues/22794/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/transformers/issues/22794/comments
- events_url: https://api.github.com/repos/huggingface/transformers/issues/22794/events
- id: 1670000996 (node_id: I_kwDOCUB6oc5jijFk)
- user: [osainz59](https://github.com/osainz59) (id 25911658)
- labels: none
- state: closed
- locked: false
- assignee: [ArthurZucker](https://github.com/ArthurZucker) (id 48595927)
- assignees: ArthurZucker
**Comments**

---

Yes! Quick fix: use the slow tokenizer. Otherwise I'll open a PR to add template processing!
Thanks for reporting!

---

But it shouldn't add an `eos` token, right? The LM is not trained to generate a token after the `eos`, I believe.

---

> But it shouldn't add an `eos` token, right? The LM is not trained to generate a token after the `eos`, I believe.

Not by default, but if `add_eos_token=True` is specified, it should. You can always fine-tune the model to make it learn when to stop.
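The behavior discussed above can be pictured with a toy sketch. This is *not* the real `LlamaTokenizer` — just the special-token wrapping logic it is expected to apply; the ids (1 for bos, 2 for eos) follow the usual LLaMA convention but are assumptions here:

```python
# Toy model of the intended add_bos_token / add_eos_token behavior.
# Not the real tokenizer; ids 1 (bos) and 2 (eos) are assumed for illustration.
def build_inputs(token_ids, add_bos_token=True, add_eos_token=False,
                 bos_token_id=1, eos_token_id=2):
    """Wrap a list of content token ids with the configured special tokens."""
    ids = list(token_ids)
    if add_bos_token:
        ids = [bos_token_id] + ids
    if add_eos_token:
        ids = ids + [eos_token_id]
    return ids

# Default: bos only -- this matches what the issue reports.
print(build_inputs([100, 200]))                      # [1, 100, 200]
# With add_eos_token=True the eos id should be appended at the end.
print(build_inputs([100, 200], add_eos_token=True))  # [1, 100, 200, 2]
```

The bug report, in these terms, is that the fast tokenizer ignored the `add_eos_token=True` branch entirely.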
---

I guess they would set the `pad_token_id` using the `eos_token_id`?
`model.config.pad_token_id = model.config.eos_token_id`

---

Same here: setting `add_eos_token=True` doesn't do anything.

---

This should have been fixed by #22959

---

> I guess they would set the `pad_token_id` using the `eos_token_id`? `model.config.pad_token_id = model.config.eos_token_id`

I believe if you just set `pad_token = eos_token`, the model still is not learning to predict the `eos_token`, because the corresponding `attn_mask` does not include the token and the `labels` ignore that token — i.e. no loss is computed for it. Not 100% sure about this, but that is what it seemed like from some self-exploration.
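The masking step described in the comment above (and spelled out in a later comment quoting the collator source) can be sketched in plain Python. This is a simplified list-based stand-in for what `DataCollatorForLanguageModeling` does with tensors, not the real implementation:

```python
IGNORE_INDEX = -100  # CrossEntropyLoss skips label positions with this value

def mask_pad_labels(input_ids, pad_token_id):
    """Mimic the collator's label step: labels are a copy of input_ids with
    every pad position replaced by -100 (plain lists, not tensors)."""
    return [IGNORE_INDEX if t == pad_token_id else t for t in input_ids]

eos_id = 2
seq = [1, 5, 9, eos_id]  # bos, two content tokens, eos

# With a pad id distinct from eos, the eos label survives and gets a loss:
print(mask_pad_labels(seq, pad_token_id=0))       # [1, 5, 9, 2]
# With pad_token == eos_token, the eos label is masked -- no loss, so the
# model never learns to emit eos:
print(mask_pad_labels(seq, pad_token_id=eos_id))  # [1, 5, 9, -100]
```

This is the mechanism behind "the model never stops generating" reported later in the thread.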
---

The same is happening with Falcon...

---

When you say "the same", what do you mean?

---

That it doesn't generate `<|endoftext|>` (token id 11) when calling `generate`, therefore it never stops generating. I have tried setting `eos_token_id` to 193, which corresponds to `\n`, but I don't think that's a clean fix. I have noticed that when tokenizing the inputs with the Falcon-40b tokenizer, it's not adding `eos_token_id` at the end of the input ids.

---

A few things here.
Llama has no official model, so make sure the one you are using is up to date and has the same eos token id in the model config / generation config and the tokenizer.

For Falcon, the code is on the Hub, but the latest code of transformers adds the eos if you set `add_eos_token=True`. In the docs for Llama you can find that initializing a tokenizer with `add_eos_token=True` will make it add the eos when tokenizing.

---

Actually I was talking about Falcon, not Llama, because I'm facing an issue similar to the ones people are reporting with Llama. In fact I upgraded my transformers version to the latest on the `main` branch, and the problem persists... The model never generates an EOS token, so it never stops generating...
I have tried to explicitly add the string `"<|endoftext|>"` at the end of the inputs for fine-tuning, but it still doesn't work.

What can I do to make Falcon generate an EOS token?

---

The issue is different: the model not stopping does not mean that it is not *adding* the `eos_token`, but rather not *predicting* it.
The problem with LLaMA has already been mentioned here: #23230

---

I thought it could be related. My hypothesis was that Falcon wasn't generating the EOS token because it wasn't being included in the inputs when tokenizing, so when we train the model over inputs without an EOS token at the end, the model doesn't learn to generate it.
---

@avacaondata - I have noticed this same issue, where the model is not learning to predict the EOS token. After doing some digging through several examples and source code, I've noticed something a bit strange, particularly related to the `DataCollatorForLanguageModeling`. A very typical pattern that I have seen suggested is the following:

```python
from transformers import DataCollatorForLanguageModeling

tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```

However, the problem I see with this approach is that when the DataCollator overrides or generates the `labels` field for the batch, it sets all `tokens == pad_token` to `-100`:

```python
labels = batch["input_ids"].clone()
if self.tokenizer.pad_token_id is not None:
    labels[labels == self.tokenizer.pad_token_id] = -100
batch["labels"] = labels
```

Since the `CrossEntropy` loss ignores tokens with `-100`, even if the tokenizer we are using properly adds the `eos_token`, the loss function will actually ignore this token.

Ways that I have worked around this issue are either (1) to ensure that `eos_token_id != pad_token_id` and make sure that the tokenizer includes the `eos_token` when tokenizing (some do this automatically, such as the T5 tokenizer), or (2) to create the labels column myself when tokenizing — by cloning `input_ids` — and then use the `DataCollatorForSeq2Seq`. I actually really like the `DataCollatorForSeq2Seq` because it automatically pads the inputs and labels, but does not mess with tokens in unexpected ways, such as the eos_token.

Hope this is helpful!

---

@jonathangomesselman Thank you very much for the clear explanation, it makes much sense!

I will change the label for the eos token so that it's not ignored by cross entropy anymore.

Ideally I think that for instruction-tuning we shouldn't use `DataCollatorForLanguageModeling`; in this paper they did some experiments and found that training only over outputs typically works better: https://arxiv.org/pdf/2305.14314.pdf. However, I haven't found a way to make `DataCollatorForSeq2Seq` work for decoder-only models such as Llama or Falcon. Do you have any code on how to do that?
---

@avacaondata - You're welcome!

I have generally followed this practice as well — just fine-tuning over the model outputs, since generally I don't need the model to directly learn the statistical distribution over human instructions, but rather just how to "react" to them.

Continuing from above, to use the `DataCollatorForSeq2Seq` for decoder-only models we need to manually create the `labels` field when tokenizing our data — i.e. ensuring we have the fields `input_ids`, `attention_mask`, and `labels`. Since we create the `labels` ourselves, we have control over which tokens we explicitly train over vs. which we want to ignore (using `-100` as a label). Here is the skeleton of some code you could use to tokenize the inputs:

```python
from transformers import LlamaTokenizerFast

tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
# By default the bos_token is added and not the eos_token. For instruction tuning I often ignore bos_token.
tokenizer.add_bos_token = False
tokenizer.add_eos_token = True

def create_instruction_tuned_format(data_row):
    return f"""<User Instruction>: {data_row["instruct"]}
<Agent Response>: {data_row['response']}
""".strip()

def tokenize(data_row):
    """Format and tokenize instruction tuning data

    1) Combine the user input (instruction) and agent response
    2) Create `labels` - ensuring we only fine-tune over the
       desired agent response
    """
    model_input_text = create_instruction_tuned_format(data_row)
    # Tokenize the full model input
    model_input = tokenizer(
        model_input_text,
        truncation=True,
        padding=False,
        return_tensors=None
    )

    # Create `labels` - ignoring user input (instructions)
    agent_response = tokenizer(data_row['response']).input_ids
    num_tokens_ignore = len(model_input['input_ids']) - len(agent_response)
    ignored_tokens = [-100] * num_tokens_ignore
    # Copy over the ids for the desired agent response
    model_input['labels'] = ignored_tokens \
        + model_input['input_ids'][-len(agent_response):]

    # Just to demonstrate length equality
    assert len(model_input['labels']) == len(model_input['input_ids'])

    return model_input

tokenized_ds = ds.map(tokenize, remove_columns=ds.column_names)
```

A couple of things to note/highlight:
1. We combine the user instruction and agent response using a very simple format. In the [LIMA paper](https://arxiv.org/pdf/2305.11206.pdf), for example, they introduce a new EOT (end-of-turn) token to separate the instruction and the response.
2. We tokenize the response to figure out the number of fine-tuning tokens at the end of the full token sequence.

Now that we have our data tokenized and formatted, we can use the `DataCollatorForSeq2Seq` as follows:

```python
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForSeq2Seq(
    tokenizer, return_tensors="pt", padding=True
)

batch_size = 8
train_dataloader = DataLoader(
    tokenized_ds, shuffle=True, collate_fn=data_collator, batch_size=batch_size, pin_memory=True
)
```

Note that the LLaMA tokenizer by default does not have a `pad_token`, so we have to set it. Because we are using the `DataCollatorForSeq2Seq`, it is okay for us to set the padding token to the `eos_token`, as the collator does not create the labels tensor but rather just pads our existing labels tensor with `-100` — i.e. the `eos_token` will not be ignored/replaced.

This may not be the most standard approach — but this is an example of what I have found to work / have seen some repos roughly follow. The main idea is that by creating the `labels` ourselves, we are able to set `-100` for tokens that we don't want to fine-tune over + ensure that we learn to generate the `eos_token`.
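The property the recipe above relies on — `DataCollatorForSeq2Seq` pads pre-built `labels` with `-100` while padding `input_ids` with the pad token, so an eos already in `labels` is never overwritten — can be sketched in plain Python. This is a simplified list-based stand-in for illustration, not the real collator:

```python
def pad_batch(features, pad_token_id, label_pad_token_id=-100):
    """Rough sketch of DataCollatorForSeq2Seq padding (lists, not tensors):
    input_ids are padded with pad_token_id, pre-built labels with -100,
    so an eos label already present in `labels` is left untouched."""
    max_len = max(len(f["input_ids"]) for f in features)
    batch = {"input_ids": [], "attention_mask": [], "labels": []}
    for f in features:
        n_pad = max_len - len(f["input_ids"])
        batch["input_ids"].append(f["input_ids"] + [pad_token_id] * n_pad)
        batch["attention_mask"].append([1] * len(f["input_ids"]) + [0] * n_pad)
        batch["labels"].append(f["labels"] + [label_pad_token_id] * n_pad)
    return batch

eos_id = 2
feats = [
    {"input_ids": [5, 6, eos_id], "labels": [-100, 6, eos_id]},  # eos kept in labels
    {"input_ids": [7, eos_id],    "labels": [7, eos_id]},
]
# Even when padding with the eos id itself, the existing labels are preserved:
out = pad_batch(feats, pad_token_id=eos_id)
print(out["labels"])  # [[-100, 6, 2], [7, 2, -100]]
```

Contrast this with the language-modeling collator, which would have rebuilt `labels` from `input_ids` and masked every eos position as padding.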
---

Wow @jonathangomesselman Thank you so much for the clear explanation... :heart_eyes:

I tried it and yes, it works flawlessly. I will check the LIMA paper in detail too for that EOT special token; I think that's an interesting approach.

Again, thank you so much, you were extremely helpful!! :heart:

---

@avacaondata you're welcome! I had very similar questions to what you asked and found myself a bit surprised to *not* find many good resources. Thankfully the HuggingFace code repos are actually quite readable, especially in separating the complex model logic of the base pre-trained transformer models (encoder-decoder + decoder-only) vs. adding the "language modeling" head (see sub-classes with `...ConditionalGeneration`, `...CausalLM`, `...LMHeadModel`).

If you're curious, I would definitely recommend looking at the code to learn more. Each model has a slightly different naming convention, but you will see that the logic is nearly identical. Some to check out are:
- [T5ForConditionalGeneration](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1528) (encoder-decoder)
- [LlamaForCausalLM](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L613) (decoder-only)
- [GPT2LMHeadModel](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/gpt2/modeling_gpt2.py#L959) (decoder-only)

Have fun exploring!
---

@jonathangomesselman thanks a lot!

I was also running into this issue where the model was unable to output the eos_token after fine-tuning. I also followed examples where they set `tokenizer.pad_token = tokenizer.eos_token`. Following your earlier comment, I made sure `tokenizer.pad_token != tokenizer.eos_token` by setting `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and used `DataCollatorForLanguageModeling` as before, e.g.

```python
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```

Now the model finally outputs the eos_token as intended!

---

@georgesung Thanks for sharing this approach! Adding a new `[PAD]` token is a great way to differentiate between it and the `EOS` token — which, as you say, allows you to then use the native `DataCollatorForLanguageModeling`. It is very interesting / odd to me that this is such a common problem, given that it seems sort of obvious that we want this behavior. But regardless, it is exciting to see the model finally start outputting the `eos_token` 😅. An interesting thing I noticed is that this is generally not an issue with encoder-decoder models such as T5. With these models the tokenizer generally adds the `eos_token` by default, and the collators used don't have this problem of ignoring the `eos_token` by treating it as a padding token.

@avacaondata We can use a similar approach to add the `EOT` token described in the LIMA paper for separating the `instruction` and the `response`.
---

I think this could be a great TIP addition to the documentation / blog! If any of you has time to open a PR, feel free to do so and ping me! 🤗

---

@ArthurZucker - I would be happy to work on this! Where do you think it would be best to add this TIP?

---

Probably in the `llama.md`!
---

What is the correct code for Falcon? I'm still puzzled.

Related links:
- Discord: https://discord.com/channels/879548962464493619/1126681170957045770/1126681170957045770
- HF: https://discuss.huggingface.co/t/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token/45954
- SO: https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token

---

@georgesung question:

> tokenizer.add_special_tokens({'pad_token': '[PAD]'})

But this assumes the model has a `pad_token`. I think an additional check has to be done that it does have an embedding for `pad_token`, so that there are no runtime errors (~type errors in the row extraction from the embedding "table"/matrix).

But if one does that, some care might be needed when initializing the new token so that it dominates the generation: https://nlp.stanford.edu/~johnhew/vocab-expansion.html
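The initialization concern raised above (and in the linked vocab-expansion note) is usually addressed by setting the new token's embedding to the mean of the existing embeddings rather than leaving it randomly initialized. A toy numpy sketch of that heuristic — the matrix shapes and values here are made up for illustration, and the real workflow would operate on `model.get_input_embeddings().weight` after `resize_token_embeddings`:

```python
import numpy as np

def expand_embeddings(emb, n_new=1):
    """Append n_new rows to an embedding matrix, initializing each new row
    to the mean of the existing embeddings (the mean-init heuristic from
    the linked note) instead of leaving it randomly initialized."""
    mean_row = emb.mean(axis=0, keepdims=True)
    return np.concatenate([emb] + [mean_row] * n_new, axis=0)

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8)).astype(np.float32)  # toy 4-token vocab, dim 8
new_emb = expand_embeddings(emb)                  # add one [PAD] row
print(new_emb.shape)                              # (5, 8)
```

As noted later in the thread, if the pad positions are masked out of the loss anyway, the exact values matter little for training; mean-init mainly avoids the new token distorting output probabilities before any fine-tuning.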
---

@brando90

> But this assumes the model has a pad_token

I haven't confirmed, but I think `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` is equivalent to `tokenizer.pad_token = '[PAD]'` (edit: might be wrong about that). So if there are runtime errors with `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`, then there would also be runtime errors with `tokenizer.pad_token = tokenizer.eos_token` — note `tokenizer.eos_token` is just a string. But I observed runtime errors with neither. I just observed that when I set `tokenizer.pad_token = tokenizer.eos_token` during training, the model won't stop generating during inference, since it was trained to not output the eos token (per the discussion above).

Since I was working with open_llama_7b, I saw that even though the model's tokenizer didn't specify a pad token string in its [tokenizer_config.json](https://huggingface.co/openlm-research/open_llama_7b/blob/main/tokenizer_config.json), it still had a row in its token embedding matrix for the pad token. If you run `print(model)`, you can see its token embedding matrix has an index reserved for the pad token (idx 0 in this case):

```
> print(model)

LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
..
```

You can also see the pad token's embedding itself: `model.state_dict()['model.embed_tokens.weight'][0]`. Although from the discussions above and also [this discussion](https://stackoverflow.com/questions/73155719/do-weights-of-the-pad-token-have-a-function), it doesn't seem to matter what the actual embeddings are for the pad token.
---

@georgesung unfortunately I'm working with Falcon. It doesn't have a pad token, to my surprise (I'm not sure how this even happens in the first place, tbh):

```
Loading checkpoint shards: 100%|██████████| 8/8 [00:10<00:00, 1.36s/it]
type(model)=<class 'transformers_modules.tiiuae.falcon-7b.2f5c3cd4eace6be6c0f12981f377fb35e5bf6ee5.modelling_RW.RWForCausalLM'>
type(tokenizer)=<class 'transformers.tokenization_utils_fast.PreTrainedTokenizerFast'>
Using pad_token, but it is not set yet.
tokenizer.pad_token=None
type(peft_config)=<class 'peft.tuners.lora.LoraConfig'>
model=RWForCausalLM(
  (transformer): RWModel(
    (word_embeddings): Embedding(65024, 4544)
    (h): ModuleList(
      (0-31): 32 x DecoderLayer(
        (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
        (self_attention): Attention(
          (maybe_rotary): RotaryEmbedding()
          (query_key_value): Linear4bit(in_features=4544, out_features=4672, bias=False)
          (dense): Linear4bit(in_features=4544, out_features=4544, bias=False)
          (attention_dropout): Dropout(p=0.0, inplace=False)
        )
        (mlp): MLP(
          (dense_h_to_4h): Linear4bit(in_features=4544, out_features=18176, bias=False)
          (act): GELU(approximate='none')
          (dense_4h_to_h): Linear4bit(in_features=18176, out_features=4544, bias=False)
        )
      )
    )
    (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)

---- start Print all special tokens
eos_token: <|endoftext|>
additional_special_tokens: ['>>TITLE<<', '>>ABSTRACT<<', '>>INTRODUCTION<<', '>>SUMMARY<<', '>>COMMENT<<', '>>ANSWER<<', '>>QUESTION<<', '>>DOMAIN<<', '>>PREFIX<<', '>>SUFFIX<<', '>>MIDDLE<<']

---- end Print all special tokens
model.get_input_embeddings().weight.size()=torch.Size([65024, 4544])
pad_embedding=tensor([[[-0.0179,  0.0201, -0.0273,  ..., -0.0275, -0.0396, -0.0131],
         [-0.0510, -0.0079, -0.0383,  ..., -0.0481,  0.0581,  0.0282],
         [-0.0217, -0.0216, -0.0064,  ..., -0.0508,  0.0554, -0.0013],
         ...,
         [ 0.0425,  0.0452, -0.0131,  ...,  0.0019,  0.0476,  0.0342],
         [-0.0170, -0.0085,  0.0449,  ..., -0.0074,  0.0178,  0.0043],
         [-0.0439, -0.0859, -0.0820,  ...,  0.0130,  0.0669,  0.0884]]],
       device='cuda:0', dtype=torch.float16, grad_fn=<UnsqueezeBackward0>)
Success!
/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
Traceback (most recent call last):
  File "/lfs/hyperturing1/0/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/hf_uu/model_tokenizer/falcon_uu_mdl_tok.py", line 190, in <module>
    example_test_model_already_has_pad_token()
  File "/lfs/hyperturing1/0/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/hf_uu/model_tokenizer/falcon_uu_mdl_tok.py", line 182, in example_test_model_already_has_pad_token
    tokenizer.decode(model.generate(**tokenizer(sent, return_tensors='pt'), do_sample=True)[0])
  File "/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/transformers/generation/utils.py", line 1271, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/lfs/hyperturing1/0/brando9/miniconda/envs/data_quality/lib/python3.10/site-packages/transformers/generation/utils.py", line 1144, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
```

code:

```python
    # qlora falcon7b
    from uutils.hf_uu.model_tokenizer.falcon_uu_mdl_tok import get_model_tokenizer_qlora_falcon7b
    model, tokenizer, peft_config = get_model_tokenizer_qlora_falcon7b()
    print(f'{model=}')
    sent = 'Dogs are great because they are '
    print()

    # print to see if pad tokens are present and if it ignores the tokens at the end
    # encoded_input = tokenizer(sent, padding='max_length', max_length=10, return_tensors='pt')
    # sys.exit()

    # Print all special tokens
    print('\n---- start Print all special tokens')
    for token_name, token in tokenizer.special_tokens_map.items():
        print(f"{token_name}: {token}")
    print('\n---- end Print all special tokens')

    # Get the ID for the '[PAD]' token
    try:
        pad_token_id = tokenizer.convert_tokens_to_ids('[PAD]')
    except KeyError:
        raise ValueError("Token [PAD] is not present in the tokenizer vocabulary.")

    # Index into the model's embedding table
    try:
        print(f'{model.get_input_embeddings().weight.size()=}')
        pad_embedding = model.get_input_embeddings().weight[pad_token_id]
    except IndexError:
        raise ValueError(f"Token ID {pad_token_id} is not present in the model's embedding matrix.")

    print(f'{pad_embedding=}')
    print('Success!')

    # check it generates something sensible
    tokenizer.decode(model.generate(**tokenizer(sent, return_tensors='pt'), do_sample=True)[0])
    print('Success2!')
```
---

I think I just need to add it to the tokenizer and the model. Since during fine-tuning/training the pad token would be ignored anyway, adding a random set of weights to the embedding matrix wouldn't matter; it wouldn't be updated anyway.

Code:

```python
    # - Get falcon 4bit model
    # todo, where is this being saved & how to download quicker
    model = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        quantization_config=bnb_config,
        trust_remote_code=True  # allows to execute custom code you download from the uploaded model code you are using
    )
    # this is here to save gpu vram. Likely only needed when using 40b or when oom issues happen ref: https://stackoverflow.com/questions/76633335/why-does-hugging-face-falcon-model-use-mode-config-use-cache-false-why-wouldn
    model.config.use_cache = use_cache
    print(f'{type(model)=}')

    # - Get falcon tokenizer
    tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path,
                                              trust_remote_code=True)  # execs code downloaded from hf hub
    # tokenizer.pad_token = tokenizer.eos_token  # todo: why? https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})  # I think this is fine if during the training pad is ignored
    model.resize_token_embeddings(len(tokenizer))  # todo: I think this is fine if during the training pad is ignored
    print(f'{type(tokenizer)=}')
    print(f'{tokenizer.pad_token=}')
```

----

So close... darn, this still doesn't work:

```
UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
```

code:

```python
"""
sfttrainer (likely using peft) best practices:
https://huggingface.co/docs/trl/main/en/sft_trainer#best-practices

Best practices

Pay attention to the following best practices when training a model with that trainer:

- SFTTrainer always pads by default the sequences to the max_seq_length argument of the SFTTrainer. If none is passed, the trainer will retrieve that value from the tokenizer. Some tokenizers do not provide default value, so there is a check to retrieve the minimum between 2048 and that value. Make sure to check it before training.
- For training adapters in 8bit, you might need to tweak the arguments of the prepare_model_for_int8_training method from PEFT, hence we advise users to use prepare_in_int8_kwargs field, or create the PeftModel outside the SFTTrainer and pass it.
- For a more memory-efficient training using adapters, you can load the base model in 8bit, for that simply add load_in_8bit argument when creating the SFTTrainer, or create a base model in 8bit outside the trainer and pass it.
- If you create a model outside the trainer, make sure to not pass to the trainer any additional keyword arguments that are relative to from_pretrained() method.

todo: why trust_remote_code? I want more details.
"""
import sys

import torch
from peft import LoraConfig

from transformers.modeling_utils import PreTrainedModel

from pdb import set_trace as st


def test_bfloat16_int4(compute_dtype: torch.dtype,
                       use_4bit,
                       ):
    """
python -c "import torch; print(torch.cuda.get_device_capability());"
    todo: check other code test_bfloat16() do we need use_4bit?
    """
    if compute_dtype == torch.float16 and use_4bit:
        major, _ = torch.cuda.get_device_capability()
        if major >= 8:
            print("=" * 80)
            print("Your GPU supports bfloat16, you can accelerate training with the argument --bfloat16")
            print("=" * 80)


def get_model_tokenizer_qlora_falcon7b(
        # -- model args
        # model_id = "tiiuae/falcon-7b"
        pretrained_model_name_or_path: str = "ybelkada/falcon-7b-sharded-bf16",
        use_cache: bool = True,
        # -- lora args
        lora_alpha=16,  # todo
        lora_dropout=0.1,  # todo, evidence drop out really help? google, crfm, gpt4
        lora_r=64,  # todo
        bnb_4bit_compute_dtype=torch.float16,  # changed it from Guanaco hf

        # -- training args
        output_dir="./results",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        # paging so that the sudden mem gpu spikes don't cause the run to shut down
        # (I think usually caused by too long seqs)
        # todo: why 32 bit opt?
        # todo: paged nadamw opt?
        optim="paged_adamw_32bit",
        save_steps=10,
        logging_steps=10,
        learning_rate=2e-4,
        max_grad_norm=0.3,
        max_steps=500,
        warmup_ratio=0.03,
        lr_scheduler_type="constant",
        # -- quant. args (not recommended to be changed unless you know what you're doing?)
        load_in_4bit=True,  # load (usually huge) base model in 4 bits
        bnb_4bit_quant_type="nf4",  # normal float 4 for the (large) base models qlora
) -> tuple:
    """
    Load the Falcon 7B model, quantize it in 4bit and attach LoRA adapters on it.

    bf16 = 1S, 7Exp, 8Mantissa
    hypothesis: 7b trained due to 6.7 emergence rumour, I still don't think emergence is real.
    Notes:
        - ft a model is very specific to the model, tokenizer and training scheme. Thus we return
        - model, tokenizer, ft config (peft config), training args

    ref:
        - https://colab.research.google.com/drive/1DOi8MFv4SWN9NImVornZ7t6BgmLoPQO-#scrollTo=AjB0WAqFSzlD
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # - Get bnb config for bit-4 base model (bnb lib for using 4bit qlora quantization techniques by tim dettmers)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=load_in_4bit,  # load (usually huge) base model in 4 bits
        bnb_4bit_quant_type=bnb_4bit_quant_type,  # normal float 4 for the (usually huge) base model
        bnb_4bit_compute_dtype=bnb_4bit_compute_dtype,  # if you can, during computation use bf16
    )

    # - Get falcon 4bit model
    # todo, where is this being saved & how to download quicker
    model = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        quantization_config=bnb_config,
        trust_remote_code=True  # allows to execute custom code you download from the uploaded model code you are using
    )
    print(f'{type(model)=}')
    print(f'{model=}')
    # this is here to save gpu vram. Likely only needed when using 40b or when oom issues happen ref: https://stackoverflow.com/questions/76633335/why-does-hugging-face-falcon-model-use-mode-config-use-cache-false-why-wouldn
    model.config.use_cache = use_cache
    print(f'{type(model)=}')

    # - Get falcon tokenizer
    tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path,
                                              trust_remote_code=True)  # execs code downloaded from hf hub
    # tokenizer.pad_token = tokenizer.eos_token  # ref: https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token
    # tokenizer.add_special_tokens({'pad_token': '[PAD]'})  # I think this is fine if during the training pad is ignored
    tokenizer.add_special_tokens({'pad_token': '<|pad|>'})  # I think this is fine if during the training pad is ignored

    # - Modify model
    # add pad token embed
    model.resize_token_embeddings(len(tokenizer))  # todo: I think this is fine if during the training pad is ignored
    model.transformer.word_embeddings.padding_idx = len(tokenizer) - 1
    model.config.max_new_tokens = len(tokenizer)
    # model.config.min_length = 1
    print(f'{model=}')
    print(f'{type(tokenizer)=}')
    print(f'{tokenizer.pad_token=}')
    # data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  todo

    # - Get falcon lora config
    peft_config = LoraConfig(
        lora_alpha=lora_alpha,
        lora_dropout=lora_dropout,
        r=lora_r,
        bias="none",
        task_type="CAUSAL_LM",
        # model card for falcon tiiuae/falcon-7b: https://huggingface.co/tiiuae/falcon-7b/blob/main/modelling_RW.py
        # does seem to include all trainable params as done by qlora on their own paper
        target_modules=[
            # word_embeddings,
            "query_key_value",
            "dense",
            "dense_h_to_4h",
            "dense_4h_to_h",
            # "lm_head"
        ]
    )
    print(f'{type(peft_config)=}')

    # todo: print the num params of the lora = D1*r + D2*r and num of bytes by prec. (bytes) * num params
    return model, tokenizer, peft_config


# -- tests

def example_test_model_already_has_pad_token():
    """
    if it already has pad token, it likely has a small prob, so we are done.

    compare its norm with other tokens to verify this is true.

python ~/ultimate-utils/ultimate-utils-proj-src/uutils/hf_uu/model_tokenizer/falcon_uu_mdl_tok.py
    """
    # - get the datasets todo: preprocessing, padding, streaming
    from uutils.hf_uu.data_hf.common import get_guanaco_datsets_add_splits_train_test_only
    trainset, _, testset = get_guanaco_datsets_add_splits_train_test_only()

    # qlora falcon7b
    from uutils.hf_uu.model_tokenizer.falcon_uu_mdl_tok import get_model_tokenizer_qlora_falcon7b
    model, tokenizer, peft_config = get_model_tokenizer_qlora_falcon7b()
    model: PreTrainedModel = model
    print(f'{model=}')
    sent = 'Dogs are great because they are '
    print()

    # print to see if pad tokens are present and if it ignores the tokens at the end
    encoded_input = tokenizer(sent, padding='max_length', max_length=10, return_tensors='pt')
    print(f'{encoded_input=}')

    # Print all special tokens
    print('\n---- start Print all special tokens')
    for token_name, token in tokenizer.special_tokens_map.items():
        print(f"{token_name}: {token}")
    print('\n---- end Print all special tokens')

    # Get the ID for the '[PAD]' token
    try:
        pad_token_id = tokenizer.convert_tokens_to_ids('[PAD]')
    except KeyError:
        raise ValueError("Token [PAD] is not present in the tokenizer vocabulary.")

    # Index into the model's embedding table
    try:
        print(f'{model.get_input_embeddings().weight.size()=}')
        pad_embedding = model.get_input_embeddings().weight[pad_token_id]
    except IndexError:
        raise ValueError(f"Token ID {pad_token_id} is not present in the model's embedding matrix.")

    print(f'{pad_embedding=}')
    print('Success!\n')

    # check it generates something sensible
    # tokenizer.decode(model.generate(**tokenizer(sent, return_tensors='pt'), do_sample=True)[0])
    input_ids, attention_mask = encoded_input['input_ids'], encoded_input['attention_mask']
    predicted_tokens_ids_options = model.generate(input_ids=input_ids, attention_mask=attention_mask, do_sample=True)
    predicted_tokens_ids = predicted_tokens_ids_options[0]
    predicted_sent = tokenizer.decode(predicted_tokens_ids)
    print(f'original sentence: {sent=}')
    print(f'predicted sentence: {predicted_sent=}')
    print('Success2!')


if __name__ == '__main__':
    import time

    start_time = time.time()
    example_test_model_already_has_pad_token()
    print(f"The main function executed in {time.time() - start_time} seconds.\a")
```

it doesn't like these modifications to the model:

```python
    model.transformer.word_embeddings.padding_idx = len(tokenizer) - 1
    model.config.max_new_tokens = len(tokenizer)
```
"Hey @brando90 ! Thanks a lot for reporting and using `transformers`. This particular thread is not exactly the good place to have such huge chunks of codes and talk about another issue. My best recommendation is: \r\n- create a colab with your code, make it minimaly reproducible. Use small models so that it's faster for everyone who wants to take a look 🚀 !\r\n- share your colab and issue on the hugging face forum : https://discuss.huggingface.co/. If you don't get an answer form the community, try to ping me or anyone from the team! \r\n- properly format the part of your code. In this case the previous message is pretty much unreadable! Would love to help you make this work, but make sure you convey in a good format! \r\n- summarise your issue! (A tokenizer not having a pad tokens is pretty common, GPT2 was pretty much the same. When training, inputs can often be truncated rather than padded, to have as much information as possible).\r\n- check the documentation 📖 ! Especially regarding how to modify generation parameters such as `pad_token`, `max_new_tokens` etc . You should have a look [here](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig). This will remove the warning that you were seeing. \r\n\r\nReading the post you created on the HF forum, you mention \r\n> it doesn’t like the modifications to the model: \r\n\r\nBut since there is no traceback, this is very vague! A colab will show the outputs you got, making it easier to understand. \r\nAlso regarding padding token and not padding token, I believe this is a very important question and if we should review how we resize the embedding, so be it! Some model's embedding are usually always bigger than the length of the tokenizer to allow adding new tokens / be a power of X to make it faster."
] | 1,681
| 1,707
| 1,684
|
NONE
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.13.3
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 2.1.0.dev20230411+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
As mentioned in the title, the LLaMA tokenizer does not add the `eos_token` at the end of the inputs. This only happens with the fast version (`use_fast=True`).
Steps to reproduce the behaviour:
1. Load the LLaMA tokenizer
```python
tokenizer = AutoTokenizer.from_pretrained(LLAMA_PATH, add_eos_token=True, use_fast=True)
```
2. Tokenize something
```python
simple_sentence = "This is a sentence to test if the tokenizer adds eos token."
simple_sentence_ids = tokenizer(
simple_sentence, add_special_tokens=True
).input_ids
```
3. Print the `input_ids` to check if the `eos_token_id` (`2`) is added at the end.
```python
print(simple_sentence_ids)
```
4. Output:
```python
[1, 910, 338, 263, 10541, 304, 1243, 565, 278, 5993, 3950, 12778, 321, 359, 5993, 29889]
```
### Expected behavior
Expected output
```python
[1, 910, 338, 263, 10541, 304, 1243, 565, 278, 5993, 3950, 12778, 321, 359, 5993, 29889, 2]
```
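For anyone hitting this before a fix lands, a minimal workaround is to append the `eos_token_id` manually after tokenizing. The `ensure_eos` helper below is a hypothetical illustration (not part of `transformers`), shown on the ids from the reproduction above:

```python
# Hypothetical helper (not part of transformers): append the eos id
# to a tokenized sequence if the tokenizer failed to add it.
def ensure_eos(input_ids, eos_token_id=2):
    """Return a copy of input_ids guaranteed to end with eos_token_id."""
    if input_ids and input_ids[-1] == eos_token_id:
        return list(input_ids)
    return list(input_ids) + [eos_token_id]

fast_output = [1, 910, 338, 263, 10541, 304, 1243, 565, 278,
               5993, 3950, 12778, 321, 359, 5993, 29889]
print(ensure_eos(fast_output)[-1])  # 2
```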
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22794/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22793
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22793/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22793/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22793/events
|
https://github.com/huggingface/transformers/pull/22793
| 1,669,994,906
|
PR_kwDOCUB6oc5OaohU
| 22,793
|
🌐 [i18n-KO] Translated `run_scripts.mdx` to Korean
|
{
"login": "HanNayeoniee",
"id": 33839093,
"node_id": "MDQ6VXNlcjMzODM5MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanNayeoniee",
"html_url": "https://github.com/HanNayeoniee",
"followers_url": "https://api.github.com/users/HanNayeoniee/followers",
"following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}",
"gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions",
"organizations_url": "https://api.github.com/users/HanNayeoniee/orgs",
"repos_url": "https://api.github.com/users/HanNayeoniee/repos",
"events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanNayeoniee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니당 -->
# What does this PR do?
Translated the `run_scripts.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- 제출 전 체크리스트로, 가짜연구소만의 체크리스트도 <details>로 감싸서 만들어두면 더 좋을 것 같아요. -->
## Who can review?
<!-- 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22793/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22793/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22793",
"html_url": "https://github.com/huggingface/transformers/pull/22793",
"diff_url": "https://github.com/huggingface/transformers/pull/22793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22793.patch",
"merged_at": 1682345901000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22792
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22792/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22792/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22792/events
|
https://github.com/huggingface/transformers/issues/22792
| 1,669,732,257
|
I_kwDOCUB6oc5jhheh
| 22,792
|
LLaMA 13B does not forward or generate properly after being converted to HuggingFace checkpoints
|
{
"login": "RiverGao",
"id": 56507857,
"node_id": "MDQ6VXNlcjU2NTA3ODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/56507857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RiverGao",
"html_url": "https://github.com/RiverGao",
"followers_url": "https://api.github.com/users/RiverGao/followers",
"following_url": "https://api.github.com/users/RiverGao/following{/other_user}",
"gists_url": "https://api.github.com/users/RiverGao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RiverGao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RiverGao/subscriptions",
"organizations_url": "https://api.github.com/users/RiverGao/orgs",
"repos_url": "https://api.github.com/users/RiverGao/repos",
"events_url": "https://api.github.com/users/RiverGao/events{/privacy}",
"received_events_url": "https://api.github.com/users/RiverGao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@RiverGao - could you retry running these steps with the most recent version of transformers? Llama was under a lot of development and has only be officially added to the library in the most recent release - v4.28.0 (4.28.0.dev is some commit between 4.27 and 4.28). This ensures that you have all the most recent updates to the model. ",
"I've encountered the same problem(specifically step 3) with version 4.29.0.dev0. \r\nA quick fix would be not quantizing the final 2 layers by manually passing the `qunatization_config`:\r\n\r\n```python\r\nfrom transformers.utils.quantization_config import BitsAndBytesConfig\r\n\r\nquantization_config = BitsAndBytesConfig.from_dict({\r\n 'load_in_8bit': True, 'llm_int8_skip_modules': ['model.layers.39', 'model.layers.38', 'lm_head']}, False)\r\n\r\nllama_13B = './models/13B_hf'\r\ntokenizer = LlamaTokenizer.from_pretrained(llama_13B)\r\nmodel = LlamaForCausalLM.from_pretrained(llama_13B, torch_dtype=torch.float16, \r\n quantization_config=quantization_config, load_in_8bit=False, device_map='auto')\r\n\r\n```",
"@aliwalker Thank you for the quick fix!",
"cc @younesbelkada ",
"@aliwalker @amyeroberts I have retried with updated transformers (version 4.29.0dev0), and the result is:\r\n- When using `BitsAndBytesConfig` mentioned above, the model is able to generate normal text, but it still produces NaN's in layer 39\r\n- When `load_in_8bit` is set to `False` for all layers, no NaN value is produced.\r\nSo, in conclusion, it may be a problem with the 8-bit quantization, instead of a problem with the converting script.",
"Hi @RiverGao \r\nThis is probably related to the fact that you are using a V100. Could you share with us your `bitsandbytes` version? Or alternatively update `bitsandbytes` and let us know if you still face the issue\r\n\r\n```bash\r\npip install --upgrade bitsandbytes\r\n```",
"@younesbelkada Thank you for your helpful information, I updated `bitsandbytes` from 0.37.2 to 0.38.1, and LLaMA 13B produced no NaN attention when loaded with `load_in_8bit=True`. However, when applied with [Alpaca-LoRA tuned weights](https://github.com/tloen/alpaca-lora), the model produces NaN's in layer 39 again if it is loaded in int8.",
"Hi @RiverGao \r\nInteresting, how you apply the LoRA weights on that model? Can you share a repro script?",
"@younesbelkada I applied the LoRA weights following using `export_hf_checkpoint.py` from [their repo](https://github.com/tloen/alpaca-lora) by\r\n`$ BASE_MODEL=path/to/converted/llama python export_hf_checkpoint.py`:\r\n```\r\n# export_hf_checkpoint.py\r\n\r\nimport os\r\n\r\nimport torch\r\nimport transformers\r\nfrom peft import PeftModel\r\nfrom transformers import LlamaForCausalLM, LlamaTokenizer # noqa: F402\r\n\r\nBASE_MODEL = os.environ.get(\"BASE_MODEL\", None)\r\nassert (\r\n BASE_MODEL\r\n), \"Please specify a value for BASE_MODEL environment variable, e.g. `export BASE_MODEL=huggyllama/llama-7b`\" # noqa: E501\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)\r\n\r\nbase_model = LlamaForCausalLM.from_pretrained(\r\n BASE_MODEL,\r\n load_in_8bit=False,\r\n torch_dtype=torch.float16,\r\n device_map={\"\": \"cpu\"},\r\n)\r\n\r\nfirst_weight = base_model.model.layers[0].self_attn.q_proj.weight\r\nfirst_weight_old = first_weight.clone()\r\n\r\nlora_model = PeftModel.from_pretrained(\r\n base_model,\r\n \"Angainor/alpaca-lora-13b\",\r\n device_map={\"\": \"cpu\"},\r\n torch_dtype=torch.float16,\r\n)\r\n\r\nlora_weight = lora_model.base_model.model.model.layers[\r\n 0\r\n].self_attn.q_proj.weight\r\n\r\nassert torch.allclose(first_weight_old, first_weight)\r\n\r\n# merge weights - new merging method from peft\r\nlora_model = lora_model.merge_and_unload()\r\n\r\nlora_model.train(False)\r\n\r\n# did we do anything?\r\nassert not torch.allclose(first_weight_old, first_weight)\r\n\r\nlora_model_sd = lora_model.state_dict()\r\ndeloreanized_sd = {\r\n k.replace(\"base_model.model.\", \"\"): v\r\n for k, v in lora_model_sd.items()\r\n if \"lora\" not in k\r\n}\r\n\r\nLlamaForCausalLM.save_pretrained(\r\n base_model, \"../alpaca-hf/13B\", state_dict=deloreanized_sd, max_shard_size=\"400MB\"\r\n)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,686
| 1,686
|
NONE
| null |
### System Info
## System info
- Transformer Version: 4.28.0.dev0
- Platform: Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-194-generic x86_64)
- GPU: Nvidia Tesla V100 SXM2 32GB
- CUDA version: 11.0
- PyTorch version: 1.12.1
## Summary
The LLaMA 13B model converted to HuggingFace format seems unable to generate any text, which may be due to NaN attention weights produced in the last layer.
## Bug description
- Downloaded LLaMA weights (7B and 13B) from facebook, and checked their generation by `llama/example.py` provided with them ✅
- Used `transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py` to convert 7B and 13B models into huggingface format ✅
- Loaded the 13B model with `LlamaForCausalLM.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and tried to generate text with `model.generate` and `tokenizer.batch_decode`, but the model simply echoed the given input and generated nothing else. ❌
- Loaded the 13B model with `LlamaModel.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and tried to obtain its attentions via `attentions = model(**encoded_input, output_attentions=True).attentions`, and found that all the attention weights in the last layer (layer 39) are NaN's. ❌
## Trying to debug
- Loaded the 7B model with `LlamaForCausalLM.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and checked its generation with `model.generate` and `tokenizer.batch_decode` ✅
- Loaded the 7B model with `LlamaModel.from_pretrained(model_path, device_map="balanced", load_in_8bit=True)` and obtained its attentions via `attentions = model(**encoded_input, output_attentions=True).attentions`; the attentions are fine ✅
- I suspected corruption in the model parameters, so I printed the parameters of layers 38 and 39, `model.norm.weight`, and `lm_head.weight` of the 13B model, but found no obvious anomalies.
### Who can help?
@ArthurZucker @sgugger
### Information
- [x] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Step 1: Convert LLaMA weights to HuggingFace
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir downloaded_llama_weights --model_size 13B --output_dir llama-hf
```
Step 2: Try to generate text
```
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = f'llama-hf/13B'
sentences = [
'Hello!',
'Translate this sentence into German: I love baseball.',
'Tell me about New York.'
]
# model initialization
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, device_map= "balanced", load_in_8bit=True)
# iterate over sentences
for sentence in sentences:
print('-----\nInput:\n' + sentence + '\n-----')
encoded_input = tokenizer(sentence, return_tensors='pt').to('cuda')
generate_ids = model.generate(encoded_input.input_ids, max_length=256)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print('-----\nOutput:\n' + output + '\n-----')
```
Step 3: Try to obtain model attentions
```
# model initialization
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaModel.from_pretrained(model_path, device_map= "balanced", load_in_8bit=True, torch_dtype=torch.float32)
n_layer = model.config.num_hidden_layers
# iterate over sentences
attention_per_sentence = [[] for j in range(n_layer)] # n_layer * n_sent * (n_head, max_sent_len, max_sent_len)
for sentence in sentences:
encoded_input = tokenizer(sentence, return_tensors='pt')
attentions = model(**encoded_input, output_attentions=True).attentions # n_layer * shape (1, n_head, L, L)
for lyr in range(n_layer):
attn_array = attentions[lyr].detach().cpu().numpy()
assert not np.isnan(np.sum(attn_array)), f"layer {lyr} has NaN attentions"
```
### Expected behavior
- In step 1, everything works fine.
- In step 2, the outputs are the same as inputs, no token is generated
- In step 3, assertion fails in layer 39: "AssertionError: layer 39 has NaN attentions"
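The per-layer assertion in step 3 can be generalized into a small scan that reports every layer containing NaNs, rather than stopping at the first. The sketch below uses toy NumPy arrays standing in for real model attentions; `nan_layers` is a hypothetical helper, not a transformers API:

```python
import numpy as np

# Hypothetical helper: given a list of per-layer attention arrays,
# return the indices of layers containing any NaN values.
def nan_layers(attentions):
    return [i for i, a in enumerate(attentions) if np.isnan(a).any()]

# Toy stand-in for real model attentions: layer 2 is corrupted.
attns = [np.zeros((1, 4, 5, 5)) for _ in range(3)]
attns[2][0, 0, 0, 0] = np.nan
print(nan_layers(attns))  # [2]
```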
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22792/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/22792/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22791
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22791/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22791/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22791/events
|
https://github.com/huggingface/transformers/issues/22791
| 1,669,707,803
|
I_kwDOCUB6oc5jhbgb
| 22,791
|
Able to load 'gpt_neox_reward_model' type models
|
{
"login": "sann3",
"id": 5269291,
"node_id": "MDQ6VXNlcjUyNjkyOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sann3",
"html_url": "https://github.com/sann3",
"followers_url": "https://api.github.com/users/sann3/followers",
"following_url": "https://api.github.com/users/sann3/following{/other_user}",
"gists_url": "https://api.github.com/users/sann3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sann3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sann3/subscriptions",
"organizations_url": "https://api.github.com/users/sann3/orgs",
"repos_url": "https://api.github.com/users/sann3/repos",
"events_url": "https://api.github.com/users/sann3/events{/privacy}",
"received_events_url": "https://api.github.com/users/sann3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sann3, could you follow the issue template please and provide: \r\n* Environment information printed out when running `transformers-cli env` in your terminal\r\n* A reproducible code snippet\r\n* Error and full traceback\r\n* Information about the expected behaviour",
"Hi, to add to this (same issue). The new models from OpenAssistant have `gpt_neox_reward_model` defined as the model type in `config.json` This doesn't map to any existing models as defined in `transformers\\models\\auto\\configuration_auto.py`.\r\n\r\nThe traceback for the error is as follows:\r\n\r\n```\r\nTraceback (most recent call last):\r\nFile “C:\\dev\\oobabooga-windows\\text-generation-webui\\server.py”, line 85, in load_model_wrapper\r\nshared.model, shared.tokenizer = load_model(shared.model_name)\r\nFile “C:\\dev\\oobabooga-windows\\text-generation-webui\\modules\\models.py”, line 186, in load_model\r\nmodel = LoaderClass.from_pretrained(checkpoint, **params)\r\nFile “C:\\dev\\oobabooga-windows\\installer_files\\env\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py”, line 441, in from_pretrained\r\nconfig, kwargs = AutoConfig.from_pretrained(\r\nFile “C:\\dev\\oobabooga-windows\\installer_files\\env\\lib\\site-packages\\transformers\\models\\auto\\configuration_auto.py”, line 937, in from_pretrained\r\nconfig_class = CONFIG_MAPPING[config_dict[“model_type”]]\r\nFile “C:\\dev\\oobabooga-windows\\installer_files\\env\\lib\\site-packages\\transformers\\models\\auto\\configuration_auto.py”, line 643, in getitem\r\nraise KeyError(key)\r\nKeyError: ‘gpt_neox_reward_model’\r\n```\r\nI'm experimenting by changing the value in `config.json` for the model to `gpt_neox`.\r\nI hope this helps.",
"@sann3 @sweetlilmre Could either of you share which checkpoint you're trying to use? \r\n\r\nOn one of the OpenAssistant [model repos](https://huggingface.co/OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1), they share a code snippet explaining how to use the checkpoint, including how to use their GPTNeoXRewardModel defined [here](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/models/reward_model.py). \r\n\r\n",
"Hi @amyeroberts I'm using the model with [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) so my low level knowledge is almost zero here. Thank you for the links, I'll check them out, but I think this is probably a capability mismatch between the text-generation-webui project and the new models from OpenAssistant. I really don't want to waste your time on this.",
"@amyeroberts I am trying the following models, the [link](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/models/reward_model.py) you have shared might help\r\nOpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5\r\nOpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1",
"@sann3 - yes, for those checkpoints the instructions on the [model repo](https://huggingface.co/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5#how-to-use) should enable you to use them. ",
"Able to load the model with https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/models/reward_model.py code."
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### Feature request
The new models from OpenAssistant are of type `gpt_neox_reward_model`, which the latest version of the transformers library does not support.
OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5
OpenAssistant/oasst-rm-2-pythia-6.9b-epoch-1
Loading them raises the following error:
KeyError: 'gpt_neox_reward_model'
-
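For context, the `KeyError` comes from a plain dictionary lookup over registered `model_type` strings. The stand-alone sketch below mimics that lookup with a toy mapping and shows the remapping idea discussed in the thread (treating the reward-model variant as plain `gpt_neox`); all names here are illustrative stand-ins, not the actual `transformers` internals:

```python
# Minimal stand-in for transformers' CONFIG_MAPPING lookup, to show why
# an unknown model_type raises KeyError and how remapping it to a known
# base architecture sidesteps the lookup (toy mapping, not the real one).
CONFIG_MAPPING = {"gpt_neox": "GPTNeoXConfig"}

def resolve_config(config_dict):
    model_type = config_dict["model_type"]
    # Workaround: fall back to the base architecture for the reward-model variant.
    if model_type == "gpt_neox_reward_model":
        model_type = "gpt_neox"
    return CONFIG_MAPPING[model_type]  # raises KeyError for unknown types

print(resolve_config({"model_type": "gpt_neox_reward_model"}))  # GPTNeoXConfig
```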
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22791/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22790
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22790/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22790/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22790/events
|
https://github.com/huggingface/transformers/issues/22790
| 1,669,654,480
|
I_kwDOCUB6oc5jhOfQ
| 22,790
|
DeBERTa models produce nonsense fill-mask output
|
{
"login": "mawilson1234",
"id": 5022150,
"node_id": "MDQ6VXNlcjUwMjIxNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5022150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mawilson1234",
"html_url": "https://github.com/mawilson1234",
"followers_url": "https://api.github.com/users/mawilson1234/followers",
"following_url": "https://api.github.com/users/mawilson1234/following{/other_user}",
"gists_url": "https://api.github.com/users/mawilson1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mawilson1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mawilson1234/subscriptions",
"organizations_url": "https://api.github.com/users/mawilson1234/orgs",
"repos_url": "https://api.github.com/users/mawilson1234/repos",
"events_url": "https://api.github.com/users/mawilson1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/mawilson1234/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey! Did you find a solution/cause yet? I am experiencing the same issues on debertav3-base even though I pretrained the model on my own training data...",
"No dice, but I discovered the problem is worse than than just mask filling; it doesn't even produce the right thing for given tokens.\r\n```python\r\n>>> import torch\r\n>>> from transformers import AutoModelForMaskedLM, AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained('deberta-v3-base')\r\n>>> model = AutoModelForMaskedLM.from_pretrained('deberta-v3-base')\r\n>>> text = 'Do you [MASK] the muffin man?'\r\n>>> inputs = tokenizer(text, return_tensors='pt')\r\n\r\n# double checking\r\n>>> tokenizer.batch_decode(inputs['input_ids'])\r\n# all good\r\n['Do you [MASK] the muffin man?']\r\n\r\n>>> with torch.no_grad():\r\n>>> outputs = model(**inputs)\r\n>>> tokenizer.batch_decode(torch.argmax(outputs.logits, dim=-1))\r\n# ???\r\n['ût slimnatch Laughternatchilia Arrijailût']\r\n```\r\nI'd think it was something with the tokenizer, but for you saying you had the same issue with your pre-trained model. Do you know whether the same thing happens for all positions for your model?\r\n\r\nEdit:\r\nFound #18674 that references this. Looks like it's been around for a while and it's being worked on.",
"Hey! I just came back from holidays, will have a look when I can, note that Deberta should be refactored soon, follow #22105 if you want to know more. This will be looked at when fixing! ",
"Hope to get to this by the end of the summer! ",
"I'm leaving this open to the community, did not have the bandwidth to adress it :( "
] | 1,681
| 1,707
| null |
NONE
| null |
### System Info
Python version: 3.8.15
Transformers version: 4.24.0
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Both on the HF website and using transformers in Python scripts/interpreter, the DeBERTa models seem to produce nonsense outputs in a fill-mask task. This is demonstrated below using a fill-mask pipeline for ease of reproduction, but the same thing happens even when calling the models manually and inspecting the logits. I demonstrate with one model, but the other `microsoft/deberta` masked language models appear to have the same issue (i.e., not the ones fine-tuned on mnli or whatever, which I wouldn't test against).
```python
>>> from transformers import pipeline
>>> test_sentence = 'Do you [MASK] the muffin man?'
# for comparison
>>> bert = pipeline('fill-mask', model = 'bert-base-uncased')
>>> print('\n'.join([d['sequence'] for d in bert(test_sentence)]))
do you know the muffin man?
do you remember the muffin man?
do you mean the muffin man?
do you see the muffin man?
do you recognize the muffin man?
>>> deberta = pipeline('fill-mask', model = 'microsoft/deberta-v3-large')
>>> print('\n'.join([d['sequence'] for d in deberta(test_sentence)]))
Do you Moisturizing the muffin man?
Do you Kagan the muffin man?
Do youULA the muffin man?
Do you闘 the muffin man?
Do you aplica the muffin man?
```
Here's a screenshot from the HF website for the same model (`microsoft/deberta-v3-large`):

Based on the paper and the documentation on the model cards, it seems like these should be able to be used for masked language modeling out of the box since they were pre-trained on it, but they're clearly not doing a good job of it. Am I missing something about why these models shouldn't be used for MLM without fine-tuning, or is there a bug with them?
### Expected behavior
I'd expect sensible predictions for masked token locations (assuming these models can indeed be used for that without additional fine-tuning).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22790/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22789
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22789/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22789/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22789/events
|
https://github.com/huggingface/transformers/issues/22789
| 1,669,608,647
|
I_kwDOCUB6oc5jhDTH
| 22,789
|
It's possible to create a chunk pipeline with an invalid stride/max-length combination
|
{
"login": "boyleconnor",
"id": 6520892,
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boyleconnor",
"html_url": "https://github.com/boyleconnor",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> ChunkPipeline should follow the philosophy of failing as early as possible with a clear message to the user. Therefore any initialization of a ChunkPipeline with a stride that is too long for its model's maximum length (plus the special tokens added by its tokenizer) should result in an exception.\r\n\r\nFully agree with that statement.\r\n\r\n> Fixing this would unfortunately be more complicated than just checking stride < tokenizer.model_max_length. \r\n\r\n\r\nThis is completely true, however I think if we can use it and already fix 90% of the issues by raising early that's already a huge win.\r\n\r\nI think raising early during `_sanitize_parameters` makes complete sense to me.\r\n\r\n@sgugger for a core maintainer's opinion. Wdyt ?\r\n\r\n@boyleconnor would you be open to create a PR for it ?",
"Yes, the simple check should cover most use cases already and would be a nice addition.",
"So I take it there is no way to extract from the tokenizer how many special tokens it will add to each window (that you are aware of @sgugger and @Narsil)?\r\n\r\n@Narsil I can open a PR some time this weekend"
] | 1,681
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.16
- Huggingface_hub version: 0.12.1
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When the user initializes a `ChunkPipeline` (e.g. for token classification) with a `stride` that is too high for the given tokenizer's `model_max_length`, the pipeline initializes without raising any errors. Ideally, it should not let the user initialize a pipeline with invalid parameters.
#### Concrete example
If we attempt to initialize a pipeline with a `stride` equal to its tokenizer's `model_max_length` (in the case of GPT-2, `1024`) it will initialize without any warnings or errors (which is bad):
```python
from transformers import pipeline
token_classifier = pipeline('token-classification-sliding-window', model='nguyenkhoa2407/gpt2-NER-favsbot', aggregation_strategy='FIRST', stride=1024)
```
however, when we go to use the pipeline (in this case, on some sufficiently long nonsense text):
```python
text = 2000 * 'hello, '
output = token_classifier(text)
```
it will finally inform the user that this is an invalid value for `stride` by giving them the following error:
```
PanicException: assertion failed: stride < max_len
```
Note that this error *won't* be triggered if the text is shorter than the tokenizer's `model_max_length`:
```python
text = 20 * 'hello, '
output = token_classifier(text)
# this executes without raising any error or warning
```
thus allowing the bug to go undetected until the user inputs a text of sufficient length.
#### Complication: special tokens
Fixing this would unfortunately be more complicated than just checking `stride < tokenizer.model_max_length`. Since tokenizer's `stride`s account for special characters, the true value of `max_len` is `tokenizer.model_max_length - <num_special_tokens>`, where `<num_special_tokens>` is the number of special tokens added by the tokenizer to each chunk. E.g. the following:
```python
token_classifier = pipeline('token-classification', model='dslim/bert-base-NER', aggregation_strategy='FIRST', stride=510)
text = 2000 * 'hello, '
output = token_classifier(text)
```
will also fail (with the same `PanicException` as above) because while BERT's `model_max_length` is 512 (longer than the stride of `510`), the tokenizer adds 2 special tokens (namely, `[CLS]` and `[SEP]`) to each chunk.
### Expected behavior
`ChunkPipeline` should follow the philosophy of failing as early as possible with a clear message to the user. Therefore any initialization of a `ChunkPipeline` with a `stride` that is too long for its model's maximum length (plus the special tokens added by its tokenizer) should result in an exception.
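The early check could be sketched roughly as follows. This is a hypothetical helper, not the actual pipeline code; it assumes the tokenizer exposes `model_max_length` and `num_special_tokens_to_add()`, and uses a minimal stand-in tokenizer object so the sketch is self-contained:
```python
class StandInTokenizer:
    """Minimal stand-in mimicking a BERT-like tokenizer: max length 512, 2 special tokens."""
    model_max_length = 512

    def num_special_tokens_to_add(self, pair=False):
        return 2  # e.g. [CLS] and [SEP] added to each chunk

def check_stride(tokenizer, stride):
    """Raise early if `stride` cannot fit in a single chunk."""
    num_special = tokenizer.num_special_tokens_to_add(pair=False)
    effective_max = tokenizer.model_max_length - num_special
    if stride >= effective_max:
        raise ValueError(
            f"stride ({stride}) must be smaller than the effective maximum chunk length "
            f"({effective_max} = {tokenizer.model_max_length} model_max_length "
            f"- {num_special} special tokens)"
        )

check_stride(StandInTokenizer(), 509)  # fine: 509 < 510
# check_stride(StandInTokenizer(), 510) would raise, matching the BERT example above
```
Something along these lines could run in `_sanitize_parameters` so the failure happens at pipeline construction rather than on the first long input.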
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22789/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22788
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22788/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22788/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22788/events
|
https://github.com/huggingface/transformers/pull/22788
| 1,669,601,282
|
PR_kwDOCUB6oc5OZXT8
| 22,788
|
Feature to convert videomae huge and small finetuned on kinetics and ssv2 added to the videomae to pytorch converter
|
{
"login": "sandstorm12",
"id": 12384476,
"node_id": "MDQ6VXNlcjEyMzg0NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12384476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sandstorm12",
"html_url": "https://github.com/sandstorm12",
"followers_url": "https://api.github.com/users/sandstorm12/followers",
"following_url": "https://api.github.com/users/sandstorm12/following{/other_user}",
"gists_url": "https://api.github.com/users/sandstorm12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sandstorm12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sandstorm12/subscriptions",
"organizations_url": "https://api.github.com/users/sandstorm12/orgs",
"repos_url": "https://api.github.com/users/sandstorm12/repos",
"events_url": "https://api.github.com/users/sandstorm12/events{/privacy}",
"received_events_url": "https://api.github.com/users/sandstorm12/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge Uploaded the huge-kinetics, small-kinetics, and small-ssv2 models to the HF model hub under the following names:\r\n\r\n1. sandstorm12/videomae-huge-finetuned-kinetics\r\n2. sandstorm12/videomae-small-finetuned-kinetics\r\n3. sandstorm12/videomae-small-finetuned-ssv2\r\n\r\nLet me know if anything else is needed."
] | 1,681
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the ability to convert VideoMAE huge and small checkpoints to Hugging Face-compatible PyTorch models. The following models are added to the converter:
1. VideoMAE huge fine-tuned Kinetics
2. VideoMAE small fine-tuned Kinetics
3. VideoMAE small fine-tuned SSV2
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22788/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22788",
"html_url": "https://github.com/huggingface/transformers/pull/22788",
"diff_url": "https://github.com/huggingface/transformers/pull/22788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22788.patch",
"merged_at": 1682107986000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22787
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22787/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22787/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22787/events
|
https://github.com/huggingface/transformers/pull/22787
| 1,669,583,405
|
PR_kwDOCUB6oc5OZT8b
| 22,787
|
Add `top_k` argument to post-process of conditional/deformable-DETR
|
{
"login": "CreatlV",
"id": 6471651,
"node_id": "MDQ6VXNlcjY0NzE2NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6471651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CreatlV",
"html_url": "https://github.com/CreatlV",
"followers_url": "https://api.github.com/users/CreatlV/followers",
"following_url": "https://api.github.com/users/CreatlV/following{/other_user}",
"gists_url": "https://api.github.com/users/CreatlV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CreatlV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CreatlV/subscriptions",
"organizations_url": "https://api.github.com/users/CreatlV/orgs",
"repos_url": "https://api.github.com/users/CreatlV/repos",
"events_url": "https://api.github.com/users/CreatlV/events{/privacy}",
"received_events_url": "https://api.github.com/users/CreatlV/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@CreatlV Thanks for opening this PR! \r\n\r\nHaving the processor be compatible with the model is definitely something we want and updating the code to make it less brittle is a great initiative. At the moment, with `min`, if `num_queries *num_classes < 100`, then the model will return all of the boxes. I think we could adapt this further to make it scale `k` according to the model. Specifically, adding an argument e.g. `k` to the method, which defaults to `num_queries` or its current default. We can keep the `min` upper bound to keep it safe. \r\n\r\n@NielsRogge For the number of boxes returned, the default `k` value for this model is 100 (rather than 300). Was there a reason for setting it to this value? (I'm guessing consistency with other detr models?)\r\n",
"@amyeroberts it was done to reflect the original code, as linked in his message. The probabilities get reshaped to `(batch_size, num_queries*num_labels)` and then the top 100 values (highest scoring queries) are taken for each example in the batch. However, since Deformable DETR uses 300 queries by default, this will always be > 100. But when you train the model from scratch with a custom number of queries, this would indeed raise an error.\r\n\r\nMaking this more general makes sense. Note that we typically filter them based on a certain threshold; we first filter the 300 queries to get the top 100 recognized objects, and then set a threshold like 0.9 to only get the predictions with a score higher than 0.9. Both the `threshold` and the `top_k` value can both be seen as postprocessing hyperparameters. However I'm not sure `top_k` is general enough as it seems DETR-specific",
"I added `top_k` as an argument to the post-processing functions of conditional/deformable-DETR that used them. With the default value unchanged from previously. The top_k value for `post_process` of conditional DETR is 300, compared to 100 of the other functions, is this intentional @NielsRogge ? ",
"Thanks again for adding this improvement and iterating! 🎉 "
] | 1,681
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
# What does this PR do?
The current object-detection post-processing methods of deformable and conditional DETR assume that the number of classes times the number of object queries is greater than 100. This reflects the original code in the [deformable-DETR repository](https://github.com/fundamentalvision/Deformable-DETR/blob/11169a60c33333af00a4849f1808023eba96a931/models/deformable_detr.py#L412). However, this limits the flexibility of training on datasets with fewer classes/object queries. This PR updates the post-processing code for object detection so it does not break if `n_classes * n_object_queries < 100`.
This PR suggests adding `top_k` argument to post-process functions of conditional/deformable-DETR with the default value of the previously hard-coded value.
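For reference, a simplified sketch of the kind of bounded top-k selection this PR proposes — shapes and names are illustrative, not the exact library code:
```python
import torch

def select_top_k(logits, top_k=100):
    """Pick up to `top_k` highest-scoring (query, label) pairs per batch element."""
    prob = logits.sigmoid()
    batch_size, num_queries, num_labels = prob.shape
    # bound k so small models (num_queries * num_labels < top_k) don't crash
    k = min(top_k, num_queries * num_labels)
    scores, topk_indexes = torch.topk(prob.reshape(batch_size, -1), k, dim=1)
    topk_boxes = torch.div(topk_indexes, num_labels, rounding_mode="floor")  # query index
    labels = topk_indexes % num_labels
    return scores, topk_boxes, labels

scores, boxes, labels = select_top_k(torch.randn(2, 10, 3), top_k=100)
# k = min(100, 30) = 30, so each returned tensor has shape (2, 30)
```
Keeping the `min` bound while exposing `top_k` as an argument preserves the old default behavior for 300-query models and avoids the crash for custom configurations.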
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts, as you added these models do you think this approach is a reasonable addition?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22787/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22787",
"html_url": "https://github.com/huggingface/transformers/pull/22787",
"diff_url": "https://github.com/huggingface/transformers/pull/22787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22787.patch",
"merged_at": 1683796064000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22786
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22786/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22786/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22786/events
|
https://github.com/huggingface/transformers/issues/22786
| 1,669,574,064
|
I_kwDOCUB6oc5jg62w
| 22,786
|
Implement a decode method in transformers.BasicTokenizer
|
{
"login": "jiangwangyi",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwangyi",
"html_url": "https://github.com/jiangwangyi",
"followers_url": "https://api.github.com/users/jiangwangyi/followers",
"following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwangyi/orgs",
"repos_url": "https://api.github.com/users/jiangwangyi/repos",
"events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwangyi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] |
[
"@jiangwy99 The BasicTokenizer class will perform simple string processing e.g. splitting on white spaces. However it doesn't encode the tokens to ids, and so doesn't have a corresponding `decode` method. \r\n\r\ncc @ArthurZucker ",
"@amyeroberts BasicTokenizer has implemented a tokenize function, which converts text into a list of tokens. What I want is a de-tokenize, which converts a list of tokenized tokens into the original text, serving as a dual operation of BasicTokenizer.tokenize",
"@jiangwy99 The BasicTokenizer is just that, a very simple class used for doing basic preprocessing of strings: puncutation splitting, making lower case etc. Adding this functionality is beyond the scope of the current class. \r\n\r\nIf it's something you're still interested in, you're welcome to make a fork of the repo with an implementation and share it here. The forum is [a good place](https://discuss.huggingface.co/) to discuss questions about implementation details. Note: it won't be possible to always exactly recover the original input if `do_lower_case=True`, and special handling of spacing and capitalization with punctuation would be required. "
] | 1,681
| 1,681
| null |
CONTRIBUTOR
| null |
### Feature request
Transformers provides a nice `BasicTokenizer` for basic tokenization when BPE tokenizers are not needed. For data-processing tasks (like data format conversion), it would be helpful to also offer a `decode` method for basic use.
### Motivation
When converting data formats in some data-processing problems, we often need to recover continuous, readable text from a list of tokens.
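Such a detokenizer could be sketched roughly as follows. This is a hypothetical helper, not part of transformers, and exact recovery is impossible once lower-casing or whitespace cleanup has been applied — it only undoes the most common punctuation splits:
```python
import re

def basic_detokenize(tokens):
    """Join tokens into readable text, undoing common punctuation splitting."""
    text = " ".join(tokens)
    text = re.sub(r"\s+([,.!?;:%)\]}])", r"\1", text)  # no space before closing punctuation
    text = re.sub(r"([(\[{])\s+", r"\1", text)          # no space after opening brackets
    return text

basic_detokenize(["Do", "you", "know", "the", "muffin", "man", "?"])
# -> "Do you know the muffin man?"
```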
### Your contribution
None.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22786/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/22785
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22785/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22785/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22785/events
|
https://github.com/huggingface/transformers/pull/22785
| 1,669,524,879
|
PR_kwDOCUB6oc5OZIVW
| 22,785
|
improve(llama): Faster apply_rotary_pos_emb
|
{
"login": "fpgaminer",
"id": 1585817,
"node_id": "MDQ6VXNlcjE1ODU4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1585817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fpgaminer",
"html_url": "https://github.com/fpgaminer",
"followers_url": "https://api.github.com/users/fpgaminer/followers",
"following_url": "https://api.github.com/users/fpgaminer/following{/other_user}",
"gists_url": "https://api.github.com/users/fpgaminer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fpgaminer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fpgaminer/subscriptions",
"organizations_url": "https://api.github.com/users/fpgaminer/orgs",
"repos_url": "https://api.github.com/users/fpgaminer/repos",
"events_url": "https://api.github.com/users/fpgaminer/events{/privacy}",
"received_events_url": "https://api.github.com/users/fpgaminer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Should a similar patch be applied to GPT-NeoX?",
"@neggert I believe it can be added to GPT-NeoX too - very happy to review a PR if you'd like to add! "
] | 1,681
| 1,688
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Faster implementation for `apply_rotary_pos_emb` in `modeling_llama.py`.
Please see issue #22683 for code that verifies the correctness of the change.
NOTE: Not marking as fixing the above issue, as speed is still not as good as before.
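For readers unfamiliar with the operation being optimized, a simplified sketch of rotary position embedding application — close to, but not exactly, the llama modeling code (the gather over `position_ids` is omitted):
```python
import torch

def rotate_half(x):
    """Rotate the last dimension's two halves: (x1, x2) -> (-x2, x1)."""
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin):
    """Apply precomputed cos/sin rotations to query and key tensors."""
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
```
With `cos = 1` and `sin = 0` the rotation is the identity, which is a convenient sanity check for any faster reimplementation.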
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22785/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22785",
"html_url": "https://github.com/huggingface/transformers/pull/22785",
"diff_url": "https://github.com/huggingface/transformers/pull/22785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22785.patch",
"merged_at": 1681741119000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22784
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22784/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22784/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22784/events
|
https://github.com/huggingface/transformers/issues/22784
| 1,669,480,676
|
I_kwDOCUB6oc5jgkDk
| 22,784
|
Example does not work at all
|
{
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@Oxi84 Thanks for reporting this issue. \r\n\r\nCould you share some more information so that we can best help you? Specifically the running environment: copy-paste the info you get from running `transformers-cli env` in your terminal (\"latest version\" is ill-defined). I'm able to run the snippet: \r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\ntokenizer = AutoTokenizer.from_pretrained(\"sileod/deberta-v3-base-tasksource-nli\")\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"sileod/deberta-v3-base-tasksource-nli\")\r\n```\r\nwithout any issue on the main branch - 4.29.0.dev0. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### System Info
latest version
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried to run this as in the instructions at
https://huggingface.co/sileod/deberta-v3-base-tasksource-nli
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sileod/deberta-v3-base-tasksource-nli")
model = AutoModelForSequenceClassification.from_pretrained("sileod/deberta-v3-base-tasksource-nli")
```
I get an error that the tokenizer does not exist.
Then I tried the author's code:
```python
from tasknet import Adapter
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_name = "sileod/deberta-v3-base-tasksource-nli"
tokenizer3 = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model3 = AutoModelForMultipleChoice.from_pretrained(model_name, ignore_mismatched_sizes=True, cache_dir="/root/Desktop/models/", low_cpu_mem_usage=True)
adapter = Adapter.from_pretrained(model_name.replace('nli', 'adapters'))
model_for_rlhf = adapter.adapt_model_to_task(model3, 'glue/cola')  # glue/cola  # hh-rlhf

if 1 == 1:
    def cola(sentences1):
        import torch
        import time
        beg = time.time()
        inputs = tokenizer3(sentences1, return_tensors="pt", padding=True, truncation=True, max_length=40).to("cpu")
        with torch.no_grad():
            outputs = model_for_rlhf(**inputs)
        the_cola_scores = []
        print("outputs.logits", outputs.logits)
        for aout in outputs.logits:
            cola_prediction = torch.nn.functional.softmax(aout, dim=-1)[1].item()
            the_cola_scores.append(round(cola_prediction, 2))
        return the_cola_scores

import time
timea = time.time()
sentences1 = ["I likes apples", "I love apples."]
cola_scores = cola(sentences1)
print(cola_scores, time.time() - timea)
```
I get the error:
```
RuntimeError: shape '[-1, 6]' is invalid for input of size 4
```
### Expected behavior
It should work, but many models here do not have instructions on how to run them.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22784/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22783
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22783/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22783/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22783/events
|
https://github.com/huggingface/transformers/pull/22783
| 1,669,307,348
|
PR_kwDOCUB6oc5OYc0Y
| 22,783
|
🌐 [i18n-KO] Translated `tasks/summarization.mdx` to Korean
|
{
"login": "sim-so",
"id": 96299403,
"node_id": "U_kgDOBb1piw",
"avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sim-so",
"html_url": "https://github.com/sim-so",
"followers_url": "https://api.github.com/users/sim-so/followers",
"following_url": "https://api.github.com/users/sim-so/following{/other_user}",
"gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sim-so/subscriptions",
"organizations_url": "https://api.github.com/users/sim-so/orgs",
"repos_url": "https://api.github.com/users/sim-so/repos",
"events_url": "https://api.github.com/users/sim-so/events{/privacy}",
"received_events_url": "https://api.github.com/users/sim-so/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Team Pseudo Lab,\r\nI'm happy to inform you I finished the translation!\r\nCould you review this PR?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Could you review this PR? 😄 \r\n@sgugger, @ArthurZucker, @eunseojo"
] | 1,681
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Translated the `tasks/summarization.mdx` file of the documentation to Korean.
Thank you in advance for your review!
Part of #20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
This is a work in progress.
Could you review this PR when I finish this work?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22783/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22783/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22783",
"html_url": "https://github.com/huggingface/transformers/pull/22783",
"diff_url": "https://github.com/huggingface/transformers/pull/22783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22783.patch",
"merged_at": 1682341383000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22782
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22782/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22782/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22782/events
|
https://github.com/huggingface/transformers/issues/22782
| 1,669,206,690
|
I_kwDOCUB6oc5jfhKi
| 22,782
|
A minor change in the `decoder_config` of T5Model
|
{
"login": "uakarsh",
"id": 55104596,
"node_id": "MDQ6VXNlcjU1MTA0NTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/55104596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uakarsh",
"html_url": "https://github.com/uakarsh",
"followers_url": "https://api.github.com/users/uakarsh/followers",
"following_url": "https://api.github.com/users/uakarsh/following{/other_user}",
"gists_url": "https://api.github.com/users/uakarsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uakarsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uakarsh/subscriptions",
"organizations_url": "https://api.github.com/users/uakarsh/orgs",
"repos_url": "https://api.github.com/users/uakarsh/repos",
"events_url": "https://api.github.com/users/uakarsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/uakarsh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"My mistake, sorry for the issue"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
In the line mentioned below,
https://github.com/huggingface/transformers/blob/fb3aa06cb673d0e2774a2924747d3290135c09cc/src/transformers/models/t5/modeling_t5.py#L1339,
I believe there should be an update in the `decoder_config`
The below is the change, I want to suggest
```python
decoder_config = copy.deepcopy(config)
decoder_config.update({"is_decoder": True})
```
This is because we are using the decoder configuration, and the same is suggested when loading from the `PreTrained` class.
Not sure whom to tag, @sgugger @NielsRogge, could you address it?
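For clarity, here is a minimal self-contained sketch of the suggested pattern (the `Config` class is a hypothetical stand-in for `PretrainedConfig`, whose `update` method takes a dict of attributes); deep-copying first keeps the original config untouched:

```python
import copy

class Config:
    """Hypothetical stand-in for a transformers PretrainedConfig."""
    def __init__(self, is_decoder=False, num_layers=6):
        self.is_decoder = is_decoder
        self.num_layers = num_layers

    def update(self, config_dict):
        # mirrors PretrainedConfig.update, which takes a dict of attributes
        for key, value in config_dict.items():
            setattr(self, key, value)

config = Config()
decoder_config = copy.deepcopy(config)
decoder_config.update({"is_decoder": True})

assert config.is_decoder is False       # original config untouched
assert decoder_config.is_decoder is True
```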
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22782/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22781
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22781/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22781/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22781/events
|
https://github.com/huggingface/transformers/issues/22781
| 1,669,167,463
|
I_kwDOCUB6oc5jfXln
| 22,781
|
Unable to import transformers.models.bert.modeling_tf_bert on macOS?
|
{
"login": "talhakabakus",
"id": 444482,
"node_id": "MDQ6VXNlcjQ0NDQ4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/444482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talhakabakus",
"html_url": "https://github.com/talhakabakus",
"followers_url": "https://api.github.com/users/talhakabakus/followers",
"following_url": "https://api.github.com/users/talhakabakus/following{/other_user}",
"gists_url": "https://api.github.com/users/talhakabakus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talhakabakus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talhakabakus/subscriptions",
"organizations_url": "https://api.github.com/users/talhakabakus/orgs",
"repos_url": "https://api.github.com/users/talhakabakus/repos",
"events_url": "https://api.github.com/users/talhakabakus/events{/privacy}",
"received_events_url": "https://api.github.com/users/talhakabakus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@talhakabakus, thanks for raising this issue. \r\n\r\nSo that we can best help you, could you provide some additional information: \r\n* Environment: Copy-paste the output of `transformers-cli env` here\r\n* The error and full trackback encountered",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### System Info
```
transformers=4.28.0
tensorflow-macos=2.9.0
python=3.10
os=macOS Ventura 13.3.1
BERT_model=uncased_L-12_H-768_A-12
```
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Simply run the following code snippet:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained(BERT_PATH)
model = TFBertModel.from_pretrained(BERT_PATH)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
resp = model(encoded_input)
print(resp)
```
### Expected behavior
Retrieve the output generated by the BERT model.
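As a side note, when an import of an optional backend module fails on a given platform, it can help to check whether the module is resolvable at all before triggering the crash. A small stdlib sketch (`module_importable` is a hypothetical helper, demonstrated here on stdlib modules rather than transformers):

```python
import importlib.util

def module_importable(name):
    """Hypothetical helper: True if a spec for the module can be found,
    without actually importing it (useful for optional backends like TF)."""
    return importlib.util.find_spec(name) is not None

print(module_importable("json"))                 # stdlib module: True
print(module_importable("no_such_backend_xyz"))  # missing module: False
```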
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22781/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22780
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22780/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22780/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22780/events
|
https://github.com/huggingface/transformers/issues/22780
| 1,668,941,536
|
I_kwDOCUB6oc5jegbg
| 22,780
|
msgpack.exceptions.ExtraData: unpack(b) received extra data.
|
{
"login": "alhuri",
"id": 46427957,
"node_id": "MDQ6VXNlcjQ2NDI3OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/46427957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alhuri",
"html_url": "https://github.com/alhuri",
"followers_url": "https://api.github.com/users/alhuri/followers",
"following_url": "https://api.github.com/users/alhuri/following{/other_user}",
"gists_url": "https://api.github.com/users/alhuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alhuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alhuri/subscriptions",
"organizations_url": "https://api.github.com/users/alhuri/orgs",
"repos_url": "https://api.github.com/users/alhuri/repos",
"events_url": "https://api.github.com/users/alhuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/alhuri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @alhuri! Echoing what I suggested in https://github.com/huggingface/transformers/issues/22673#issuecomment-1517714740 - this is probably best asked on the Italian Hybrid CLIP repo (since they use a custom implementation of JAX CLIP there). Unless you have a code snippet that I can use to reproduce this error with just transformers? In which case I'd be able to take a deeper dive!",
"Closing since this issue is related to the Italian CLIP repo (not transformers!)"
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### System Info
- `transformers` version: 4.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
- Models: FlaxHybridCLIP
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to load a trained FlaxHybridCLIP model from a folder that contains the following files:
- config.json
- flax_model.msgpack
I attempted to load it using the below:
```
if args.run_from_checkpoint is not None:
with open(f"{args.run_from_checkpoint}/config.json", "r") as fp:
config_dict = json.load(fp)
config_dict["vision_config"]["model_type"] = "clip"
config = HybridCLIPConfig(**config_dict)
model = FlaxHybridCLIP.from_pretrained(
args.run_from_checkpoint,
seed=training_args.seed,
dtype=getattr(jnp, model_args.dtype),
config=config,
freeze_backbones=args.freeze_backbones
)
```
But, I encountered the following error:
```
text_config_dict is None. Initializing the CLIPTextConfig with default values.
vision_config_dict is None. initializing the CLIPVisionConfig with default values.
loading weights file freeze/flax_model.msgpack
Traceback (most recent call last):
File "run_hybrid_clip.py", line 832, in <module>
main()
File "run_hybrid_clip.py", line 529, in main
model = FlaxHybridCLIP.from_pretrained(
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_flax_utils.py", line 350, in from_pretrained
state = from_bytes(cls, state_f.read())
File "/home/ubuntu/.local/lib/python3.8/site-packages/flax/serialization.py", line 359, in from_bytes
state_dict = msgpack_restore(encoded_bytes)
File "/home/ubuntu/.local/lib/python3.8/site-packages/flax/serialization.py", line 342, in msgpack_restore
return msgpack.unpackb(
File "msgpack/_unpacker.pyx", line 201, in msgpack._cmsgpack.unpackb
msgpack.exceptions.ExtraData: unpack(b) received extra data.
```
I used the modified Italian hybrid CLIP scripts [here](https://github.com/clip-italian/clip-italian/tree/master/hybrid_clip)
### Expected behavior
The model loads successfully and can be fine-tuned with an unfrozen backbone.
Thanks
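For context, an `ExtraData` error generally means the file contains bytes beyond a single serialized payload (e.g. a truncated, corrupted, or concatenated checkpoint). A rough stdlib analogue of the failure mode, using `json` instead of `msgpack` since both reject trailing data after one complete object:

```python
import json

payload = json.dumps({"params": [1, 2, 3]})

# a clean payload deserializes fine
assert json.loads(payload) == {"params": [1, 2, 3]}

# trailing bytes after the object trigger an "Extra data" error,
# analogous to msgpack's ExtraData on a damaged flax_model.msgpack
try:
    json.loads(payload + "trailing-bytes")
except json.JSONDecodeError as err:
    print(err.msg)  # "Extra data"
```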
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22780/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22779
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22779/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22779/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22779/events
|
https://github.com/huggingface/transformers/pull/22779
| 1,668,929,748
|
PR_kwDOCUB6oc5OXO5C
| 22,779
|
Move labels to the same device as logits for Whisper
|
{
"login": "oscar-garzon",
"id": 37828243,
"node_id": "MDQ6VXNlcjM3ODI4MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/37828243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oscar-garzon",
"html_url": "https://github.com/oscar-garzon",
"followers_url": "https://api.github.com/users/oscar-garzon/followers",
"following_url": "https://api.github.com/users/oscar-garzon/following{/other_user}",
"gists_url": "https://api.github.com/users/oscar-garzon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oscar-garzon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oscar-garzon/subscriptions",
"organizations_url": "https://api.github.com/users/oscar-garzon/orgs",
"repos_url": "https://api.github.com/users/oscar-garzon/repos",
"events_url": "https://api.github.com/users/oscar-garzon/events{/privacy}",
"received_events_url": "https://api.github.com/users/oscar-garzon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes issue #22561 by moving labels to the same device as logits for the `Whisper` model.
@sgugger Could you please review?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22779/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22779",
"html_url": "https://github.com/huggingface/transformers/pull/22779",
"diff_url": "https://github.com/huggingface/transformers/pull/22779.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22779.patch",
"merged_at": 1681513721000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22778
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22778/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22778/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22778/events
|
https://github.com/huggingface/transformers/issues/22778
| 1,668,672,729
|
I_kwDOCUB6oc5jdezZ
| 22,778
|
LLama RuntimeError: CUDA error: device-side assert triggered
|
{
"login": "abdoelsayed2016",
"id": 27821589,
"node_id": "MDQ6VXNlcjI3ODIxNTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/27821589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdoelsayed2016",
"html_url": "https://github.com/abdoelsayed2016",
"followers_url": "https://api.github.com/users/abdoelsayed2016/followers",
"following_url": "https://api.github.com/users/abdoelsayed2016/following{/other_user}",
"gists_url": "https://api.github.com/users/abdoelsayed2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdoelsayed2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdoelsayed2016/subscriptions",
"organizations_url": "https://api.github.com/users/abdoelsayed2016/orgs",
"repos_url": "https://api.github.com/users/abdoelsayed2016/repos",
"events_url": "https://api.github.com/users/abdoelsayed2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdoelsayed2016/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@abdoelsayed2016 Thanks for raising this issue. Could you share a minimal code snippet to enable us to reproduce the error? \r\n\r\nJust from the traceback alone, it seems that the issue is CUDA related, rather than the transformers model. The Llama model has been under active development and was part of the official version release yesterday. I'd also suggest updating to the most version of the code to make sure you have any possible updates which might have been added. ",
"@abdoelsayed2016 hi, any suggestion? same error here when I want to train llama with lora.",
"@j-suyako did you add token to the tokenizer?",
"yes, I follow the [alpaca](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py) to create the dataset, but I forget to resize the tokenizer length. Thanks for your reply!"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-4.18.0-372.16.1.el8_6.0.1.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
0%| | 0/1524 [00:00<?, ?it/s]Traceback (most recent call last):
File "alpaca-lora/finetune.py", line 234, in <module>
fire.Fire(train)
File ".local/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/c703/c7031420/.local/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/c703/c7031420/.local/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "alpaca-lora/finetune.py", line 203, in train
trainer.train()
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/trainer.py", line 1639, in train
return inner_training_loop(
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/trainer.py", line 1906, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/trainer.py", line 2652, in training_step
loss = self.compute_loss(model, inputs)
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/trainer.py", line 2684, in compute_loss
outputs = model(**inputs)
File ".conda/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".conda/envs/llama/lib/python3.9/site-packages/peft/peft_model.py", line 575, in forward
return self.base_model(
File ".conda/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".conda/envs/llama/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 765, in forward
outputs = self.model(
File ".conda/envs/llama/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".conda/envs/llama/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 574, in forward
attention_mask = self._prepare_decoder_attention_mask(
File ".conda/envs/llama/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 476, in _prepare_decoder_attention_mask
combined_attention_mask = _make_causal_mask(
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
### Description
I am interested in working with the Arabic language. I have tried adding all the tokens to the LLama tokenizer, and the tokenizer seems to work fine. However, during training, I encountered an error. I am looking for a solution to resolve this error.
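As the comments note, the usual cause is adding tokens to the tokenizer without resizing the model's embedding matrix, so a token id indexes past the embedding table; on GPU this surfaces as an opaque device-side assert. A CPU analogue of the failure and the fix (plain Python, no torch — `resize_table` is a hypothetical stand-in for `model.resize_token_embeddings(len(tokenizer))`):

```python
def embed(token_ids, table):
    # raises IndexError when a token id exceeds the table size --
    # the CPU analogue of CUDA's device-side assert
    return [table[i] for i in token_ids]

def resize_table(table, new_size, fill=0.0):
    # hypothetical stand-in for model.resize_token_embeddings(len(tokenizer))
    return table + [fill] * (new_size - len(table))

table = [0.1, 0.2, 0.3]  # embedding table for a vocab of size 3
new_token_id = 3         # newly added token, out of range

try:
    embed([new_token_id], table)
except IndexError:
    print("out-of-range token id")

table = resize_table(table, new_size=4)
print(embed([new_token_id], table))  # [0.0]
```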
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22778/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22777
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22777/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22777/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22777/events
|
https://github.com/huggingface/transformers/pull/22777
| 1,668,632,714
|
PR_kwDOCUB6oc5OWNE9
| 22,777
|
Remove accelerate from tf test reqs
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"While the tests aren't being ran here, I did run it on the branch I'm working on (that doesn't touch tf code) and it still passes (and passes now): \r\n\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Per conversation with @Rocketknight1, not merging this because it actually makes the TF example tests fail. However since they were not run here, we can't test that. (And as a result shows a bug)",
"This should work now after you rebase on main!"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR removes `accelerate` from the tf example test requirements, as I believe it's unused and causing issues with the `Accelerate` integration.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1, @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22777/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22777",
"html_url": "https://github.com/huggingface/transformers/pull/22777",
"diff_url": "https://github.com/huggingface/transformers/pull/22777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22777.patch",
"merged_at": 1681749081000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22776
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22776/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22776/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22776/events
|
https://github.com/huggingface/transformers/pull/22776
| 1,668,632,254
|
PR_kwDOCUB6oc5OWM-V
| 22,776
|
Indexing fix - CLIP checkpoint conversion
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
In the conversion script, there was an indexing error, where the image and text logits were taken as the 2nd and 3rd outputs of the HF model. However, this is only the case if the model returns a loss.
This PR updates the script to explicitly take the named parameters.
Fixes #22739
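This bug class is easy to reproduce in isolation: when a model's output tuple gains an optional leading element (the loss), fixed positional indices silently point at different tensors, which is why the fix reads named outputs instead. A minimal, hypothetical sketch:

```python
# hypothetical sketch: positional indexing into a model output tuple
# breaks when an optional leading element (the loss) is present
def model_outputs(return_loss):
    outputs = ("logits_per_image", "logits_per_text", "embeds")
    return (("loss",) + outputs) if return_loss else outputs

with_loss = model_outputs(return_loss=True)
without_loss = model_outputs(return_loss=False)

assert with_loss[1] == "logits_per_image"    # positional index valid here...
assert without_loss[1] == "logits_per_text"  # ...same index, different element
```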
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22776/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22776",
"html_url": "https://github.com/huggingface/transformers/pull/22776",
"diff_url": "https://github.com/huggingface/transformers/pull/22776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22776.patch",
"merged_at": 1681495968000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22775
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22775/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22775/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22775/events
|
https://github.com/huggingface/transformers/issues/22775
| 1,668,568,298
|
I_kwDOCUB6oc5jdFTq
| 22,775
|
trainer.is_model_parallel seems conflict with deepspeed
|
{
"login": "ehion",
"id": 11802981,
"node_id": "MDQ6VXNlcjExODAyOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/11802981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehion",
"html_url": "https://github.com/ehion",
"followers_url": "https://api.github.com/users/ehion/followers",
"following_url": "https://api.github.com/users/ehion/following{/other_user}",
"gists_url": "https://api.github.com/users/ehion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehion/subscriptions",
"organizations_url": "https://api.github.com/users/ehion/orgs",
"repos_url": "https://api.github.com/users/ehion/repos",
"events_url": "https://api.github.com/users/ehion/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehion/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"https://huggingface.co/transformers/v4.6.0/_modules/transformers/trainer.html",
"and when i test a smaller model which could fit in one gpu memory and does not need model parallel \r\nby running :\r\ndeepspeed \"script + config\" --deepspeed **.json (with num_gpus 2 )\r\nno error occures anymore",
"cc @stas00 ",
"Please provide a reproducible example that I can run to see what the problem is and I would be happy to look at it - I can't do it with what you shared. Thank you.",
"@stas00 this problem occurs when I instantiate the model with the parameter device_map=\"auto\", and there is no bug when I do not use device_map. device_map uses model parallelism when the model is too big for one GPU to hold in a multi-GPU training environment. You can test it just by using run_clm.py in transformers with a large model (one that a single GPU's memory cannot hold, e.g. llama7b) instantiated with device_map=\"auto\":\r\n```\r\nmodel = LlamaForCausalLM.from_pretrained(model_args.model_name_or_path,torch_dtype=torch.float32,\r\n device_map=\"auto\",\r\n load_in_8bit=False)\r\n``` \r\nso the previous bug may be a conflict between deepspeed and transformers: transformers builds model parallelism by default in my setting when device_map=\"auto\", but the deepspeed backend does not",
"> @stas00 this problem occurs when I instantiate the model with the parameter device_map=\"auto\", and there is no bug when I do not use device_map. device_map uses model parallelism when the model is too big for one GPU to hold in a multi-GPU training environment. You can test it just by using run_clm.py in transformers with a large model (one that a single GPU's memory cannot hold, e.g. llama7b) instantiated with device_map=\"auto\":\r\n> \r\n> ```\r\n> model = LlamaForCausalLM.from_pretrained(model_args.model_name_or_path,torch_dtype=torch.float32,\r\n> device_map=\"auto\",\r\n> load_in_8bit=False)\r\n> ```\r\n> \r\n> so the previous bug may be a conflict between deepspeed and transformers: transformers builds model parallelism by default in my setting when device_map=\"auto\", but the deepspeed backend does not\r\n\r\nHey, the same situation happened to me, did you solve the problem?",
"`device_map=\"auto\"` and DeepSpeed are incompatible. You cannot use them together."
] | 1,681
| 1,685
| 1,681
|
NONE
| null |
### System Info
accelerate 0.18.0
aiofiles 23.1.0
aiohttp 3.8.4
aiosignal 1.3.1
altair 4.2.2
anyio 3.6.2
asttokens 2.2.1
async-timeout 4.0.2
attrs 22.2.0
backcall 0.2.0
backports.functools-lru-cache 1.6.4
bcrypt 4.0.1
bitsandbytes 0.37.2
certifi 2022.12.7
cfgv 3.3.1
chardet 5.1.0
charset-normalizer 3.0.1
click 8.1.3
colossalai 0.2.5
comm 0.1.3
contexttimer 0.3.3
contourpy 1.0.7
cPython 0.0.6
cycler 0.11.0
datasets 2.11.0
debugpy 1.6.7
decorator 5.1.1
deepspeed 0.9.0
dill 0.3.6
distlib 0.3.6
dnspython 2.3.0
entrypoints 0.4
evaluate 0.4.0
executing 1.2.0
fabric 3.0.0
fastapi 0.95.0
ffmpy 0.3.0
filelock 3.11.0
fonttools 4.39.3
frozenlist 1.3.3
fsspec 2023.4.0
gradio 3.24.1
gradio_client 0.0.8
h11 0.14.0
hjson 3.1.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.13.4
identify 2.5.18
idna 3.4
importlib-metadata 6.3.0
importlib-resources 5.12.0
invoke 2.0.0
ipykernel 6.22.0
ipython 8.12.0
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsonschema 4.17.3
jupyter_client 8.1.0
jupyter_core 5.3.0
kiwisolver 1.4.4
linkify-it-py 2.0.0
loralib 0.1.1
markdown-it-py 2.2.0
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
mdit-py-plugins 0.3.3
mdurl 0.1.2
mpi4py 3.1.4
multidict 6.0.4
multiprocess 0.70.14
nest-asyncio 1.5.6
ninja 1.11.1
nodeenv 1.7.0
numpy 1.24.2
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
orjson 3.8.10
packaging 23.1
pandas 2.0.0
paramiko 3.0.0
parso 0.8.3
peft 0.2.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.0.1
platformdirs 3.2.0
pre-commit 3.1.0
prompt-toolkit 3.0.38
psutil 5.9.4
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 10.0.0
pydantic 1.10.7
pydub 0.25.1
Pygments 2.15.0
pymongo 4.3.3
PyNaCl 1.5.0
pyparsing 3.0.9
pyrsistent 0.19.3
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3
PyYAML 6.0
pyzmq 25.0.2
regex 2022.10.31
requests 2.28.2
responses 0.18.0
rfc3986 1.5.0
rich 13.3.1
scikit-learn 1.2.2
scipy 1.10.1
semantic-version 2.10.0
sentencepiece 0.1.97
setuptools 67.6.0
six 1.16.0
sniffio 1.3.0
stack-data 0.6.2
starlette 0.26.1
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.13.1
tornado 6.2
tqdm 4.65.0
traitlets 5.9.0
transformers 4.28.0.dev0
typing_extensions 4.5.0
tzdata 2023.3
uc-micro-py 1.0.1
urllib3 1.26.15
uvicorn 0.21.1
virtualenv 20.19.0
wcwidth 0.2.6
websockets 11.0.1
wheel 0.40.0
xxhash 3.2.0
yarl 1.8.2
zipp 3.15.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
python -m torch.distributed.launch $DISTRIBUTED_ARGS run_clm.py --model_name_or_path "/mnt/zts-dev-data/llama-7b-hf/" --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --output_dir /mnt/zts-dev-data/tmp/test-clm --tokenizer_name test --logging_steps 50 --save_steps 1000 --overwrite_output_dir --fp16 True --deepspeed deepspeed_test.json
### Expected behavior
Pretraining a llama 7b model on 2*a100 works for me without deepspeed (training OOMs on a single a100, but huggingface applies model parallelism by default, so no OOM occurred on two), but when I configured the training script with --deepspeed this error appeared:
```python
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper__index_select)
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 70758 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 70757) of binary:
```
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 1,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
zero_optimization with stage 2 also errors
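As noted in the comments, `device_map="auto"` and DeepSpeed are mutually exclusive. A minimal guard sketch (the helper name and signature are hypothetical, not part of transformers) that makes the conflict explicit before launching training:

```python
def check_launch_args(device_map, deepspeed_config):
    """Hypothetical guard: device_map and DeepSpeed must not be combined.

    device_map="auto" enables naive model parallelism inside transformers,
    which conflicts with DeepSpeed's own parameter partitioning and leads to
    tensors landing on different devices (the RuntimeError above).
    """
    if device_map is not None and deepspeed_config is not None:
        raise ValueError(
            "device_map and DeepSpeed are mutually exclusive; "
            "drop device_map and let DeepSpeed place the model."
        )
    return True
```

With DeepSpeed enabled, load the model without `device_map` and let the DeepSpeed launcher shard optimizer state and gradients across the two A100s.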
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22775/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22774
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22774/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22774/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22774/events
|
https://github.com/huggingface/transformers/pull/22774
| 1,668,478,985
|
PR_kwDOCUB6oc5OVrou
| 22,774
|
Don't use `LayoutLMv2` and `LayoutLMv3` in some pipeline tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merge now as this PR only touches tests. Feel free to leave a comment if any @NielsRogge "
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
These 2 models require a **different input format** from that of the usual text models. See the relevant code block at the end.
The conclusion of an offline discussion with @NielsRogge is that **these 2 models are only for the DocQA pipeline**, even though they have implementations for other head tasks.
Therefore, this PR **removes these 2 models from pipeline testing in the first place**, instead of skipping them at a later point.
**IMO, we should also stop using these models in the pipeline classes (except DocQA)** if they are not going to work, but I don't do anything about that here.
**`LayoutLMv3` with `DocumentQuestionAnsweringPipeline` (and the pipeline test) is still not working due to some issue. We need to discuss with @NielsRogge to see if it could be fixed, but it's out of this PR's scope.**
### relevant code block
https://github.com/huggingface/transformers/blob/daf53241d6276c0cd932ee8ce3e5b0a403f392b7/src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py#L610-L625
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22774/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22774",
"html_url": "https://github.com/huggingface/transformers/pull/22774",
"diff_url": "https://github.com/huggingface/transformers/pull/22774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22774.patch",
"merged_at": 1681746320000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22773
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22773/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22773/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22773/events
|
https://github.com/huggingface/transformers/issues/22773
| 1,668,467,266
|
I_kwDOCUB6oc5jcspC
| 22,773
|
'transformer_model' object has no attribute 'module'
|
{
"login": "sqinghua",
"id": 71050713,
"node_id": "MDQ6VXNlcjcxMDUwNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/71050713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sqinghua",
"html_url": "https://github.com/sqinghua",
"followers_url": "https://api.github.com/users/sqinghua/followers",
"following_url": "https://api.github.com/users/sqinghua/following{/other_user}",
"gists_url": "https://api.github.com/users/sqinghua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sqinghua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sqinghua/subscriptions",
"organizations_url": "https://api.github.com/users/sqinghua/orgs",
"repos_url": "https://api.github.com/users/sqinghua/repos",
"events_url": "https://api.github.com/users/sqinghua/events{/privacy}",
"received_events_url": "https://api.github.com/users/sqinghua/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sqinghua, this is happening because the class `transformer_model` inherits from PyTorch's nn.Module class, which doesn't have a `save_pretrained` method. `save_pretrained` is a transformers library specific method common to models which inherit from `PreTrainedModel`. \r\n\r\nI'm not sure why `torch.save(...)` doesn't work. I would need more information to be able to help e.g. traceback and error message to know if it's a transformers related issue. \r\n\r\nIt should be possible to save the xlnet model out using `model.xlnet.save_pretrained(checkpoint)`. This won't save out any additional modeling which happens in `transformer_model` e.g. additional layers or steps in the forward pass beyond passing to `self.xlnet`. ",
"@amyeroberts Thank you for your detailed answer, it was very helpful. :)"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
```shell
Platform: Kaggle
python: 3.7
torch: 1.13.0
transformers: 4.27.4
tensorflow: 2.11.0
pre-trained model used: XLNET
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am reproducing someone else's open-sourced work on Kaggle. Part of the code follows:
```
import torch
from torch import nn
from transformers import XLNetConfig, XLNetLMHeadModel, XLNetModel, XLNetTokenizer,AutoTokenizer
class transformer_model(nn.Module):
def __init__(self, model_name, drop_prob = dropout_prob):
super(transformer_model, self).__init__()
configuration = XLNetConfig.from_pretrained(model_name, output_hidden_states=True)
self.xlnet = XLNetModel.from_pretrained(model_name, config = configuration)
...
```
```
def train(model, optimizer, scheduler, tokenizer, max_epochs, save_path, device, val_freq = 10):
bestpoint_dir = os.path.join(save_path)
os.makedirs(bestpoint_dir, exist_ok=True)
...
model.save_pretrained(bestpoint_dir) #here
print("Saving model bestpoint to ", bestpoint_dir)
...
model = transformer_model(model_name).to(device)
...
```
Error messages: 'transformer_model' object has no attribute 'save_pretrained'
```
'transformer_model' object has no attribute 'save_pretrained'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_23/1390676391.py in <module>
414 num_training_steps = total_steps)
415
--> 416 train(model, optimizer, scheduler, tokenizer, epochs, save_path, device)
417
418 print(max_accuracy, "\n", max_match)
/tmp/ipykernel_23/1390676391.py in train(model, optimizer, scheduler, tokenizer, max_epochs, save_path, device, val_freq)
385
386 # To save the model, uncomment the following lines
--> 387 model.save_pretrained(bestpoint_dir)
388 print("Saving model bestpoint to ", bestpoint_dir)
389
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
1264 return modules[name]
1265 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1266 type(self).__name__, name))
1267
1268 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'transformer_model' object has no attribute 'save_pretrained'
```
### Expected behavior
```shell
`model.save_pretrained(...)` should work. I tried to work around the problem with `model.module.save_pretrained(...)` and `torch.save(...)`, but both failed.
How can I fix this? Thanks!
```
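Following the suggestion in the comments, a minimal sketch of the two workable options (a toy `nn.Module` stands in for `transformer_model`; names are illustrative): call `save_pretrained` on the inner pretrained submodule, or persist the whole wrapper's weights with `torch.save`.

```python
import torch
from torch import nn

class TransformerModelSketch(nn.Module):  # stand-in for transformer_model
    def __init__(self):
        super().__init__()
        self.xlnet = nn.Linear(4, 4)  # stands in for the XLNetModel submodule

model = TransformerModelSketch()
# Option 1 (real code): model.xlnet.save_pretrained(bestpoint_dir)
#   works because XLNetModel inherits from PreTrainedModel.
# Option 2: plain PyTorch checkpointing of the whole wrapper:
torch.save(model.state_dict(), "bestpoint.pt")
state = torch.load("bestpoint.pt")
```

Option 1 only saves the XLNet weights, not any extra layers added in the wrapper; option 2 saves everything but as a plain state dict, not a Hub-style checkpoint.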
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22773/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22772
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22772/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22772/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22772/events
|
https://github.com/huggingface/transformers/pull/22772
| 1,668,461,742
|
PR_kwDOCUB6oc5OVn4l
| 22,772
|
Seq2SeqTrainer: Evict decoder_input_ids only when it is created from labels
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> Did you confirm the failing test now passes?\r\n\r\nYes :D Both the SQUAD test that made me add the previous eviction and the FSMT command that @stas00 shared are passing with this PR",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
MEMBER
| null |
# What does this PR do?
Fixes #22634 (what remains of the issue, the [failing FSMT command in this comment](https://github.com/huggingface/transformers/issues/22634#issuecomment-1500919952))
A previous PR (#22108) expanded the capabilities of the trainer by delegating input selection to `.generate()`. However, it also manually evicted `decoder_input_ids` from the inputs to make the SQUAD test pass, which caused the issue seen above.
This PR makes a finer-grained eviction decision -- we only want to evict `decoder_input_ids` when it is built from `labels`. In this particular case, `decoder_input_ids` will likely have right-padding, which is unsupported by `.generate()`.
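A hypothetical sketch of the finer-grained eviction decision (function and flag names are illustrative, not the actual Trainer code):

```python
def build_generation_inputs(inputs, decoder_input_ids_from_labels):
    """Drop decoder_input_ids only when they were derived from labels.

    Labels are typically right-padded, and right-padded decoder_input_ids
    are unsupported by .generate(); user-provided decoder_input_ids (as in
    the FSMT case) are kept.
    """
    gen_inputs = dict(inputs)  # shallow copy; leave the caller's dict intact
    if decoder_input_ids_from_labels:
        gen_inputs.pop("decoder_input_ids", None)
    return gen_inputs
```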
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22772/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22772/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22772",
"html_url": "https://github.com/huggingface/transformers/pull/22772",
"diff_url": "https://github.com/huggingface/transformers/pull/22772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22772.patch",
"merged_at": 1681490714000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22771
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22771/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22771/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22771/events
|
https://github.com/huggingface/transformers/issues/22771
| 1,668,404,146
|
I_kwDOCUB6oc5jcdOy
| 22,771
|
TF Swiftformer
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi! I would like to take on this as my first issue if possible. Is that okay?",
"@joaocmd Of course! Cool issue to start on 🤗 \r\n\r\nIf you haven't seen it already, there's a detailed guide in the docs on porting a model to TensorFlow: https://huggingface.co/docs/transformers/add_tensorflow_model\r\n\r\nIt's best to wait until the PyTorch implementation is merged in, which will be at least a day or two away. ",
"Sounds great @amyeroberts, thank you :)\r\n\r\nI'll start looking into it once the PyTorch PR is merged.",
"Any news on this tf model? ",
"Hi @D-Roberts, I haven't started looking into it as the pytorch version has not yet been merged.",
"Hi @joaocmd , the PyTorch version of SwiftFormer is now merged so you can continue working the TensorFlow version of it.",
"Hi @shehanmunasinghe, I'm on it, thanks!",
"Hi @amyeroberts ,\r\n\r\nMay I proceed to work on this issue if it has not been sorted yet?",
"@sqali There is currently an open PR which is actively being worked on by @joaocmd: #23342 ",
"Hi @amyeroberts ,\r\n\r\nIs there any other issue that you are working on in which I can help you with?",
"@sqali Here's a page in the docs all about different ways to contribute and how to find issues to work on: https://huggingface.co/docs/transformers/main/contributing. I'd suggest looking at issues marked with the 'Good First Issues' tag and finding one which no-one is currently working on: https://github.com/huggingface/transformers/labels/Good%20First%20Issue\r\n\r\n"
] | 1,681
| 1,688
| null |
COLLABORATOR
| null |
### Model description
Add the TensorFlow port of the SwiftFormer model. See related issue: #22685
To be done once the SwiftFormer model has been added: #22686
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Original repo: https://github.com/amshaker/swiftformer
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22771/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22771/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/22770
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22770/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22770/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22770/events
|
https://github.com/huggingface/transformers/pull/22770
| 1,668,283,131
|
PR_kwDOCUB6oc5OVCQl
| 22,770
|
Tweak ESM tokenizer for Nucleotide Transformer
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
MEMBER
| null |
Nucleotide Transformer is a model that takes DNA inputs. It uses the same model architecture as the protein model ESM, but in addition to a different vocabulary it tokenizes inputs without a `<sep>` or `<eos>` token at the end. This PR makes a small tweak to the tokenization code for ESM, so that it doesn't try to add `self.eos_token_id` to sequences when the tokenizer does not have an `eos_token` set. With this change, we can fully support Nucleotide Transformer as an ESM checkpoint.
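An illustrative sketch of the tweak (simplified; not the actual ESM tokenizer code): append the EOS id only when the tokenizer actually defines one.

```python
def build_inputs_with_special_tokens(token_ids, cls_token_id=0, eos_token_id=None):
    # ESM checkpoints set eos_token_id; Nucleotide Transformer leaves it
    # unset, so its sequences end without an <eos>/<sep> token.
    ids = [cls_token_id] + list(token_ids)
    if eos_token_id is not None:
        ids.append(eos_token_id)
    return ids
```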
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22770/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22770",
"html_url": "https://github.com/huggingface/transformers/pull/22770",
"diff_url": "https://github.com/huggingface/transformers/pull/22770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22770.patch",
"merged_at": 1681481924000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22769
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22769/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22769/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22769/events
|
https://github.com/huggingface/transformers/issues/22769
| 1,668,277,758
|
I_kwDOCUB6oc5jb-X-
| 22,769
|
Error projecting concatenated Fourier Features.
|
{
"login": "sr-ndai",
"id": 99852046,
"node_id": "U_kgDOBfOfDg",
"avatar_url": "https://avatars.githubusercontent.com/u/99852046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sr-ndai",
"html_url": "https://github.com/sr-ndai",
"followers_url": "https://api.github.com/users/sr-ndai/followers",
"following_url": "https://api.github.com/users/sr-ndai/following{/other_user}",
"gists_url": "https://api.github.com/users/sr-ndai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sr-ndai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sr-ndai/subscriptions",
"organizations_url": "https://api.github.com/users/sr-ndai/orgs",
"repos_url": "https://api.github.com/users/sr-ndai/repos",
"events_url": "https://api.github.com/users/sr-ndai/events{/privacy}",
"received_events_url": "https://api.github.com/users/sr-ndai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@sr-ndai Thanks for raising this issue and detailed reproduction. \r\n\r\nCould you share the config or checkpoint being used in this example? ",
"@amyeroberts No problem, here is the configuration I used:\r\n```python\r\nfrom transformers import PerceiverConfig\r\n\r\nconfig = PerceiverConfig(\r\n num_latents = 128,\r\n d_latents=256,\r\n num_blocks=6,\r\n qk_channels = 256,\r\n num_self_attends_per_block=2,\r\n num_self_attention_heads=8,\r\n num_cross_attention_heads=8,\r\n self_attention_widening_factor=2,\r\n cross_attention_widening_factor=1,\r\n hidden_act='gelu_new',\r\n attention_probs_dropout_prob=0.1,\r\n use_query_residual=True,\r\n num_labels = 128 # For decoder\r\n)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,693
| 1,693
|
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor
input_preprocessor = PerceiverImagePreprocessor(
config,
prep_type="conv",
spatial_downsample=4,
temporal_downsample=1,
position_encoding_type="fourier",
in_channels=4,
out_channels=256,
conv2d_use_batchnorm=True,
concat_or_add_pos="concat",
project_pos_dim=128,
fourier_position_encoding_kwargs = dict(
num_bands=64,
max_resolution=[25,50], # 4x downsample include
)
)
test = torch.randn(1,4,100,200)
inputs, modality_sizes, inputs_without_pos = input_preprocessor(test)
```
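As a back-of-the-envelope check, a hypothetical helper (mirroring the usual Perceiver Fourier scheme of sin+cos per band, plus optionally concatenating the raw positions) for the positional-encoding width that the `project_pos_dim` linear layer must accept:

```python
def fourier_feature_channels(num_dims, num_bands, concat_pos=True, sine_only=False):
    # sin and cos per band per spatial dim, plus the raw positions when
    # they are concatenated onto the encoding
    per_dim = num_bands if sine_only else 2 * num_bands
    return num_dims * per_dim + (num_dims if concat_pos else 0)

# For the config above: 2 spatial dims, 64 bands -> 2 * 128 + 2 = 258 channels
width = fourier_feature_channels(2, 64)
```

If the projection layer was built for a different input width, the forward pass fails at `self.positions_projection(pos_enc)` as in the traceback below.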
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 23
21 test = torch.randn(1,4,100,200)
22 # preprocessor outputs a tuple
---> 23 inputs, modality_sizes, inputs_without_pos = input_preprocessor(test)
25 print(inputs.shape)
26 print(inputs_without_pos.shape)
torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
transformers/models/perceiver/modeling_perceiver.py:3226), in PerceiverImagePreprocessor.forward(self, inputs, pos, network_input_is_1d)
3223 else:
3224 raise ValueError("Unsupported data format for conv1x1.")
-> 3226 inputs, inputs_without_pos = self._build_network_inputs(inputs, network_input_is_1d)
3227 modality_sizes = None # Size for each modality, only needed for multimodal
3229 return inputs, modality_sizes, inputs_without_pos
transformers/models/perceiver/modeling_perceiver.py:3169), in PerceiverImagePreprocessor._build_network_inputs(self, inputs, network_input_is_1d)
3166 pos_enc = self.position_embeddings(index_dims, batch_size, device=inputs.device, dtype=inputs.dtype)
3168 # Optionally project them to a target dimension.
-> 3169 pos_enc = self.positions_projection(pos_enc)
3171 if not network_input_is_1d:
3172 # Reshape pos to match the input feature shape
3173 # if the network takes non-1D inputs
3174 sh = inputs.shape
torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
torch/nn/modules/linear.py:114), in Linear.forward(self, input)
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1176x258 and 256x128)
### Expected behavior
The linear projection's number of input features should equal the number of positional features.
When concatenating Fourier features, the expected number of positional features is `(2 * num_bands * num_dims) + 2`.
The `build_position_encoding()` function takes `out_channels` as input, which defines the number of input features for the linear projection. `PerceiverImagePreprocessor` incorrectly passes the embedding dimension as `out_channels` for the positional-encoding projection.
Possible Fix:
Use the positional encoding's `output_size()` class method to obtain the number of input features for the projection within the `build_position_encoding()` function:
`positions_projection = nn.Linear(output_pos_enc.output_size(), project_pos_dim) if project_pos_dim > 0 else nn.Identity()`
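As a sanity check of the formula above, here is a small sketch (not the library implementation; the function name is illustrative) that computes the expected Fourier feature channel count:

```python
def fourier_feature_channels(num_bands: int, num_dims: int, concat_pos: bool = True) -> int:
    # sin + cos per band, per spatial dimension
    channels = 2 * num_bands * num_dims
    if concat_pos:
        # plus the raw (unencoded) positions, one per spatial dimension
        channels += num_dims
    return channels

# With num_bands=64 and 2 spatial dims (the reproduction's config), the
# projection input should have (2 * 64 * 2) + 2 = 258 features, matching
# the 258 in the shape-mismatch error rather than the 256 the Linear expects.
print(fourier_feature_channels(64, 2))  # 258
```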
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22769/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22768
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22768/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22768/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22768/events
|
https://github.com/huggingface/transformers/issues/22768
| 1,668,057,872
|
I_kwDOCUB6oc5jbIsQ
| 22,768
|
How to avoid adding a double start token <s><s> in TrOCR during training?
|
{
"login": "Mohammed20201991",
"id": 59222637,
"node_id": "MDQ6VXNlcjU5MjIyNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohammed20201991",
"html_url": "https://github.com/Mohammed20201991",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions",
"organizations_url": "https://api.github.com/users/Mohammed20201991/orgs",
"repos_url": "https://api.github.com/users/Mohammed20201991/repos",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohammed20201991/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"+1",
"Thanks for Natabara for his comment The solution is super easy by just skipping <s> start token `labels= labels[1:] ` coming from the tokenizer because the tokenizer adds start token <s> and the TrOCR adds start token <s> automatically as mentioned in TrOCR paper ``` def __getitem__(self, idx): # get file name + text file_name = self.df['file_name'][idx] text = self.df['text'][idx] # prepare image (i.e. resize + normalize) image = Image.open(self.root_dir + file_name).convert(\"RGB\") pixel_values = self.processor(image, return_tensors=\"pt\").pixel_values # add labels (input_ids) by encoding the text labels = self.processor.tokenizer(text, padding=\"max_length\", truncation=True, max_length=self.max_target_length).input_ids # important: make sure that PAD tokens are ignored by the loss function labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels] labels= labels[1:] return {\"pixel_values\": pixel_values.squeeze(), \"labels\": torch.tensor(labels)} ```"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
**Describe the bug**
The model I am using (TrOCR Model):
The problem arises when using:
* [x] the official example scripts: done by the nice tutorial [(fine_tune)](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb) @NielsRogge
* [x] my own modified scripts: (as the script below )
```
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
def compute_metrics(pred):
labels_ids = pred.label_ids
print('labels_ids',len(labels_ids), type(labels_ids),labels_ids)
pred_ids = pred.predictions
print('pred_ids',len(pred_ids), type(pred_ids),pred_ids)
pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
print(pred_str)
labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id
label_str = processor.batch_decode(labels_ids, skip_special_tokens=True)
print(label_str)
cer = cer_metric.compute(predictions=pred_str, references=label_str)
return {"cer": cer}
class Dataset(Dataset):
def __init__(self, root_dir, df, processor, max_target_length=128):
self.root_dir = root_dir
self.df = df
self.processor = processor
self.max_target_length = max_target_length
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
# get file name + text
file_name = self.df['file_name'][idx]
text = self.df['text'][idx]
# prepare image (i.e. resize + normalize)
image = Image.open(self.root_dir + file_name).convert("RGB")
pixel_values = self.processor(image, return_tensors="pt").pixel_values
# add labels (input_ids) by encoding the text
labels = self.processor.tokenizer(text,
padding="max_length",
truncation=True,
max_length=self.max_target_length).input_ids
# important: make sure that PAD tokens are ignored by the loss function
labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels]
# encoding
return {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)}
#Train a model
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten")
# set special tokens used for creating the decoder_input_ids from the labels
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
# make sure vocab size is set correctly
model.config.vocab_size = model.config.decoder.vocab_size
# set beam search parameters
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
model.config.decoder.is_decoder = True
model.config.decoder.add_cross_attention = True
working_dir = './test/'
training_args = Seq2SeqTrainingArguments(...)
# instantiate trainer
trainer = Seq2SeqTrainer(model=model, args=training_args, train_dataset = train_dataset,
data_collator = default_data_collator, )
trainer.train()
# python3 train.py path/to/labels path/to/images/
```
- Platform: Linux Ubuntu distribution [GCC 9.4.0] on Linux
- PyTorch version (GPU?): 0.8.2+cu110
- transformers: 4.22.2
- Python version:3.8.10
**To Reproduce** (steps to reproduce the behavior):
1. After training the model, or when computing evaluation metrics during training, I see that the model adds a double start token `<s><s>`, i.e. ids `[0,0, ......,2,1,1, 1 ]`
2. Here is an example from the training phase showing the tokens generated in compute_metrics:
Input predictions: `[[0,0,506,4422,8046,2,1,1,1,1,1]]`
Input references: `[[0,597,2747 ...,1,1,1]] `
3. Another example from testing the model: [screenshot](https://i.stack.imgur.com/sWzbf.png)
**Expected behavior**
For the two reproduced cases above:
During `training`, the input predictions should be: `[[0,506,4422, ... ,8046,2,1,1,1,1,1 ]] `
In addition, during the `testing` phase, the generated text should appear without a double **<s><s>**:
`tensor([[0,11867,405,22379,1277,..........,368,2]]) `
`<s>ennyit erről, tőlem fényképezz amennyit akarsz, a véleményem akkor</s>`
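The workaround later confirmed in the comments (dropping the tokenizer's leading BOS from the labels, since the TrOCR decoder prepends its own start token) can be sketched like this; the token ids are illustrative, and `-100` is the loss-ignore value:

```python
BOS_ID = 0    # illustrative ids, not real vocabulary values
IGNORE = -100

def strip_leading_bos(labels, bos_id=BOS_ID):
    # The TrOCR decoder prepends its own start token, so drop the one the
    # tokenizer already added to avoid a double <s><s> at generation time.
    return labels[1:] if labels and labels[0] == bos_id else labels

labels = [0, 506, 4422, 8046, 2, IGNORE, IGNORE]
print(strip_leading_bos(labels))  # [506, 4422, 8046, 2, -100, -100]
```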
cc @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22768/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22767
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22767/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22767/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22767/events
|
https://github.com/huggingface/transformers/issues/22767
| 1,667,914,956
|
I_kwDOCUB6oc5jalzM
| 22,767
|
Mobilenet_v1 Dropout probability
|
{
"login": "andreysher",
"id": 30853561,
"node_id": "MDQ6VXNlcjMwODUzNTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/30853561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreysher",
"html_url": "https://github.com/andreysher",
"followers_url": "https://api.github.com/users/andreysher/followers",
"following_url": "https://api.github.com/users/andreysher/following{/other_user}",
"gists_url": "https://api.github.com/users/andreysher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreysher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreysher/subscriptions",
"organizations_url": "https://api.github.com/users/andreysher/orgs",
"repos_url": "https://api.github.com/users/andreysher/repos",
"events_url": "https://api.github.com/users/andreysher/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreysher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@andreysher Thanks for raising this issue. The dropout rate `p` or `rate` is defined the same for TensorFlow and PyTorch layers. \r\n\r\nFrom the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout): \r\n> The Dropout layer randomly sets input units to 0 with a frequency of `rate` at each step during training time\r\n\r\nFrom the [PyTorch documentation](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html): \r\n> During training, randomly zeroes some of the elements of the input tensor with probability `p` \r\n\r\nIs there something specific about the original MobileNet implementation or checkpoints which means this doesn't apply? \r\n",
"Thanks for this clarification! But i don't understand why default dropout_prob in the PR is 0.999? If i load model by \r\n```\r\nfrom transformers import AutoImageProcessor, MobileNetV1ForImageClassification\r\n\r\nmodel = MobileNetV1ForImageClassification.from_pretrained(\"google/mobilenet_v1_1.0_224\")\r\n```\r\ni get model with dropout p = 0.999. This is unexpected. Could you enplane why such value is presented in pretrained model?",
"@andreysher Ah, OK, I think I see what the issue is. The reason the dropout value p=0.999 is because it's set in the [model config](https://huggingface.co/google/mobilenet_v1_1.0_224/blob/dd9bc45ff57d9492e00d48547587baf03f0cdade/config.json#L5). This is a mistake as, as you had identified, the probability in the original mobilenet repo is the [keep probability](https://github.com/tensorflow/models/blob/ad32e81e31232675319d5572b78bc196216fd52e/research/slim/nets/mobilenet_v1.py#L306) i.e. `(1 - p)`. I've opened PRs on the hub to update the model configs:\r\n\r\n- https://huggingface.co/google/mobilenet_v1_0.75_192/discussions/2\r\n- https://huggingface.co/google/mobilenet_v2_1.0_224/discussions/3\r\n- https://huggingface.co/google/mobilenet_v1_1.0_224/discussions/2\r\n- https://huggingface.co/google/mobilenet_v2_0.35_96/discussions/2\r\n- https://huggingface.co/google/mobilenet_v2_1.4_224/discussions/2\r\n- https://huggingface.co/google/mobilenet_v2_0.75_160/discussions/2\r\n\r\nThanks for flagging and the time take to explain. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
Hi @amyeroberts, [here](https://github.com/huggingface/transformers/pull/17799) is a pull request with an interesting bug. MobileNetV2 is converted from a TensorFlow checkpoint, but in TensorFlow the dropout prob is the fraction of **non-zero** values, while in PyTorch it is the fraction of **zero** values. It should be **1 - prob** for PyTorch.
Load the MobileNetV1 model from 'google/mobilenet_v1_1.0_224'.
The dropout prob should be 1 - tf_prob.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22767/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22766
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22766/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22766/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22766/events
|
https://github.com/huggingface/transformers/pull/22766
| 1,667,875,858
|
PR_kwDOCUB6oc5OTr_l
| 22,766
|
Fix failing torchscript tests for `CpmAnt` model
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
The failure is caused by some type issues (dict, tuple, etc.) in the outputs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22766/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22766",
"html_url": "https://github.com/huggingface/transformers/pull/22766",
"diff_url": "https://github.com/huggingface/transformers/pull/22766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22766.patch",
"merged_at": 1681469625000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22765
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22765/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22765/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22765/events
|
https://github.com/huggingface/transformers/pull/22765
| 1,667,862,986
|
PR_kwDOCUB6oc5OTpO6
| 22,765
|
Fix word_ids hyperlink
|
{
"login": "mayankagarwals",
"id": 39498938,
"node_id": "MDQ6VXNlcjM5NDk4OTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayankagarwals",
"html_url": "https://github.com/mayankagarwals",
"followers_url": "https://api.github.com/users/mayankagarwals/followers",
"following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}",
"gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions",
"organizations_url": "https://api.github.com/users/mayankagarwals/orgs",
"repos_url": "https://api.github.com/users/mayankagarwals/repos",
"events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayankagarwals/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"CC @amyeroberts ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
Fixes https://github.com/huggingface/transformers/issues/22729
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22765/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22765",
"html_url": "https://github.com/huggingface/transformers/pull/22765",
"diff_url": "https://github.com/huggingface/transformers/pull/22765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22765.patch",
"merged_at": 1681485495000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22764
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22764/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22764/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22764/events
|
https://github.com/huggingface/transformers/pull/22764
| 1,667,821,637
|
PR_kwDOCUB6oc5OTgKc
| 22,764
|
Fix a mistake in Llama weight converter log output.
|
{
"login": "aljungberg",
"id": 154423,
"node_id": "MDQ6VXNlcjE1NDQyMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/154423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aljungberg",
"html_url": "https://github.com/aljungberg",
"followers_url": "https://api.github.com/users/aljungberg/followers",
"following_url": "https://api.github.com/users/aljungberg/following{/other_user}",
"gists_url": "https://api.github.com/users/aljungberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aljungberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aljungberg/subscriptions",
"organizations_url": "https://api.github.com/users/aljungberg/orgs",
"repos_url": "https://api.github.com/users/aljungberg/repos",
"events_url": "https://api.github.com/users/aljungberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/aljungberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a mistake in the Llama weight converter's log output: the message string was missing its f-string prefix, so the placeholders were printed literally instead of being interpolated.
Before: `Saving a {tokenizer_class} to {tokenizer_path}`
After: `Saving a LlamaTokenizerFast to outdir.`
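The before/after suggests the message was a plain string missing its `f` prefix; a minimal illustration of the difference (the variable values are taken from the example above):

```python
tokenizer_class = "LlamaTokenizerFast"
tokenizer_path = "outdir"

broken = "Saving a {tokenizer_class} to {tokenizer_path}."   # braces printed literally
fixed = f"Saving a {tokenizer_class} to {tokenizer_path}."   # values interpolated

print(broken)  # Saving a {tokenizer_class} to {tokenizer_path}.
print(fixed)   # Saving a LlamaTokenizerFast to outdir.
```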
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22764/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22764",
"html_url": "https://github.com/huggingface/transformers/pull/22764",
"diff_url": "https://github.com/huggingface/transformers/pull/22764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22764.patch",
"merged_at": 1681464406000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22763
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22763/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22763/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22763/events
|
https://github.com/huggingface/transformers/pull/22763
| 1,667,778,484
|
PR_kwDOCUB6oc5OTXHj
| 22,763
|
Generate: pin number of beams in BART test
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
MEMBER
| null |
# What does this PR do?
A recent change added the `num_beams==1` requirement for contrastive search. One of the tests started failing because of that change -- BART has `num_beams=4` [in its config](https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L49), so the test was now triggering beam search, and not contrastive search. This PR corrects it.
(The long-term solution is to add argument validation to detect these cases)
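A sketch of the kind of validation the long-term fix would add. The predicate below mirrors how generation dispatches (contrastive search requires `num_beams == 1`, `top_k > 1`, and `penalty_alpha > 0`); the function name is illustrative, not a real `transformers` API:

```python
def uses_contrastive_search(num_beams, top_k, penalty_alpha):
    # Contrastive search only runs for single-beam generation; with
    # num_beams > 1 the call silently falls back to beam search, which is
    # exactly what happened in the BART test before pinning num_beams.
    return (num_beams == 1 and top_k is not None and top_k > 1
            and penalty_alpha is not None and penalty_alpha > 0)

print(uses_contrastive_search(1, 4, 0.6))  # True
print(uses_contrastive_search(4, 4, 0.6))  # False (beam search runs instead)
```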
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22763/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22763/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22763",
"html_url": "https://github.com/huggingface/transformers/pull/22763",
"diff_url": "https://github.com/huggingface/transformers/pull/22763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22763.patch",
"merged_at": 1681462646000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22762
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22762/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22762/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22762/events
|
https://github.com/huggingface/transformers/issues/22762
| 1,667,767,437
|
I_kwDOCUB6oc5jaByN
| 22,762
|
RecursionError: maximum recursion depth exceeded while getting the str of an object.
|
{
"login": "EZlzh",
"id": 52556332,
"node_id": "MDQ6VXNlcjUyNTU2MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/52556332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EZlzh",
"html_url": "https://github.com/EZlzh",
"followers_url": "https://api.github.com/users/EZlzh/followers",
"following_url": "https://api.github.com/users/EZlzh/following{/other_user}",
"gists_url": "https://api.github.com/users/EZlzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EZlzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EZlzh/subscriptions",
"organizations_url": "https://api.github.com/users/EZlzh/orgs",
"repos_url": "https://api.github.com/users/EZlzh/repos",
"events_url": "https://api.github.com/users/EZlzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/EZlzh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"same problem, is there any progress?",
"Hey! The main issue is that they did not update the tokenizer files at `\"decapoda-research/llama-7b-hf\"` but they are using the latest version of transformers. The tokenizer was fixed see #22402 and corrected. Nothing we can do on our end...",
"@ArthurZucker I am facing a similar issue with openllama\r\n\r\n```python\r\nsave_dir = \"../open_llama_7b_preview_300bt/open_llama_7b_preview_300bt_transformers_weights/\"\r\ntokenizer = AutoTokenizer.from_pretrained(save_dir)\r\ntokenizer.bos_token_id\r\n```\r\n\r\ncalling `tokenizer.bos_token_id` this causes max recursion depth error.\r\n\r\n```python\r\ntokenizer\r\nLlamaTokenizerFast(name_or_path='../open_llama_7b_preview_300bt/open_llama_7b_preview_300bt_transformers_weights/', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': AddedToken(\"\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken(\"\", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken(\"\", rstrip=False, lstrip=False, single_word=False, normalized=True)}, clean_up_tokenization_spaces=False)\r\n```\r\n\r\n\r\n\r\n`transformers version = 4.29.1`\r\n\r\n`tokenizer_config.json`\r\n\r\n```\r\n{\r\n \"bos_token\": \"\",\r\n \"eos_token\": \"\",\r\n \"model_max_length\": 1e+30,\r\n \"tokenizer_class\": \"LlamaTokenizer\",\r\n \"unk_token\": \"\"\r\n}\r\n```\r\n\r\nInitializing as following works but I am not sure if this should be needed:\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(save_dir, unk_token=\"<unk>\",\r\n bos_token=\"<s>\",\r\n eos_token=\"</s>\")\r\n```\r\n",
"So.... Again, if you are not using the latest / most recently converted tokenizer, I cannot help you. Checkout [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) which has a working tokenizer."
] | 1,681
| 1,685
| 1,685
|
NONE
| null |
**System Info**
Python 3.8.10
transformers 4.29.0.dev0
sentencepiece 0.1.97
**Information**
- [x] The official example scripts
- [ ] My own modified scripts
**Reproduction**
In https://github.com/CarperAI/trlx/tree/main/examples, run `ppo_sentiments_llama.py`.
The loops occur as follows:
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_fast.py:250 in │
│ convert_tokens_to_ids │
│ │
│ 247 │ │ │ return None │
│ 248 │ │ │
│ 249 │ │ if isinstance(tokens, str): │
│ ❱ 250 │ │ │ return self._convert_token_to_id_with_added_voc(tokens) │
│ 251 │ │ │
│ 252 │ │ ids = [] │
│ 253 │ │ for token in tokens: │
│ │
│ /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_fast.py:260 in │
│ _convert_token_to_id_with_added_voc │
│ │
│ 257 │ def _convert_token_to_id_with_added_voc(self, token: str) -> int: │
│ 258 │ │ index = self._tokenizer.token_to_id(token) │
│ 259 │ │ if index is None: │
│ ❱ 260 │ │ │ return self.unk_token_id │
│ 261 │ │ return index │
│ 262 │ │
│ 263 │ def _convert_id_to_token(self, index: int) -> Optional[str]: │
│ │
│ /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py:1141 in │
│ unk_token_id │
│ │
│ 1138 │ │ """ │
│ 1139 │ │ if self._unk_token is None: │
│ 1140 │ │ │ return None │
│ ❱ 1141 │ │ return self.convert_tokens_to_ids(self.unk_token) │
│ 1142 │ │
│ 1143 │ @property │
│ 1144 │ def sep_token_id(self) -> Optional[int]: │
│ │
│ /usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_fast.py:250 in │
│ convert_tokens_to_ids │
│ │
│ 247 │ │ │ return None │
│ 248 │ │ │
│ 249 │ │ if isinstance(tokens, str): │
│ ❱ 250 │ │ │ return self._convert_token_to_id_with_added_voc(tokens) │
│ 251 │ │ │
│ 252 │ │ ids = [] │
│ 253 │ │ for token in tokens:
...
Until
/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py:1021 in unk_token │
│ │
│ 1018 │ │ │ if self.verbose: │
│ 1019 │ │ │ │ logger.error("Using unk_token, but it is not set yet.") │
│ 1020 │ │ │ return None │
│ ❱ 1021 │ │ return str(self._unk_token) │
│ 1022 │ │
│ 1023 │ @property │
│ 1024 │ def sep_token(self) -> str:
RecursionError: maximum recursion depth exceeded while getting the str of an object
**Expected behavior**
Is the algorithm expected to call the function `convert_tokens_to_ids` in `tokenization_utils.py` instead of `tokenization_utils_fast.py`?
Thanks!
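The traceback shows `convert_tokens_to_ids` and `unk_token_id` calling each other indefinitely. A minimal toy sketch (hypothetical stand-ins, not the real tokenizer classes) reproduces the cycle when the `unk_token` itself is missing from the vocabulary:

```python
class ToyTokenizer:
    """Hypothetical miniature of the two methods seen in the traceback."""

    def __init__(self, vocab, unk_token="<unk>"):
        self.vocab = vocab          # token -> id
        self.unk_token = unk_token

    def convert_tokens_to_ids(self, token):
        index = self.vocab.get(token)   # mirrors self._tokenizer.token_to_id
        if index is None:
            return self.unk_token_id    # falls back to the unk id ...
        return index

    @property
    def unk_token_id(self):
        # ... which is itself resolved through convert_tokens_to_ids -> cycle
        return self.convert_tokens_to_ids(self.unk_token)


tok = ToyTokenizer(vocab={"hello": 0})  # note: "<unk>" is NOT in the vocab
try:
    tok.convert_tokens_to_ids("anything-missing")
except RecursionError as err:
    print(type(err).__name__)  # RecursionError
```

In other words, the unk-token fallback never terminates when the unk token has no id of its own.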
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22762/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/huggingface/transformers/issues/22762/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22761
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22761/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22761/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22761/events
|
https://github.com/huggingface/transformers/pull/22761
| 1,667,737,654
|
PR_kwDOCUB6oc5OTOUl
| 22,761
|
Pix2struct: doctest fix
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"oops, wrong core maintainer :D",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
MEMBER
| null |
# What does this PR do?
Fixes the failing doctest: different machines may produce minor FP32 differences. This PR formats the printed number to a few decimal places.
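The stabilization pattern is simply formatting before printing; a tiny illustrative sketch (the numbers below are made up, not taken from the actual doctest):

```python
# Two hypothetical FP32 results that differ only by a machine-dependent
# wiggle; after rounding to 4 decimal places the doctest strings match.
loss_a = 5.341796875
loss_b = 5.34179687501

print(f"{loss_a:.4f}")  # 5.3418
print(f"{loss_b:.4f}")  # 5.3418 -- identical after formatting
```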
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22761/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22761",
"html_url": "https://github.com/huggingface/transformers/pull/22761",
"diff_url": "https://github.com/huggingface/transformers/pull/22761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22761.patch",
"merged_at": 1681461640000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22760
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22760/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22760/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22760/events
|
https://github.com/huggingface/transformers/issues/22760
| 1,667,620,565
|
I_kwDOCUB6oc5jZd7V
| 22,760
|
WhisperForAudioClassification RuntimeError tensor size doesn't match
|
{
"login": "xiao1ongbao",
"id": 111791611,
"node_id": "U_kgDOBqnN-w",
"avatar_url": "https://avatars.githubusercontent.com/u/111791611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiao1ongbao",
"html_url": "https://github.com/xiao1ongbao",
"followers_url": "https://api.github.com/users/xiao1ongbao/followers",
"following_url": "https://api.github.com/users/xiao1ongbao/following{/other_user}",
"gists_url": "https://api.github.com/users/xiao1ongbao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiao1ongbao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiao1ongbao/subscriptions",
"organizations_url": "https://api.github.com/users/xiao1ongbao/orgs",
"repos_url": "https://api.github.com/users/xiao1ongbao/repos",
"events_url": "https://api.github.com/users/xiao1ongbao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiao1ongbao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @xiao1ongbao! I'm hoping that your fine-tuning run was successful! Let me know if you encounter any issues - more than happy to help here 🤗",
"Hey! How did you solve that?",
"Do you have a reproducible code snippet to trigger the error @lnfin? I didn't encounter this in my experiments!"
] | 1,681
| 1,684
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.28.0
- Platform: Linux-5.4.190-107.353.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Multi-GPU on the same machine
### Who can help?
@sanchit-gandhi I am fine-tuning a WhisperForAudioClassification model and getting this error. The dataset and script work fine when switching to the Wav2Vec2 model.
File "/opt/conda/envs/voice/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
output = module(*input, **kwargs)
File "/opt/conda/envs/voice/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jovyan/.local/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1739, in forward
encoder_outputs = self.encoder(
File "/opt/conda/envs/voice/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jovyan/.local/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 823, in forward
hidden_states = inputs_embeds + embed_pos
RuntimeError: The size of tensor a (750) must match the size of tensor b (1500) at non-singleton dimension 1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Followed this tutorial and switched the Wav2Vec2 model to WhisperForAudioClassification.
Using a dataset with 100K+ wav files.
Training fails with the above error.
### Expected behavior
It would train with no errors.
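For what it's worth, the two sizes in the error (750 vs. 1500) are consistent with input features covering 15 s of audio instead of Whisper's expected 30 s. A back-of-the-envelope sketch, assuming Whisper's usual 16 kHz sampling rate, hop length of 160, and stride-2 encoder convolution (the helper below is illustrative, not part of `transformers`):

```python
SAMPLING_RATE = 16000
HOP_LENGTH = 160   # mel-spectrogram hop assumed for Whisper
CONV_STRIDE = 2    # the encoder's second conv halves the frame count

def encoder_frames(seconds):
    # mel frames produced for `seconds` of audio, then downsampled by the conv
    mel_frames = seconds * SAMPLING_RATE // HOP_LENGTH
    return mel_frames // CONV_STRIDE

print(encoder_frames(30))  # 1500 -> matches embed_pos in the error
print(encoder_frames(15))  # 750  -> the offending tensor size
```

This suggests checking that the feature extractor pads/truncates inputs to the full 30 s window rather than a shorter one.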
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22760/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22759
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22759/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22759/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22759/events
|
https://github.com/huggingface/transformers/pull/22759
| 1,667,611,349
|
PR_kwDOCUB6oc5OS0Lk
| 22,759
|
chore: allow protobuf 3.20.3 requirement
|
{
"login": "jose-turintech",
"id": 93319775,
"node_id": "U_kgDOBY_yXw",
"avatar_url": "https://avatars.githubusercontent.com/u/93319775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jose-turintech",
"html_url": "https://github.com/jose-turintech",
"followers_url": "https://api.github.com/users/jose-turintech/followers",
"following_url": "https://api.github.com/users/jose-turintech/following{/other_user}",
"gists_url": "https://api.github.com/users/jose-turintech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jose-turintech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jose-turintech/subscriptions",
"organizations_url": "https://api.github.com/users/jose-turintech/orgs",
"repos_url": "https://api.github.com/users/jose-turintech/repos",
"events_url": "https://api.github.com/users/jose-turintech/events{/privacy}",
"received_events_url": "https://api.github.com/users/jose-turintech/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22759). All of your documentation changes will be reflected on that endpoint.",
"cc @ydshieh ",
"Hi @jose-turintech Thank you for this PR ❤️ .\r\n\r\nUnfortunately, as you can see in the CI results, the changes cause some errors\r\n\r\n```python\r\nFAILED tests/models/albert/test_tokenization_albert.py::AlbertTokenizationTest::test_pickle_subword_regularization_tokenizer\r\nFAILED tests/models/bert_generation/test_tokenization_bert_generation.py::BertGenerationTokenizationTest::test_pickle_subword_regularization_tokenizer\r\n```\r\n(Fatal Python error: Segmentation fault)\r\n\r\nSo we are not able to merge this PR so far. There might be some way to fix these 2 issues, but I am not sure. Let me know if you want to dive into this.\r\n",
"Hey @jose-turintech ,\r\n\r\nThanks for submitting this PR! The latest `tensorflow==2.12` release depends on `protobuf >= 3.20.3`, so this would unblock installing the latest `tensorflow` alongside `transformers`. \r\n\r\nAfter setting up this PR's environment, I just ran this locally and those tests seem to pass. Would it be possible to trigger a re-run @ydshieh? Alternatively, would it be possible to get extra logs on the CI failures? ",
"> Hey @jose-turintech ,\r\n> \r\n> Thanks for submitting this PR! The latest `tensorflow==2.12` release depends on `protobuf >= 3.20.3`, so this would unblock installing the latest `tensorflow` alongside `transformers`.\r\n> \r\n> After setting up this PR's environment, I just ran this locally and those tests seem to pass. Would it be possible to trigger a re-run @ydshieh? Alternatively, would it be possible to get extra logs on the CI failures?\r\n\r\nHello @adriangonz, just updated my branch with latest changes on origin in order to test if the PR check would retrigger. It seems so; so i guess we'll see if the PR passes all check within some minutes.\r\n\r\nThanks for your comment.",
"As you can see in [the latest run](https://app.circleci.com/pipelines/github/huggingface/transformers/63908/workflows/bcaedcfc-54af-42e8-9e9c-a97d9612b185/jobs/789487), the 2 failed tests are still there.\r\n\r\nThe error (provided at the end below) is some processes crashed, and there is no more informative log being given by the `pytest`.\r\n\r\nWhen I run the two failed tests individually on my local env., they pass.\r\n\r\nHowever, since the latest release of `tensorflow-probaility` broke everything in the CI since we don't support `TensorFlow 2.12` yet and it needs that version, we will take a more deep look to see if we can unblock this situation.\r\n\r\n```bash\r\n=================================== FAILURES ===================================\r\n______ tests/models/bert_generation/test_tokenization_bert_generation.py _______\r\n[gw0] linux -- Python 3.8.12 /home/circleci/.pyenv/versions/3.8.12/bin/python\r\nworker 'gw0' crashed while running 'tests/models/bert_generation/test_tokenization_bert_generation.py::BertGenerationTokenizationTest::test_pickle_subword_regularization_tokenizer'\r\n_______________ tests/models/albert/test_tokenization_albert.py ________________\r\n[gw6] linux -- Python 3.8.12 /home/circleci/.pyenv/versions/3.8.12/bin/python\r\nworker 'gw6' crashed while running 'tests/models/albert/test_tokenization_albert.py::AlbertTokenizationTest::test_pickle_subword_regularization_tokenizer'\r\n``` ",
"@ydshieh i've merged origin into this branch once again to retrigger CI checks just to see if test passed after the downtime of some huggingface transformers yesterday. Tests pass now after your modifications :) .\r\n\r\nOnly difference with main is the tensorflow-probaility>0.20 restriction is not applied in this CI build.\r\n\r\nThanks for taking the time to take a look into the issue.",
"@jose-turintech I accidentally pushed the experimental changes to this PR branch, I am really sorry. The CI is green as I removed some 3rd packages (tensorflow for example), which it should be kept.\r\n\r\nI am still looking how to resolve the issue. There is a big problem (at least the desired environment when running inside CircleCI runner). I will keep you update soon.",
"In the meantime, let us convert this PR into a draft mode, so it won't be merged by accident. Thank you for your comprehension.",
"## Issue\r\n\r\n(for the hack to fix, see the next reply)\r\n\r\nThis PR want to use `protobuf==3.20.3` so we can use `tensorflow==2.12`. However, some tests like `test_pickle_subword_regularization_tokenizer` fails with this environment. The following describes the issue.\r\n\r\n- First, setup the env. to have `tensorflow-cpu==2.12` with `potobuf==3.20.3` but `without torch installed`. \r\n- Use a `sentencepiece` tokenizer with `enable_sampling=True`\r\n- `run the code with pytest` --> crash (core dump)\r\n - (run with a script --> no crash)\r\n - (run the code with pytest where torch is also there --> no crash)\r\n\r\nHere are 2 code snippets to reproduce (and not): run in python 3.8 will more likely to reproduce\r\n ______________________________________________________________________________\r\n\r\n- create `tests/test_foo.py` and run `python -m pytest -v tests/test_foo.py` --> crash\r\n\r\n```\r\nfrom transformers import AlbertTokenizer\r\ndef test_foo():\r\n\r\n fn = \"tests/fixtures/spiece.model\"\r\n text = \"This is a test for subword regularization.\"\r\n\r\n # `encode` works with `False`\r\n sp_model_kwargs = {\"enable_sampling\": False}\r\n tokenizer = AlbertTokenizer(fn, sp_model_kwargs=sp_model_kwargs)\r\n\r\n # test 1 (usage in `transformers`)\r\n # _ = tokenizer.tokenize(text)\r\n\r\n # test 2 (direct use in sentencepiece)\r\n pieces = tokenizer.sp_model.encode(text, out_type=str)\r\n\r\n # `encode` fails with `True` if torch is not installed and tf==2.12\r\n sp_model_kwargs = {\"enable_sampling\": True}\r\n tokenizer = AlbertTokenizer(fn, sp_model_kwargs=sp_model_kwargs)\r\n\r\n # test 1 (usage in `transformers`)\r\n # _ = tokenizer.tokenize(text)\r\n\r\n # This gives `Segmentation fault (core dumped)` under the above mentioned conditions\r\n # test 2 (direct use in sentencepiece)\r\n pieces = tokenizer.sp_model.encode(text, out_type=str)\r\n print(pieces)\r\n```\r\n\r\n- create `foo.py` and run `python foo.py` -> no crash\r\n\r\n```\r\nfrom 
transformers import AlbertTokenizer\r\n\r\nfn = \"tests/fixtures/spiece.model\"\r\ntext = \"This is a test for subword regularization.\"\r\n\r\n# `encode` works with `False`\r\nsp_model_kwargs = {\"enable_sampling\": False}\r\ntokenizer = AlbertTokenizer(fn, sp_model_kwargs=sp_model_kwargs)\r\n\r\n# test 1 (usage in `transformers`)\r\n# _ = tokenizer.tokenize(text)\r\n\r\n# test 2 (direct use in sentencepiece)\r\npieces = tokenizer.sp_model.encode(text, out_type=str)\r\n\r\n# `encode` works with `True` if torch is not installed and tf==2.12\r\nsp_model_kwargs = {\"enable_sampling\": True}\r\ntokenizer = AlbertTokenizer(fn, sp_model_kwargs=sp_model_kwargs)\r\n\r\n# test 1 (usage in `transformers`)\r\n# _ = tokenizer.tokenize(text)\r\n\r\n# This works\r\n# test 2 (direct use in sentencepiece)\r\npieces = tokenizer.sp_model.encode(text, out_type=str)\r\nprint(pieces)\r\n```",
"## (Hacky) Fix \r\nThe relevant failing tests are:\r\n- test_subword_regularization_tokenizer\r\n- test_pickle_subword_regularization_tokenizer\r\n\r\nAs mentioned above, those failing tests only happen when running with pytest. Furthermore, those test don't actually need `protobuf` in order to run. However, in the TF CI job, `protobuf` is a dependency from TensorFlow.\r\n\r\nIt turns out that running those 2 problematic tests in a subprocess will avoid the crash. It's unclear what actually happens though.\r\n\r\nThis PR modify those 2 tests to be run in a subprocess, so we can have `protobuf==3.20.3` along with `tensroflow==2.12`.",
"The TF job is successful with the last fix, see this job run\r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/64174/workflows/688a1174-8a6f-4599-9479-f51bbc2aacdb/jobs/793536/artifacts\r\n\r\nThe other jobs should be fine (we will see in 30 minutes) as they already pass before.",
"> Thank you again @jose-turintech for this PR, which allows to use TF 2.12!\r\n\r\nThank you very much for taking the time to fix the issues, you did all the work; really appreciate it."
] | 1,681
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
Allow latest bugfix release for protobuf (3.20.3)
# What does this PR do?
Changes the Python package requirements to allow the latest bugfix release of protobuf (3.20.3) instead of capping the dependency at 3.20.2 (`<=3.20.2`).
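To illustrate why the old cap mattered, a toy version comparison (naive dotted-number ordering for the sketch; real resolvers follow PEP 440 semantics):

```python
def _as_tuple(v):
    # naive numeric split of a dotted version string, for illustration only
    return tuple(int(part) for part in v.split("."))

def allowed(version, cap):
    return _as_tuple(version) <= _as_tuple(cap)

print(allowed("3.20.3", "3.20.2"))  # False -- protobuf<=3.20.2 rejects 3.20.3
print(allowed("3.20.3", "3.20.3"))  # True  -- the relaxed pin admits it
```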
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22759/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22759/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22759",
"html_url": "https://github.com/huggingface/transformers/pull/22759",
"diff_url": "https://github.com/huggingface/transformers/pull/22759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22759.patch",
"merged_at": 1683742977000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22758
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22758/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22758/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22758/events
|
https://github.com/huggingface/transformers/issues/22758
| 1,667,495,706
|
I_kwDOCUB6oc5jY_ca
| 22,758
|
GPTNeoX position_ids not defined
|
{
"login": "murthyrudra",
"id": 14203368,
"node_id": "MDQ6VXNlcjE0MjAzMzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14203368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/murthyrudra",
"html_url": "https://github.com/murthyrudra",
"followers_url": "https://api.github.com/users/murthyrudra/followers",
"following_url": "https://api.github.com/users/murthyrudra/following{/other_user}",
"gists_url": "https://api.github.com/users/murthyrudra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/murthyrudra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/murthyrudra/subscriptions",
"organizations_url": "https://api.github.com/users/murthyrudra/orgs",
"repos_url": "https://api.github.com/users/murthyrudra/repos",
"events_url": "https://api.github.com/users/murthyrudra/events{/privacy}",
"received_events_url": "https://api.github.com/users/murthyrudra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`transformers` isn't involved with deepspeed's inference engine, other than being used by it indirectly, so please refile at https://github.com/microsoft/DeepSpeed/issues. Thank you."
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.13
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker @stas00
Hi, I am performing inference with the `GPT-NeoX 20B` model using greedy search. Without DeepSpeed, text generation works fine. However, when I use DeepSpeed for inference, I get the following error:
```bash
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ ~/examplesinference/asqa_inference.py:297 in │
│ <module> │
│ │
│ 294 │
│ 295 │
│ 296 if __name__ == "__main__": │
│ ❱ 297 │ main() │
│ 298 │
│ │
│ ~/examplesinference/asqa_inference.py:271 in main │
│ │
│ 268 │ │ │ │ + "\nQ: " │
│ 269 │ │ │ ) │
│ 270 │ │ new_prompt = prompt + d["question"] + "\nA:" │
│ ❱ 271 │ │ output = predict_text_greedy( │
│ 272 │ │ │ model, │
│ 273 │ │ │ tokenizer, │
│ 274 │ │ │ new_prompt, │
│ │
│ ~/examplesinference/asqa_inference.py:98 in │
│ predict_text_greedy │
│ │
│ 95 │ │
│ 96 │ model.eval() │
│ 97 │ with torch.no_grad(): │
│ ❱ 98 │ │ generated_ids = model.generate( │
│ 99 │ │ │ input_ids, │
│ 100 │ │ │ max_new_tokens=50, │
│ 101 │ │ │ use_cache=use_cache, │
│ │
│ ~/my_envlib/python3.9/site-packages/deepspeed/inference/engine.py:588 in │
│ _generate │
│ │
│ 585 │ │ │ │ "add your request to: https://github.com/microsoft/DeepSpeed/issues/2506 │
│ 586 │ │ │ ) │
│ 587 │ │ │
│ ❱ 588 │ │ return self.module.generate(*inputs, **kwargs) │
│ 589 │
│ │
│ ~/my_envlib/python3.9/site-packages/torch/utils/_contextlib.py:115 in │
│ decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ ~/my_envlib/python3.9/site-packages/transformers/generation/utils.py:1437 in │
│ generate │
│ │
│ 1434 │ │ │ │ ) │
│ 1435 │ │ │ │
│ 1436 │ │ │ # 11. run greedy search │
│ ❱ 1437 │ │ │ return self.greedy_search( │
│ 1438 │ │ │ │ input_ids, │
│ 1439 │ │ │ │ logits_processor=logits_processor, │
│ 1440 │ │ │ │ stopping_criteria=stopping_criteria, │
│ │
│ ~/my_envlib/python3.9/site-packages/transformers/generation/utils.py:2248 in │
│ greedy_search │
│ │
│ 2245 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │
│ 2246 │ │ │ │
│ 2247 │ │ │ # forward pass to get next token │
│ ❱ 2248 │ │ │ outputs = self( │
│ 2249 │ │ │ │ **model_inputs, │
│ 2250 │ │ │ │ return_dict=True, │
│ 2251 │ │ │ │ output_attentions=output_attentions, │
│ │
│ ~/my_envlib/python3.9/site-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ ~/my_envlib/python3.9/site-packages/transformers/models/gpt_neox/modeling_gp │
│ t_neox.py:662 in forward │
│ │
│ 659 │ │ ```""" │
│ 660 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 661 │ │ │
│ ❱ 662 │ │ outputs = self.gpt_neox( │
│ 663 │ │ │ input_ids, │
│ 664 │ │ │ attention_mask=attention_mask, │
│ 665 │ │ │ position_ids=position_ids, │
│ │
│ ~/my_envlib/python3.9/site-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ ~/my_envlib/python3.9/site-packages/transformers/models/gpt_neox/modeling_gp │
│ t_neox.py:553 in forward │
│ │
│ 550 │ │ │ │ │ head_mask[i], │
│ 551 │ │ │ │ ) │
│ 552 │ │ │ else: │
│ ❱ 553 │ │ │ │ outputs = layer( │
│ 554 │ │ │ │ │ hidden_states, │
│ 555 │ │ │ │ │ attention_mask=attention_mask, │
│ 556 │ │ │ │ │ position_ids=position_ids, │
│ │
│ ~/my_envlib/python3.9/site-packages/torch/nn/modules/module.py:1501 in │
│ _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: forward() got an unexpected keyword argument 'position_ids'
```
This is how I am wrapping DeepSpeed around the model:
```python
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
reduced_model_name = model_name.split("/")[-1]
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
world_size = torch.cuda.device_count() or 1  # `world_size` was undefined in the original snippet
model = deepspeed.init_inference(
model, mp_size=world_size, dtype=torch.float32, replace_with_kernel_inject=True
)
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import deepspeed
model_name = 'EleutherAI/gpt-neox-20b'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
reduced_model_name = model_name.split("/")[-1]
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
world_size = torch.cuda.device_count() or 1  # `world_size` was undefined in the original snippet
model = deepspeed.init_inference(
model, mp_size=world_size, dtype=torch.float32, replace_with_kernel_inject=True
)
model.to(device)
model.eval()
input_ids = tokenizer('The quick brown fox jumped over the lazy dog', return_tensors="pt").input_ids.to(
dtype=torch.long, device=device
)
with torch.no_grad():
generated_ids = model.generate(
input_ids,
max_new_tokens=50,
pad_token_id=tokenizer.eos_token_id,
)
preds = [
tokenizer.decode(
g, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
for g in generated_ids
]
```
### Expected behavior
There should be no difference whether I wrap `deepspeed` around the model or not.
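The mismatch can be sketched in miniature: generation forwards `position_ids` to each layer, but the kernel-injected replacement's `forward` does not accept that keyword. The classes below are toy stand-ins, not the real `transformers` or DeepSpeed modules:

```python
class HfLayer:
    """Toy stand-in for the stock GPTNeoX layer, which accepts position_ids."""
    def forward(self, hidden_states, attention_mask=None, position_ids=None):
        return hidden_states

class InjectedLayer:
    """Hypothetical kernel-injected replacement without a position_ids arg."""
    def forward(self, hidden_states, attention_mask=None):
        return hidden_states

kwargs = {"attention_mask": None, "position_ids": None}
HfLayer().forward("h", **kwargs)            # fine
try:
    InjectedLayer().forward("h", **kwargs)  # same call path after injection
except TypeError as err:
    print(err)  # ... unexpected keyword argument 'position_ids'
```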
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22758/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22757
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22757/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22757/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22757/events
|
https://github.com/huggingface/transformers/issues/22757
| 1,667,494,532
|
I_kwDOCUB6oc5jY_KE
| 22,757
|
Huge Num Epochs (9223372036854775807) when using Trainer API with streaming dataset
|
{
"login": "oonisim",
"id": 15814603,
"node_id": "MDQ6VXNlcjE1ODE0NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/15814603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oonisim",
"html_url": "https://github.com/oonisim",
"followers_url": "https://api.github.com/users/oonisim/followers",
"following_url": "https://api.github.com/users/oonisim/following{/other_user}",
"gists_url": "https://api.github.com/users/oonisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oonisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oonisim/subscriptions",
"organizations_url": "https://api.github.com/users/oonisim/orgs",
"repos_url": "https://api.github.com/users/oonisim/repos",
"events_url": "https://api.github.com/users/oonisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/oonisim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"That's because the dataset you are using does not have a length, so the Trainer sets the number of epochs to a very high number to make sure it does the number of steps you are asking for.",
"@sgugger , thanks for the explanation. \r\n\r\nMay I suggest updating the document adding the Trainer behavior and requirements for streaming dataset e.g. to use max_steps and what value to set. Otherwise users may keep raising questions on max_steps (there have been at least 3 questions in forum) and epochs? \r\n\r\nI am afraid otherwise you may need to spend your time for each time we raise it.\r\n\r\nCurrently [Datasets - Stream](https://huggingface.co/docs/datasets/stream) and [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer#trainer) documents have no such information as far as I looked at (please correct if there is).",
"We welcome any PR making the documentation better :-) "
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
Running on SageMaker Studio g4dn 2xlarge.
```
!cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
```
```
!transformers-cli env
- `transformers` version: 4.28.0
- Platform: Linux-4.14.309-231.529.amzn2.x86_64-x86_64-with-debian-10.6
- Python version: 3.7.10
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: <fill in>
```
```
!nvidia-smi
Fri Apr 14 04:32:30 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:1E.0 Off | 0 |
| N/A 32C P0 25W / 70W | 13072MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
# Background
Fine-tune the BLOOM model for summarization.
- Model: [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m)
- Task: Summarization (using PromptSource with ```input_ids``` set to the tokenized text and ```labels``` set to the tokenized summary).
- Framework: Pytorch
- Training: Trainer API
- Dataset: [xsum](https://huggingface.co/datasets/xsum)
# Problem
When using a streaming Huggingface dataset, the Trainer API shows a huge ```Num Epochs = 9,223,372,036,854,775,807```.
```
trainer.train()
-----
***** Running training *****
Num examples = 6,144
Num Epochs = 9,223,372,036,854,775,807 <-----
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Gradient Accumulation steps = 1
Total optimization steps = 6,144
Number of trainable parameters = 559,214,592
```
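The displayed value is Python's `sys.maxsize` (`2**63 - 1` on 64-bit builds), which the Trainer uses as a sentinel epoch count when the dataset exposes no length. A quick check of the number itself (a sketch, not Trainer code):

```python
import sys

# 9,223,372,036,854,775,807 is exactly 2**63 - 1, which is sys.maxsize on
# 64-bit Python builds -- the sentinel reported as "Num Epochs" when the
# streaming dataset has no __len__.
sentinel = 9223372036854775807
is_sentinel = sentinel == 2**63 - 1
matches_maxsize = sentinel == sys.maxsize  # True on 64-bit platforms
```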
The ```TrainingArguments``` used:
```
DATASET_STREAMING: bool = True
NUM_EPOCHS: int = 3
DATASET_TRAIN_NUM_SELECT: int = 2048
MAX_STEPS: int = NUM_EPOCHS * DATASET_TRAIN_NUM_SELECT if DATASET_STREAMING else -1
training_args = TrainingArguments(
output_dir="bloom_finetuned",
max_steps=MAX_STEPS, # <--- 2048 * 3
num_train_epochs=NUM_EPOCHS,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
learning_rate=2e-5,
weight_decay=0.01,
no_cuda=False,
)
```
When not using streaming ```DATASET_STREAMING=False``` as in the code, the ```Num Epochs``` is displayed as expected.
```
***** Running training *****
Num examples = 2,048
Num Epochs = 3
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Gradient Accumulation steps = 1
Total optimization steps = 6,144
Number of trainable parameters = 559,214,592
```
### Who can help?
trainer: @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the code below
```
! pip install torch transformers datasets evaluate scikit-learn rouge rouge-score promptsource --quiet
```
```
import multiprocessing
import re
from typing import (
List,
Dict,
Callable,
)
import evaluate
import numpy as np
from datasets import (
load_dataset,
get_dataset_split_names
)
from promptsource.templates import (
DatasetTemplates,
Template
)
from transformers import (
AutoTokenizer,
DataCollatorWithPadding,
DataCollatorForSeq2Seq,
BloomForCausalLM,
TrainingArguments,
Trainer
)
## Huggingface Datasets
DATASET_NAME: str = "xsum"
DATASET_STREAMING: bool = True # If using Dataset streaming
DATASET_TRAIN_NUM_SELECT: int = 2048 # Number of rows to use for training
DATASET_VALIDATE_NUM_SELECT: int = 128
# Huggingface Tokenizer (BLOOM default token length is 2048)
MAX_TOKEN_LENGTH: int = 512 # Max token length to avoid out of memory
PER_DEVICE_BATCH_SIZE: int = 1 # GPU batch size
# Huggingface Model
MODEL = "bigscience/bloom-560m"
# Training
NUM_EPOCHS: int = 3
MAX_STEPS: int = NUM_EPOCHS * DATASET_TRAIN_NUM_SELECT if DATASET_STREAMING else -1
train = load_dataset("xsum", split="train", streaming=DATASET_STREAMING)
prompt_templates = DatasetTemplates( dataset_name=DATASET_NAME)
template: Template = prompt_templates['summarize_DOC']
# # Preprocess
tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
def get_convert_to_request_response(template: Template) -> Callable:
def _convert_to_prompt_response(example: Dict[str, str]) -> Dict[str, str]:
"""Generate prompt, response as a dictionary:
{
"prompt": "Summarize: ...",
"response": "..."
}
NOTE: DO NOT use with the dataset map function (batched=True). Use batched=False.
Args:
example: single {document, summary} pair to be able to apply template
Returns: a dictionary of prompt and response
"""
# assert isinstance(example, dict), f"expected dict but {type(example)}.\n{example}"
assert isinstance(example['document'], str), f"expected str but {type(example['document'])}."
prompt, response = template.apply(example=example, truncate=False)
return {
"prompt": re.sub(r'[\s\'\"]+', ' ', prompt),
"response": re.sub(r'[\s\'\"]+', ' ', response)
}
return _convert_to_prompt_response
convert_to_request_response: Callable = get_convert_to_request_response(template=template)
def tokenize_prompt_response(examples):
"""Generate the model inputs in the dictionary with format:
{
"input_ids": List[int],
"attention_mask": List[int]",
"labels": List[int]
}
Note: Huggingface dataset map(batched=True, batch_size=n) merges the values of
n dictionaries into the values of each key. If you have n instances of {"key": "v"}, then
you will get {"key": ["v", "v", "v", ...]}.
Args:
examples: a dictionary of format {
"prompt": [prompt+],
"response": [response+]
} where + means more than one instance because of Dataset.map(batched=True)
"""
inputs: Dict[str, List[int]] = tokenizer(
text_target=examples["prompt"],
max_length=MAX_TOKEN_LENGTH,
truncation=True
)
labels: Dict[str, List[int]] = tokenizer(
text_target=examples["response"],
max_length=MAX_TOKEN_LENGTH,
truncation=True,
padding='max_length',
)
inputs["labels"] = labels["input_ids"]
return inputs
remove_column_names: List[str] = list(train.features.keys())
tokenized_train = train.map(
function=convert_to_request_response,
batched=False,
batch_size=2048,
drop_last_batch=False,
remove_columns=remove_column_names,
).map(
function=tokenize_prompt_response,
batched=True,
batch_size=32,
drop_last_batch=True,
remove_columns=['prompt', 'response']
).shuffle(
seed=42
).with_format(
"torch"
)
if DATASET_STREAMING:
tokenized_train = tokenized_train.take(DATASET_TRAIN_NUM_SELECT)
else:
tokenized_train = tokenized_train.select(
indices=range(DATASET_TRAIN_NUM_SELECT)
)
del train
tokenized_validation = load_dataset(
path="xsum",
split="validation",
streaming=DATASET_STREAMING
).map(
function=convert_to_request_response,
batched=False,
batch_size=2048,
drop_last_batch=False,
remove_columns=remove_column_names,
).map(
function=tokenize_prompt_response,
batched=True,
batch_size=32,
drop_last_batch=True,
remove_columns=['prompt', 'response']
).with_format(
"torch"
)
if DATASET_STREAMING:
tokenized_validation = tokenized_validation.take(DATASET_TRAIN_NUM_SELECT)
else:
tokenized_validation = tokenized_validation.select(
indices=range(DATASET_TRAIN_NUM_SELECT)
)
# # Training
model = BloomForCausalLM.from_pretrained(MODEL)
model.cuda()
def predict(prompt: str) -> str:
inputs = tokenizer(prompt, return_tensors='pt')
print(inputs["input_ids"].shape)
response_tokens = model.generate(
inputs["input_ids"].cuda(),
max_new_tokens=1,
do_sample=False,
top_k=50,
top_p=0.9
)[0]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
return response
# DataCollatorWithPadding does not pad 'labels' which causes an error at train()
# https://stackoverflow.com/a/74228547/4281353
data_collator = DataCollatorWithPadding(
tokenizer=tokenizer,
padding='max_length',
pad_to_multiple_of=8,
max_length=MAX_TOKEN_LENGTH,
return_tensors='pt'
)
# ## Evaluation
rouge = evaluate.load("rouge")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
result["gen_len"] = np.mean(prediction_lens)
return {k: round(v, 4) for k, v in result.items()}
# ## Trainer API
training_args = TrainingArguments(
output_dir="bloom_finetuned",
max_steps=MAX_STEPS,
num_train_epochs=NUM_EPOCHS,
per_device_train_batch_size=PER_DEVICE_BATCH_SIZE,
per_device_eval_batch_size=PER_DEVICE_BATCH_SIZE,
learning_rate=2e-5,
weight_decay=0.01,
fp16=True,
no_cuda=False,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit=3,
log_level="debug",
disable_tqdm=False,
push_to_hub=False,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_validation,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
### Expected behavior
Get the intended 3 epochs, or an explanation of the huge ```Num Epochs``` value (9223372036854775807).
When not using streaming ```DATASET_STREAMING=False``` as in the code, the ```Num Epochs``` is displayed as expected.
```
***** Running training *****
Num examples = 2,048
Num Epochs = 3
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Gradient Accumulation steps = 1
Total optimization steps = 6,144
Number of trainable parameters = 559,214,592
```
# Related
* [TrainingArguments class - max_steps formula when using streaming dataset](https://discuss.huggingface.co/t/trainingarguments-class-max-steps-formula-when-using-streaming-dataset/36531)
* [Streaming Dataset of Sequence Length 2048](https://discuss.huggingface.co/t/streaming-dataset-of-sequence-length-2048/17649)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22757/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22756
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22756/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22756/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22756/events
|
https://github.com/huggingface/transformers/issues/22756
| 1,667,461,041
|
I_kwDOCUB6oc5jY2-x
| 22,756
|
TypeError: export() got an unexpected keyword argument 'preprocessor'
|
{
"login": "susht3",
"id": 12723964,
"node_id": "MDQ6VXNlcjEyNzIzOTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/12723964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susht3",
"html_url": "https://github.com/susht3",
"followers_url": "https://api.github.com/users/susht3/followers",
"following_url": "https://api.github.com/users/susht3/following{/other_user}",
"gists_url": "https://api.github.com/users/susht3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susht3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susht3/subscriptions",
"organizations_url": "https://api.github.com/users/susht3/orgs",
"repos_url": "https://api.github.com/users/susht3/repos",
"events_url": "https://api.github.com/users/susht3/events{/privacy}",
"received_events_url": "https://api.github.com/users/susht3/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"@susht3, could you please follow the issue template and provide a minimal reproducible code snippet, information about the running environment (found using `transformers-cli env`) and a full trackback of the error? "
] | 1,681
| 1,681
| null |
NONE
| null |
### Model description
```
onnx_inputs, onnx_outputs = export(
    preprocessor=tokenizer, model=model, config=onnx_config, opset=10, output=onnx_model_path
)
```
I got the error: `TypeError: export() got an unexpected keyword argument 'preprocessor'`
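This error usually means the installed version predates a parameter rename (an assumption about the mismatch: older `transformers.onnx.export` accepted `tokenizer`, newer versions accept `preprocessor`). A generic, version-agnostic shim that passes the value under whichever name the installed function accepts (the `export` stub below is hypothetical, standing in for the real one):

```python
import inspect

def call_with_supported_kwarg(fn, value, new_name="preprocessor", old_name="tokenizer", **kwargs):
    """Pass `value` under whichever of the two parameter names `fn` actually accepts."""
    accepted = inspect.signature(fn).parameters
    name = new_name if new_name in accepted else old_name
    return fn(**{name: value}, **kwargs)

# Hypothetical stand-in for an older export() that only knows `tokenizer`.
def export(tokenizer, model, opset=10):
    return (tokenizer, model, opset)

result = call_with_supported_kwarg(export, "my-tokenizer", model="my-model")
```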
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22756/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/22755
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22755/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22755/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22755/events
|
https://github.com/huggingface/transformers/issues/22755
| 1,667,408,984
|
I_kwDOCUB6oc5jYqRY
| 22,755
|
no transformers version 4.29.0.dev0
|
{
"login": "skye95git",
"id": 41561936,
"node_id": "MDQ6VXNlcjQxNTYxOTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/41561936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skye95git",
"html_url": "https://github.com/skye95git",
"followers_url": "https://api.github.com/users/skye95git/followers",
"following_url": "https://api.github.com/users/skye95git/following{/other_user}",
"gists_url": "https://api.github.com/users/skye95git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skye95git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skye95git/subscriptions",
"organizations_url": "https://api.github.com/users/skye95git/orgs",
"repos_url": "https://api.github.com/users/skye95git/repos",
"events_url": "https://api.github.com/users/skye95git/events{/privacy}",
"received_events_url": "https://api.github.com/users/skye95git/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"`pip install git+https://github.com/huggingface/transformers`\r\n\r\nThe above will install the git/head version which is 4.29.0.dev0.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
https://github.com/huggingface/transformers/blob/bfb3925fcbbb4fda83e023448e36e4d6c6f16a4c/examples/pytorch/language-modeling/run_mlm.py#L56
When I try to run the script `run_mlm.py`, I get an error: ImportError: This example requires a source install from HuggingFace Transformers (see `https://huggingface.co/transformers/installation.html#installing-from-source`), but the version found is 4.28.0.
Check out https://huggingface.co/transformers/examples.html for the examples corresponding to other versions of HuggingFace Transformers.
There seems to be no released transformers version 4.29.0.dev0.
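The error comes from a minimum-version gate at the top of the example script, which a source install (`pip install git+https://github.com/huggingface/transformers`) satisfies. A simplified sketch of such a check (an assumption: the real `transformers.utils.check_min_version` is more thorough, e.g. about dev/pre-release ordering):

```python
def _release_tuple(version):
    """'4.29.0.dev0' -> (4, 29, 0): keep only the leading numeric parts."""
    parts = []
    for piece in version.split("."):
        if not piece.isdigit():
            break  # stop at the first non-numeric part, e.g. 'dev0'
        parts.append(int(piece))
    return tuple(parts)

def meets_min_version(installed, required):
    return _release_tuple(installed) >= _release_tuple(required)

# 4.28.0 (the released version) fails a 4.29.0.dev0 requirement,
# while a source install reporting 4.29.0.dev0 passes it.
ok_release = meets_min_version("4.28.0", "4.29.0.dev0")
ok_source = meets_min_version("4.29.0.dev0", "4.29.0.dev0")
```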
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22755/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22754
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22754/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22754/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22754/events
|
https://github.com/huggingface/transformers/pull/22754
| 1,667,369,874
|
PR_kwDOCUB6oc5OSBzy
| 22,754
|
🌐 [i18n-KO] translate `create_a_model` doc to Korean
|
{
"login": "gabrielwithappy",
"id": 102908949,
"node_id": "U_kgDOBiJEFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102908949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabrielwithappy",
"html_url": "https://github.com/gabrielwithappy",
"followers_url": "https://api.github.com/users/gabrielwithappy/followers",
"following_url": "https://api.github.com/users/gabrielwithappy/following{/other_user}",
"gists_url": "https://api.github.com/users/gabrielwithappy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabrielwithappy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabrielwithappy/subscriptions",
"organizations_url": "https://api.github.com/users/gabrielwithappy/orgs",
"repos_url": "https://api.github.com/users/gabrielwithappy/repos",
"events_url": "https://api.github.com/users/gabrielwithappy/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabrielwithappy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Team PseudoLab, may you please review this PR?\r\n@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd\r\n",
"I've checked the review feedback. \r\nI'll verify and update again~",
"@sgugger, @ArthurZucker, @eunseojo May you please review this PR? ",
"There are still a couple of comments left to address @gabrielwithappy ",
"@sgugger oh, I just found! I will chech and change to draft before ready-to-review \r\n:-)",
"@sgugger, @ArthurZucker, @eunseojo \r\nMay you please review this PR?\r\nI checked anchors of titles in the doc and fixed all reviews \r\nBRs. "
] | 1,681
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `create_a_model.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? -->
<!-- Team PseudoLab, may you please review this PR?
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd -->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22754/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22754",
"html_url": "https://github.com/huggingface/transformers/pull/22754",
"diff_url": "https://github.com/huggingface/transformers/pull/22754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22754.patch",
"merged_at": 1682355739000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22753
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22753/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22753/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22753/events
|
https://github.com/huggingface/transformers/issues/22753
| 1,667,255,193
|
I_kwDOCUB6oc5jYEuZ
| 22,753
|
Add callback methods to the trainer which are called when the loop starts and when step skipping ends
|
{
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"It sounds like an interesting addition, but the callbacks don't have access to the training dataset, so I'm not sure how you would use that for your use case.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### Feature request
Add callback methods to the training loop that are called when the loop starts and when step skipping ends.
### Motivation
When using an iterable dataset, all preprocessing is done on the fly when a batch is requested by the dataloader. As a result, when resuming training from a checkpoint, step skipping can be extremely slow because all the preprocessing is done for nothing.
If we had callbacks that signal when step skipping starts and ends, we could for instance set an environment variable that all our processing checks, signaling that nothing should be done with the current batch.
The on_step_begin callback is only called once the first useful batch is loaded, which is too late to signal that the processing should actually be performed.
I'm also open to propositions if you know another way of rapidly skipping batches when using an iterable dataset.
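The environment-variable idea can be sketched with a toy iterable that short-circuits its preprocessing while a skip flag is set (hypothetical names; not the proposed Trainer API):

```python
import os

SKIP_FLAG = "TRAINER_FAST_SKIP"  # hypothetical flag a skip-start callback would set

class LazyPreprocessedDataset:
    """Toy iterable dataset that skips expensive work while the flag is set."""
    def __init__(self, raw_examples):
        self.raw_examples = raw_examples

    def __iter__(self):
        for example in self.raw_examples:
            if os.environ.get(SKIP_FLAG) == "1":
                yield None          # cheap placeholder while steps are being skipped
            else:
                yield example * 2   # stand-in for real tokenization/processing

os.environ[SKIP_FLAG] = "1"          # would be set by an on_skip_begin callback
skipped = list(LazyPreprocessedDataset([1, 2, 3]))
os.environ[SKIP_FLAG] = "0"          # would be cleared by an on_skip_end callback
processed = list(LazyPreprocessedDataset([1, 2, 3]))
```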
### Your contribution
I could contribute to a PR if needed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22753/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22752
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22752/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22752/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22752/events
|
https://github.com/huggingface/transformers/pull/22752
| 1,667,042,024
|
PR_kwDOCUB6oc5OQ7ZU
| 22,752
|
Introduce `PartialState` as the device handler in the `Trainer`
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 2107554019,
"node_id": "MDU6TGFiZWwyMTA3NTU0MDE5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models",
"name": "Distributed Training / Models",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@pacman100 I'm starting small with what is minimally needed with the API. E.g. the AcceleratorState isn't needed until we get into parts such as mixed precision. The device handling can be done separately altogether as it doesn't need to rely on an accelerator object the user passes in\r\n\r\nThis may eventually be the `AcceleratorState`, but for now just starting with the `PartialState` seemed okay to me. ",
"PR is good for final review, tested on single + multi gpu + deepspeed and seemed to work fine. Ignore the `examples_tensorflow` failure, as that's actually a good failure (if one could exist)",
"@muellerzr \r\n\r\nI don't know if here is a right place to report but\r\n\r\nI think this merge causing error in Seq2SeqTrainingArguments.\r\n\r\nbefore this merge, I was able to run this code\r\n\r\n```\r\nfrom transformers import Seq2SeqTrainingArguments\r\n\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./train_test\", # change to a repo name of your choice\r\n per_device_train_batch_size=8,\r\n gradient_accumulation_steps=2, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=5,\r\n max_steps=40,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=10,\r\n eval_steps=10,\r\n logging_steps=25,\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=False,\r\n)\r\n```\r\n\r\nError messages\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNameError Traceback (most recent call last)\r\n[<ipython-input-25-7693116680cc>](https://localhost:8080/#) in <cell line: 3>()\r\n 1 from transformers import Seq2SeqTrainingArguments\r\n 2 \r\n----> 3 training_args = Seq2SeqTrainingArguments(\r\n 4 output_dir=\"./whisper-large-ja-7\", # change to a repo name of your choice\r\n 5 per_device_train_batch_size=8,\r\n\r\nusr/local/lib/python3.9/dist-packages/transformers/training_args_seq2seq.py in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, eval_delay, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, 
warmup_steps, log_level, log_level_replica, log_on_each_node, logging_dir, logging_strategy, logging_first_step, logging_steps, logging_nan_inf_filter, save_strategy, save_steps, save_total_limit, save_safetensors, save_on_each_node, no_cuda, use_mps_device, seed, data_seed, jit_mode_eval, use_ipex, bf16, fp16, fp16_opt_level, half_precision_backend, bf16_full_eval, fp16_full_eval, tf32, local_rank, xpu_backend, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, sharded_ddp, fsdp, fsdp_min_num_params, fsdp_config, fsdp_transformer_layer_cls_to_wrap, deepspeed, label_smoothing_factor, optim, optim_args, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, ddp_bucket_cap_mb, dataloader_pin_memory, skip_mem...\r\n\r\n[/usr/local/lib/python3.9/dist-packages/transformers/training_args.py](https://localhost:8080/#) in __post_init__(self)\r\n 1253 self.framework == \"pt\"\r\n 1254 and is_torch_available()\r\n-> 1255 and (self.device.type != \"cuda\")\r\n 1256 and (get_xla_device_type(self.device) != \"GPU\")\r\n 1257 and (self.fp16 or self.fp16_full_eval)\r\n\r\n[/usr/local/lib/python3.9/dist-packages/transformers/training_args.py](https://localhost:8080/#) in device(self)\r\n 1613 \"\"\"\r\n 1614 requires_backends(self, [\"torch\"])\r\n-> 1615 return self._setup_devices\r\n 1616 \r\n 1617 @property\r\n\r\n[/usr/local/lib/python3.9/dist-packages/transformers/utils/generic.py](https://localhost:8080/#) in __get__(self, obj, objtype)\r\n 52 cached = getattr(obj, attr, None)\r\n 53 if cached is None:\r\n---> 54 cached = self.fget(obj)\r\n 55 setattr(obj, attr, cached)\r\n 56 return cached\r\n\r\n[/usr/local/lib/python3.9/dist-packages/transformers/training_args.py](https://localhost:8080/#) in _setup_devices(self)\r\n 1547 device = 
self.distributed_state.device\r\n 1548 else:\r\n-> 1549 self.distributed_state = PartialState(backend=self.xpu_backend)\r\n 1550 device = self.distributed_state.device\r\n 1551 self._n_gpu = 1\r\n\r\nNameError: name 'PartialState' is not defined\r\n```",
"@rennn2002 Could you try updating your version of accelerate i.e. `pip install --upgrade accelerate`? "
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR is the start of the `Trainer` integration of [Accelerate](https://github.com/huggingface/accelerate)
The integration will follow multiple stages to ensure small changes happen iteratively. This first one simply changes the device handler/setter to be the `PartialState` in `Accelerate` and nothing more. In a follow-up PR, I will start to include more utilities utilizing it.
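For reference, `PartialState` keeps one process-wide state that every instance shares, so device setup happens once per process. A simplified, dependency-free sketch of that shared-state (borg) behavior, not the real Accelerate implementation:

```python
class PartialStateSketch:
    """All instances alias one shared state dict, so setup runs only once."""
    _shared_state = {}

    def __init__(self, device="cpu"):
        self.__dict__ = self._shared_state        # every instance shares the same dict
        if "device" not in self._shared_state:    # first construction wins
            self.device = device

first = PartialStateSketch(device="cuda")
second = PartialStateSketch()                     # sees the state set up by `first`
```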
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22752/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22752",
"html_url": "https://github.com/huggingface/transformers/pull/22752",
"diff_url": "https://github.com/huggingface/transformers/pull/22752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22752.patch",
"merged_at": 1681758586000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22751
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22751/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22751/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22751/events
|
https://github.com/huggingface/transformers/issues/22751
| 1,667,025,967
|
I_kwDOCUB6oc5jXMwv
| 22,751
|
DeepSpeed integration not respecting `--warmup_steps` in multi-gpu setting
|
{
"login": "cchen-dialpad",
"id": 47165889,
"node_id": "MDQ6VXNlcjQ3MTY1ODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/47165889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cchen-dialpad",
"html_url": "https://github.com/cchen-dialpad",
"followers_url": "https://api.github.com/users/cchen-dialpad/followers",
"following_url": "https://api.github.com/users/cchen-dialpad/following{/other_user}",
"gists_url": "https://api.github.com/users/cchen-dialpad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cchen-dialpad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cchen-dialpad/subscriptions",
"organizations_url": "https://api.github.com/users/cchen-dialpad/orgs",
"repos_url": "https://api.github.com/users/cchen-dialpad/repos",
"events_url": "https://api.github.com/users/cchen-dialpad/events{/privacy}",
"received_events_url": "https://api.github.com/users/cchen-dialpad/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thank you for the detailed report and an easy way to reproduce it, @cchen-dialpad \r\n\r\nI'm very much doubting this has anything to do with deepspeed, since you're not using its LR Scheduler, but the default `get_linear_schedule_with_warmup` in HF Trainer. That is you should see the same behavior w/o using deepspeed.\r\n\r\nNow I recommend adding: `--logging_steps 1 --logging_strategy steps` and checking the actual LR reports after each step, rather than the graph which most likely extrapolates.\r\n\r\nCan you try with the latest `transformers`? I can't repro your report with it. I lowered the number of warm up steps to 10 and you can see that it gets there in 10 steps:\r\n\r\n```\r\n$ deepspeed --num_gpus=4 examples/pytorch/language-modeling/run_clm.py --logging_steps 1 --logging_strategy steps --report_to none --evaluation_strategy steps --save_strategy steps --save_steps 10000 --max_steps 200 --num_train_epochs 2 --learning_rate 0.00006 --warmup_steps 10 --model_name_or_path gpt2-medium --do_train --do_eval --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --output_dir ./tmp/gpt2_medium-test-mlflow --overwrite_output_dir --per_device_train_batch_size=1 --per_device_eval_batch_size=1 --max_train_samples=200 --max_eval_samples=10 --evaluation_strategy no --deepspeed ds.json\r\n\r\n{'loss': 3.3841, 'learning_rate': 6e-06, 'epoch': 0.02} \r\n{'loss': 3.1834, 'learning_rate': 1.2e-05, 'epoch': 0.04} \r\n{'loss': 3.0764, 'learning_rate': 1.8e-05, 'epoch': 0.06} \r\n{'loss': 3.1638, 'learning_rate': 2.4e-05, 'epoch': 0.08} \r\n{'loss': 3.0197, 'learning_rate': 3e-05, 'epoch': 0.1} \r\n{'loss': 3.3902, 'learning_rate': 3.6e-05, 'epoch': 0.12} \r\n{'loss': 3.0261, 'learning_rate': 4.2e-05, 'epoch': 0.14} \r\n{'loss': 3.1501, 'learning_rate': 4.8e-05, 'epoch': 0.16} \r\n{'loss': 3.1557, 'learning_rate': 5.4000000000000005e-05, 'epoch': 0.18} \r\n{'loss': 3.1952, 'learning_rate': 6e-05, 'epoch': 0.2} \r\n```\r\n\r\nAlso fyi if you were to run under fp16, 
you could get optimizer skipping steps while it's tuning up its mixed precision scale factor, so that could also contribute to taking more steps than requested. But you're not using fp16 so this shouldn't be the cause. Just sharing this as FYI.",
"@stas00 Ah, it's working properly with transformers `v4.28.0` and `v4.27.0`. Not sure how it was fixed from the release info, but thanks for checking! Closing now.",
"> Also fyi if you were to run under fp16, you could get optimizer skipping steps while it's tuning up its mixed precision scale factor, so that could also contribute to taking more steps than requested. But you're not using fp16 so this shouldn't be the cause. Just sharing this as FYI.\r\n\r\nThanks for sharing, I believe I will run into this later :)",
"same issues with version 4.29.dev0, the lr is increasing at the first several steps while configured with a warmup scheduler in deepspeed config."
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-1030-gcp-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.4
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Deepspeed version: 0.8.3
### Who can help?
@stas00 @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. DeepSpeed config file `ds_config_zero2_no_optim.json` (adapted from https://huggingface.co/docs/transformers/main_classes/deepspeed#deepspeed-trainer-integration)
```
{
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": "auto",
"contiguous_gradients": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
2. DeepSpeed launch the training script with 4 GPUs (the script is copied from https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py)
```
deepspeed --num_gpus=4 run_clm.py \
--deepspeed $HOME/projects/ds_config_zero2_no_optim.json \
--report_to mlflow \
--evaluation_strategy steps \
--logging_strategy steps \
--logging_steps 20 \
--save_strategy steps \
--save_steps 10000 \
--max_steps 2560 \
--num_train_epochs 2 \
--learning_rate 0.00006 \
--warmup_steps 100 \
--model_name_or_path gpt2-medium \
--do_train \
--do_eval \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--output_dir ./tmp/gpt2_medium-test-mlflow \
--overwrite_output_dir \
--per_device_train_batch_size=1 \
--per_device_eval_batch_size=1 \
--max_train_samples=2560 \
--max_eval_samples=10
```
3. Note the `--warmup_steps 100` and `--learning_rate 0.00006` flags: by default, the learning rate should increase linearly to 6e-5 by step 100. But the learning rate curve shows that it took 360 steps, and the slope is not a straight line.
<img width="1036" alt="image" src="https://user-images.githubusercontent.com/47165889/231864162-12f80df5-2827-4bb9-b706-aac66eae5a47.png">
4. Interestingly, if you deepspeed launch with just a single GPU `--num_gpus=1`, the curve seems correct
<img width="927" alt="image" src="https://user-images.githubusercontent.com/47165889/231865235-3d564e5d-4d60-4c5f-ad75-588e35283789.png">
5. The above model is `gpt2-medium`, but training other models such as `gpt2` (with 2 GPUs) shows similar behavior. For example, note below that `gpt2` at step 100 has a learning rate of about `5.82e-05`:
<img width="945" alt="image" src="https://user-images.githubusercontent.com/47165889/231867661-232a017b-d192-490d-822f-0de8d269ba4d.png">
### Expected behavior
The default scheduler should warm up the learning rate linearly, reaching the specified maximum learning rate at the specified `warmup_steps`.
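For reference, the Trainer's default schedule (`get_linear_schedule_with_warmup`) ramps linearly to the peak LR over the warmup steps, then decays linearly to zero. A minimal standalone sketch of that formula (an illustration of the expected curve, not the library code itself):

```python
def linear_warmup_lr(step, max_lr=6e-5, warmup_steps=100, total_steps=2560):
    """Linear warmup to max_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# With the flags above (--learning_rate 0.00006 --warmup_steps 100),
# the LR should reach its peak exactly at step 100:
print(linear_warmup_lr(100))  # 6e-05
```

Any scheduler matching this formula would show a straight warmup line hitting 6e-5 at step 100, which is what the single-GPU run produces but the 4-GPU run does not.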
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22751/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22750
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22750/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22750/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22750/events
|
https://github.com/huggingface/transformers/pull/22750
| 1,666,992,570
|
PR_kwDOCUB6oc5OQxDZ
| 22,750
|
Revert (for now) the change on `Deta` in #22437
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
See #22656 for some discussion. Basically, the loading of checkpoints for this model is currently not working correctly, and we do want to avoid this situation as early as possible.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22750/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22750",
"html_url": "https://github.com/huggingface/transformers/pull/22750",
"diff_url": "https://github.com/huggingface/transformers/pull/22750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22750.patch",
"merged_at": 1681414350000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22749
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22749/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22749/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22749/events
|
https://github.com/huggingface/transformers/pull/22749
| 1,666,939,338
|
PR_kwDOCUB6oc5OQl0d
| 22,749
|
Allow fine-tune wav2vec2 with local path custom dataset
|
{
"login": "ProgramadorArtificial",
"id": 130674366,
"node_id": "U_kgDOB8nuvg",
"avatar_url": "https://avatars.githubusercontent.com/u/130674366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ProgramadorArtificial",
"html_url": "https://github.com/ProgramadorArtificial",
"followers_url": "https://api.github.com/users/ProgramadorArtificial/followers",
"following_url": "https://api.github.com/users/ProgramadorArtificial/following{/other_user}",
"gists_url": "https://api.github.com/users/ProgramadorArtificial/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ProgramadorArtificial/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ProgramadorArtificial/subscriptions",
"organizations_url": "https://api.github.com/users/ProgramadorArtificial/orgs",
"repos_url": "https://api.github.com/users/ProgramadorArtificial/repos",
"events_url": "https://api.github.com/users/ProgramadorArtificial/events{/privacy}",
"received_events_url": "https://api.github.com/users/ProgramadorArtificial/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@ProgramadorArtificial Thanks for taking the time to open this PR. \r\n\r\nThe example scripts are just examples and won't work out-of-the-box for all problems. As you've done here, a few lines of code might be needed to adapt to different use cases. This change is very specific. As such, it's not an addition we'll be merging in. \r\n",
"Hey @ProgramadorArtificial! In this case, it might be easier to first load your audio data as a HF datasets object (here's an excellent guide on how to do so: https://huggingface.co/docs/datasets/audio_load). You can then save this data locally and/or push it to the Hub:\r\n```python\r\n# set a save path and HF Hub dataset id\r\nLOCAL_DATASET_DIR = ...\r\nHF_DATASET_ID = ...\r\n\r\ndataset.save_to_disk(LOCAL_DATASET_DIR)\r\n# and/or push to hub\r\ndataset.push_to_hub(HF_DATASET_ID)\r\n```\r\nYou'll then be able to use the training script directly with your dataset (either from the local path or loading from the Hub, in the same way that we do for Common Voice in the example).",
"Hey @ProgramadorArtificial - just re-iterating that the examples script is assumed to work with a HF dataset that has an audio column already present (i.e. one that has an [`Audio` feature](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Audio)). If your custom dataset does not have the audio files loaded up, you'll need to perform a round of pre-processing to get your dataset into the right format for this script (expected to have two columns: \"audio\" and \"text\")."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
# What does this PR do?
In the examples for fine-tuning Wav2Vec2 it is possible to use a HuggingFace dataset that is ready to be trained. But when we want to use a local/custom dataset, it is easier to create a .tsv file that points to the audio file paths, which does not work in the current version and returns the error: "AttributeError: 'Value' object has no attribute 'sampling_rate'".
So this PR adds code to check whether the "audio_column_name" column holds audio data or a path (string); if it is a string, the file is loaded and converted to audio format.
Below is an example of a .tsv file that does not work with the current version and works with this PR:
path | sentence
-- | --
data/clips/common_voice_tr_21921195.mp3 | Pirin sözleri hâlâ yankılanıyor.
data/clips/common_voice_tr_21921199.mp3 | Müze Gecesi beş yıldır her yıl düzenleniyor.
data/clips/common_voice_tr_21921206.mp3 | Yunanistan'ın Selanik kenti de ilk ona girdi.
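The check described above can be sketched roughly as follows. This is a hypothetical helper, not the actual diff; in the real script the string branch is where the file would be decoded (e.g. via `datasets.Audio` or torchaudio) and resampled:

```python
def ensure_audio(sample, audio_column="path"):
    """If the column holds a plain path string, wrap it into the dict
    shape expected downstream; pass real audio dicts through untouched."""
    value = sample[audio_column]
    if isinstance(value, str):
        # Placeholder: the real code would load the waveform here and
        # fill in "array" and "sampling_rate" from the decoded file.
        sample[audio_column] = {"path": value, "array": None, "sampling_rate": None}
    return sample

row = {"path": "data/clips/common_voice_tr_21921195.mp3",
       "sentence": "Pirin sözleri hâlâ yankılanıyor."}
print(ensure_audio(row)["path"]["path"])  # data/clips/common_voice_tr_21921195.mp3
```

Samples that already carry a decoded audio dict (with `array` and `sampling_rate` keys) pass through unchanged, so the script keeps working for standard Hub datasets.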
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22749/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22749",
"html_url": "https://github.com/huggingface/transformers/pull/22749",
"diff_url": "https://github.com/huggingface/transformers/pull/22749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22749.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22748
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22748/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22748/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22748/events
|
https://github.com/huggingface/transformers/pull/22748
| 1,666,865,982
|
PR_kwDOCUB6oc5OQWiE
| 22,748
|
Generate: handle text conditioning with multimodal encoder-decoder models
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @younesbelkada @NielsRogge FYI -- this PR consolidates your recent changes regarding text conditioning on multimodal models. The next models should be easier to add :) ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot @gante! 🙏 "
] | 1,681
| 1,681
| 1,681
|
MEMBER
| null |
# What does this PR do?
Consolidates `decoder_input_ids` preparation changes in a single place, for all future multimodal encoder-decoder models on PT and TF.
In a nutshell, this PR generalizes the following use cases:
1. The user passes `decoder_input_ids`, but it is missing the BOS token (some tokenizers, like the T5 tokenizer, do not prepend a BOS token). In this case, a BOS token is prepended.
2. The user passes `input_ids`, but the encoder has no `input_ids` input. In this case, `input_ids` is handled just like `decoder_input_ids`.
Slow tests were run on T5, Pix2Struct, BLIP, and BLIP2.
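Use case 1 can be sketched as follows — a plain-Python illustration of the rule (made-up token ids, not the PR's actual code, which operates on tensors inside `generate`):

```python
def maybe_prepend_bos(decoder_input_ids, bos_token_id):
    """Prepend BOS if the user-supplied ids don't already start with it
    (e.g. the T5 tokenizer does not prepend a BOS token itself)."""
    if not decoder_input_ids or decoder_input_ids[0] != bos_token_id:
        return [bos_token_id] + decoder_input_ids
    return decoder_input_ids

print(maybe_prepend_bos([5, 6, 7], bos_token_id=0))  # [0, 5, 6, 7]
print(maybe_prepend_bos([0, 5, 6], bos_token_id=0))  # [0, 5, 6]
```

The second call shows the idempotent case: ids that already carry BOS are left untouched, so users who pass fully-formed `decoder_input_ids` see no change.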
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22748/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22748",
"html_url": "https://github.com/huggingface/transformers/pull/22748",
"diff_url": "https://github.com/huggingface/transformers/pull/22748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22748.patch",
"merged_at": 1681411874000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22747
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22747/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22747/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22747/events
|
https://github.com/huggingface/transformers/pull/22747
| 1,666,662,258
|
PR_kwDOCUB6oc5OPqEb
| 22,747
|
[trainer] update url
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
Fixes: https://github.com/huggingface/transformers/issues/22142
update the link to its stable version now that pt-2.0 is out.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22747/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22747",
"html_url": "https://github.com/huggingface/transformers/pull/22747",
"diff_url": "https://github.com/huggingface/transformers/pull/22747.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22747.patch",
"merged_at": 1681403035000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22746
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22746/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22746/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22746/events
|
https://github.com/huggingface/transformers/pull/22746
| 1,666,486,954
|
PR_kwDOCUB6oc5OPD1L
| 22,746
|
fix(llama): fix LlamaTokenzier
|
{
"login": "rockmagma02",
"id": 77961318,
"node_id": "MDQ6VXNlcjc3OTYxMzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/77961318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rockmagma02",
"html_url": "https://github.com/rockmagma02",
"followers_url": "https://api.github.com/users/rockmagma02/followers",
"following_url": "https://api.github.com/users/rockmagma02/following{/other_user}",
"gists_url": "https://api.github.com/users/rockmagma02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rockmagma02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rockmagma02/subscriptions",
"organizations_url": "https://api.github.com/users/rockmagma02/orgs",
"repos_url": "https://api.github.com/users/rockmagma02/repos",
"events_url": "https://api.github.com/users/rockmagma02/events{/privacy}",
"received_events_url": "https://api.github.com/users/rockmagma02/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your quick review, I have re-run the CI tests. 🤗 @amyeroberts @ArthurZucker "
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22742
- #22742
This PR removes an extra `sep` token in the sequence length calculation.
Ref:
https://github.com/huggingface/transformers/blob/7df1343292a9d75f1410cb37a99f423dcde15dae/src/transformers/models/llama/tokenization_llama.py#L178-L187
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22746/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22746",
"html_url": "https://github.com/huggingface/transformers/pull/22746",
"diff_url": "https://github.com/huggingface/transformers/pull/22746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22746.patch",
"merged_at": 1681406379000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22745
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22745/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22745/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22745/events
|
https://github.com/huggingface/transformers/pull/22745
| 1,666,471,500
|
PR_kwDOCUB6oc5OPAvJ
| 22,745
|
`DocumentQuestionAnsweringPipeline` only for fast ⚡ tokenizers
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22745). All of your documentation changes will be reflected on that endpoint."
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
`DocumentQuestionAnsweringPipeline` only works with fast tokenizers, so let's make this clear and explicit.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22745/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22745",
"html_url": "https://github.com/huggingface/transformers/pull/22745",
"diff_url": "https://github.com/huggingface/transformers/pull/22745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22745.patch",
"merged_at": 1681399380000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22744
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22744/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22744/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22744/events
|
https://github.com/huggingface/transformers/issues/22744
| 1,666,462,752
|
I_kwDOCUB6oc5jVDQg
| 22,744
|
Bug in Seq2SeqTrainer?
|
{
"login": "qmeeus",
"id": 25608944,
"node_id": "MDQ6VXNlcjI1NjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmeeus",
"html_url": "https://github.com/qmeeus",
"followers_url": "https://api.github.com/users/qmeeus/followers",
"following_url": "https://api.github.com/users/qmeeus/following{/other_user}",
"gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions",
"organizations_url": "https://api.github.com/users/qmeeus/orgs",
"repos_url": "https://api.github.com/users/qmeeus/repos",
"events_url": "https://api.github.com/users/qmeeus/events{/privacy}",
"received_events_url": "https://api.github.com/users/qmeeus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This has been fixed on main, so make sure to do a source install (or in a couple of hours you can upgrade to v4.28.0 once it's released).",
"Shame on me, I did not pull the latest changes\r\n\r\nThank you for your quick answer",
"No worries at all!"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### System Info
Transformers version: 4.28.0.dev0
Pytorch version: 2
### Who can help?
@sanchit-gandhi @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Finetune Whisper on multiGPU, using Seq2SeqTrainer
During generation step, I am getting an `AttributeError` ('DataParallel' object has no attribute 'generation_config') due to [this line of code](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py#L280). Note that `generation_config` is a correct attribute in the model.
I did finetune a modified version of Whisper on 2 GPUs in the past, but I did not experience the same error. I think that previously, generation was done on one GPU, even if the trainer was using data parallel. Not exactly sure if something has changed or if I did something wrong
The model is `openai/whisper-large-v2`
### Expected behavior
Either generation should not use `DataParallel`, or `Seq2SeqTrainer` should access `model.module.generation_config` when the model is an instance of `DataParallel`
Happy to write a PR if necessary, let me know
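For illustration, a minimal sketch of the unwrapping I mean (a hypothetical helper, similar in spirit to the library's `unwrap_model`; `nn.DataParallel` exposes the wrapped model as `.module`):

```python
def unwrap_model(model):
    """Recursively unwrap containers (e.g. nn.DataParallel) that expose .module."""
    while hasattr(model, "module"):
        model = model.module
    return model

# generation_config would then be read from the unwrapped model, e.g.:
# unwrap_model(self.model).generation_config
```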
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22744/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22743
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22743/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22743/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22743/events
|
https://github.com/huggingface/transformers/pull/22743
| 1,666,304,355
|
PR_kwDOCUB6oc5OOccJ
| 22,743
|
Fix `serving_output` for TF composite models (encoder-decoder like models)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
**[If the concept is approved, I will apply the same changes to the other models in the TF encoder-decoder family]**
The composite models use its components' configurations. See for example
https://github.com/huggingface/transformers/blob/95e7057507c9eca8e997abb98645eee2621a5aea/src/transformers/modeling_tf_utils.py#L426-L430
However, in some places, our codebase still tries to access some attributes at the top level of the configuration (i.e. not inside the 2 components), like
https://github.com/huggingface/transformers/blob/95e7057507c9eca8e997abb98645eee2621a5aea/src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py#L664-L669
In particular, `self.config` may not have `use_cache`, for example, for this checkpoint `"nlpconnect/vit-gpt2-image-captioning"`. We should instead look at `self.config.decoder.use_cache`.
**This PR tries to follow the rule of ` # Encoder Decoder models delegate the application of the configuration options to their inner models. `.**
**This PR is also (another) one necessary step to fix #22731.**
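As an illustration of the delegation rule (a hypothetical lookup helper, not the actual implementation in this PR):

```python
def get_decoder_attr(config, name, default=None):
    # Composite (encoder-decoder) configs delegate options to their components;
    # prefer the decoder sub-config before falling back to the top level.
    decoder = getattr(config, "decoder", None)
    if decoder is not None and hasattr(decoder, name):
        return getattr(decoder, name)
    return getattr(config, name, default)
```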
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22743/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22743",
"html_url": "https://github.com/huggingface/transformers/pull/22743",
"diff_url": "https://github.com/huggingface/transformers/pull/22743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22743.patch",
"merged_at": 1681422323000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22742
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22742/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22742/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22742/events
|
https://github.com/huggingface/transformers/issues/22742
| 1,666,059,885
|
I_kwDOCUB6oc5jTg5t
| 22,742
|
Bug in LlamaTokenizer when `return_token_type_ids=True`
|
{
"login": "rockmagma02",
"id": 77961318,
"node_id": "MDQ6VXNlcjc3OTYxMzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/77961318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rockmagma02",
"html_url": "https://github.com/rockmagma02",
"followers_url": "https://api.github.com/users/rockmagma02/followers",
"following_url": "https://api.github.com/users/rockmagma02/following{/other_user}",
"gists_url": "https://api.github.com/users/rockmagma02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rockmagma02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rockmagma02/subscriptions",
"organizations_url": "https://api.github.com/users/rockmagma02/orgs",
"repos_url": "https://api.github.com/users/rockmagma02/repos",
"events_url": "https://api.github.com/users/rockmagma02/events{/privacy}",
"received_events_url": "https://api.github.com/users/rockmagma02/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey yep that's indeed a bug thanks for reporting"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
tokenizers: @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I use LLama's tokenizer and pass `return_token_type_ids=True`, I found that the length of the return value `token_type_ids` is different from `input_ids` and `attention_mask`.
```python
In [2]: from transformers import AutoTokenizer
In [3]: tok = AutoTokenizer.from_pretrained('/mnt/checkpoint/llama_7B_hf', use_fast=False)
In [4]: inputs = 'I LOVE'
In [5]: outputs = 'huggingface'
In [6]: tok(inputs, outputs, return_token_type_ids=True)
Out[6]: {'input_ids': [1, 306, 11247, 12064, 1, 298, 688, 3460, 2161], 'token_type_ids': [0, 0, 0, 0, 0, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
In [7]: list(map(len, _.values()))
Out[7]: [9, 10, 9]
```
### Expected behavior
```python
In [6]: tok(inputs, outputs, return_token_type_ids=True)
Out[6]: {'input_ids': [1, 306, 11247, 12064, 1, 298, 688, 3460, 2161], 'token_type_ids': [0, 0, 0, 0, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
In [7]: list(map(len, _.values()))
Out[7]: [9, 9, 9]
```
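Until this is fixed, a small guard (a hypothetical helper, not part of the tokenizer) makes the mismatch fail fast instead of surfacing later as a shape error:

```python
def check_encoding_lengths(encoding):
    # All per-token sequences returned by the tokenizer should be equally long.
    lengths = {key: len(seq) for key, seq in encoding.items()}
    if len(set(lengths.values())) > 1:
        raise ValueError(f"mismatched sequence lengths: {lengths}")
    return lengths
```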
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22742/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22741
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22741/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22741/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22741/events
|
https://github.com/huggingface/transformers/pull/22741
| 1,666,030,732
|
PR_kwDOCUB6oc5ONhg6
| 22,741
|
Remove `DS_BUILD_AIO=1`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"funny, it means that it built it at runtime and it worked. So it finds the right library at run time.\r\n\r\nSo actually if we wanted to skip the test we would need to removed apt install libaio-dev.\r\n\r\nBut this is a better outcome, so let's keep it the way you proposed."
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
In order to make
```
PASSED tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_stage3_nvme_offload
```
Note that the test now passes rather than being skipped. I am not sure if this is normal, however; you can see the results in [this job run](https://github.com/huggingface/transformers/actions/runs/4686799372)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22741/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22741",
"html_url": "https://github.com/huggingface/transformers/pull/22741",
"diff_url": "https://github.com/huggingface/transformers/pull/22741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22741.patch",
"merged_at": 1681402103000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22740
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22740/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22740/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22740/events
|
https://github.com/huggingface/transformers/pull/22740
| 1,666,005,060
|
PR_kwDOCUB6oc5ONcAi
| 22,740
|
Change `torch_dtype` to str when `saved_model=True` in `save_pretrained` for TF models
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
One issue in #22731 is: the config contains `torch_dtype` with a torch dtype class as its value. Usually, our `save_pretrained` takes care of it. But when `saved_model=True` in `save_pretrained` for TF models, it is not handled, and TF/Keras complains about it.
See the first part in [this comment](https://github.com/huggingface/transformers/issues/22731#issuecomment-1506538658)
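The conversion itself is simple; a sketch of the idea (a hypothetical helper, shown on the string form a torch dtype reprs to, e.g. `torch.float32`):

```python
def dtype_to_str(dtype):
    # torch dtypes repr as e.g. "torch.float32"; the config should store the
    # plain name "float32", as save_pretrained normally writes out.
    text = str(dtype)
    return text.split(".", 1)[1] if text.startswith("torch.") else text
```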
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22740/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22740",
"html_url": "https://github.com/huggingface/transformers/pull/22740",
"diff_url": "https://github.com/huggingface/transformers/pull/22740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22740.patch",
"merged_at": 1681393936000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22739
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22739/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22739/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22739/events
|
https://github.com/huggingface/transformers/issues/22739
| 1,665,977,933
|
I_kwDOCUB6oc5jTM5N
| 22,739
|
AssertionError when converting openai clip's weight to hf
|
{
"login": "wingvortex",
"id": 45763667,
"node_id": "MDQ6VXNlcjQ1NzYzNjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/45763667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wingvortex",
"html_url": "https://github.com/wingvortex",
"followers_url": "https://api.github.com/users/wingvortex/followers",
"following_url": "https://api.github.com/users/wingvortex/following{/other_user}",
"gists_url": "https://api.github.com/users/wingvortex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wingvortex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wingvortex/subscriptions",
"organizations_url": "https://api.github.com/users/wingvortex/orgs",
"repos_url": "https://api.github.com/users/wingvortex/repos",
"events_url": "https://api.github.com/users/wingvortex/events{/privacy}",
"received_events_url": "https://api.github.com/users/wingvortex/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @wingvortex, thanks for raising this issue. \r\n\r\n~~The trackback in the issue description doesn't contain the error message - could you share that please?~~ Scratch that: I see it's in the `torch.allclose` assert\r\n\r\nFor the checkpoints being converted, could you confirm which of the ViT-B-32 checkpoints are being used e.g. `('ViT-B-32', 'laion400m_e31'),`",
"@amyeroberts It's a checkpoint downloaded by `clip.load('ViT-B/32')`",
"@wingvortex Thanks again for reporting and the additional info. I managed to track it down to an indexing error in the conversion script, which should be resolved when #22776 is merged. "
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.18.0-305.25.1.el8_4.x86_64-x86_64-with-redhat-8.5-Ootpa
- Python version: 3.7.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (gpu)
- Jax version: 0.3.15
- JaxLib version: 0.3.15
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I'm trying to convert huggingface's CLIP weights back to openai's because I need to adapt a finetuned model, but it seems there is no such script available. Luckily I found one that converts CLIP from openai to huggingface here: [convert_clip_original_pytorch_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py)
So I started with this script. But when I run it:
`python convert_clip_original_pytorch_to_hf.py --checkpoint_path 'path/to/ViT-B-32.pt' --pytorch_dump_folder_path './'`
I got the following error:
Traceback (most recent call last):
  /home/test/convert_clip_original_pytorch_to_hf.py:148 in <module>
    convert_clip_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.co…
  /home/anaconda3/envs/hf/lib/python3.7/site-packages/torch/autograd/grad_mode.py:27 in decorate_context
    return func(*args, **kwargs)
  /home/test/convert_clip_original_pytorch_to_hf.py:136 in convert_clip_checkpoint
    assert torch.allclose(hf_logits_per_text, pt_logits_per_text, atol=1e-3)
AssertionError
### Expected behavior
An HF CLIP weight file should be generated from the original OpenAI one by running this script.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22739/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22738
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22738/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22738/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22738/events
|
https://github.com/huggingface/transformers/issues/22738
| 1,665,970,535
|
I_kwDOCUB6oc5jTLFn
| 22,738
|
[LLAMA]: LLAMA tokenizer
|
{
"login": "YuanLiuuuuuu",
"id": 30762564,
"node_id": "MDQ6VXNlcjMwNzYyNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/30762564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuanLiuuuuuu",
"html_url": "https://github.com/YuanLiuuuuuu",
"followers_url": "https://api.github.com/users/YuanLiuuuuuu/followers",
"following_url": "https://api.github.com/users/YuanLiuuuuuu/following{/other_user}",
"gists_url": "https://api.github.com/users/YuanLiuuuuuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuanLiuuuuuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuanLiuuuuuu/subscriptions",
"organizations_url": "https://api.github.com/users/YuanLiuuuuuu/orgs",
"repos_url": "https://api.github.com/users/YuanLiuuuuuu/repos",
"events_url": "https://api.github.com/users/YuanLiuuuuuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuanLiuuuuuu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"Hey! This is because: \r\n1. You are not using the correct version of transformers:\r\n2. You are not using the correct checkpoints\r\nHere is the snipper I used to get the expected output:\r\n```python \r\n>>> from transformers import LlamaTokenizer\r\n>>> tokenizer = LlamaTokenizer.from_pretrained(\"huggyllama/llama-7b\", add_eos_token= True)\r\n>>> tokenizer.decode(tokenizer.encode(\"Hello\", add_special_tokens = True))\r\n'<s> Hello</s>'\r\n```\r\n\r\n",
"> Hey! This is because:\r\n> \r\n> 1. You are not using the correct version of transformers:\r\n> 2. You are not using the correct checkpoints\r\n> Here is the snipper I used to get the expected output:\r\n> \r\n> ```python\r\n> >>> from transformers import LlamaTokenizer\r\n> >>> tokenizer = LlamaTokenizer.from_pretrained(\"huggyllama/llama-7b\", add_eos_token= True)\r\n> >>> tokenizer.decode(tokenizer.encode(\"Hello\", add_special_tokens = True))\r\n> '<s> Hello</s>'\r\n> ```\r\n\r\nWhich version of transformers do you use?",
"`main` but latest release also has these changes",
"Faced the same issue. It seems like a mismatch between transformers and llama chkt version.\r\n\r\nIt appears that in commit c0f99b4d2ec73090595914dde4c16da207e21d73, a major change has been made to llama tokenizer, so you either install an earlier version (commit 9eae4aa57650c1dbe1becd4e0979f6ad1e572ac0 or before), or convert llama weight using the latest commit. ",
"Or you can just use the tokenizer files from `huggyllama` and save them wherever you want ",
"> Faced the same issue. It seems like a mismatch between transformers and llama chkt version.\r\n> \r\n> It appears that in commit [c0f99b4](https://github.com/huggingface/transformers/commit/c0f99b4d2ec73090595914dde4c16da207e21d73), a major change has been made to llama tokenizer, so you either install an earlier version (commit [9eae4aa](https://github.com/huggingface/transformers/commit/9eae4aa57650c1dbe1becd4e0979f6ad1e572ac0) or before), or convert llama weight using the latest commit.\r\n\r\nMany thanks! Already fix it.",
"> Faced the same issue. It seems like a mismatch between transformers and llama chkt version.\r\n> \r\n> It appears that in commit [c0f99b4](https://github.com/huggingface/transformers/commit/c0f99b4d2ec73090595914dde4c16da207e21d73), a major change has been made to llama tokenizer, so you either install an earlier version (commit [9eae4aa](https://github.com/huggingface/transformers/commit/9eae4aa57650c1dbe1becd4e0979f6ad1e572ac0) or before), or convert llama weight using the latest commit.\r\n\r\nYou have just saved my life! Debugged for days for this weird problem.",
"> Or you can just use the tokenizer files from `huggyllama` and save them wherever you want\r\n\r\nHi, how can I convert a llama model trained with older transformers (commit [9eae4aa](https://github.com/huggingface/transformers/commit/9eae4aa57650c1dbe1becd4e0979f6ad1e572ac0) or before) to be compatible with the latest transformer code?"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.13.2
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained(..., add_eos_token=True)
input_text = "Hello, Huggingface"
tokens = tokenizer(input_text)
# tokens.input_ids == [0, ..., 0]
```
### Expected behavior
At commit ``7ade6ef7d``,
the ``eos_token_id`` and ``bos_token_id`` are 0 and 0, while those from the official LLaMA repo released by Meta are 2 and 1.
Why is there such a distinction?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22738/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22737
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22737/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22737/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22737/events
|
https://github.com/huggingface/transformers/pull/22737
| 1,665,657,137
|
PR_kwDOCUB6oc5OMSlh
| 22,737
|
Indexing fix for gpt_bigcode
|
{
"login": "jlamypoirier",
"id": 18523627,
"node_id": "MDQ6VXNlcjE4NTIzNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/18523627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlamypoirier",
"html_url": "https://github.com/jlamypoirier",
"followers_url": "https://api.github.com/users/jlamypoirier/followers",
"following_url": "https://api.github.com/users/jlamypoirier/following{/other_user}",
"gists_url": "https://api.github.com/users/jlamypoirier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlamypoirier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlamypoirier/subscriptions",
"organizations_url": "https://api.github.com/users/jlamypoirier/orgs",
"repos_url": "https://api.github.com/users/jlamypoirier/repos",
"events_url": "https://api.github.com/users/jlamypoirier/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlamypoirier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
In gpt_bigcode, `past_key_values` is a list of tensors rather than a list of lists as in most models, so we don't want to index twice. The code works either way, but it's safer to avoid the unnecessary indexing. I also fixed the associated type hint.
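To illustrate the structural difference (stand-in lists in place of tensors; a hypothetical helper, not the PR's actual diff):

```python
def reorder_layer_past(layer_past, beam_idx):
    # Most models keep a (key, value) tuple of tensors per layer; gpt_bigcode
    # keeps a single fused tensor, so we must not index a second time.
    if isinstance(layer_past, tuple):
        return tuple(tensor[beam_idx] for tensor in layer_past)
    return layer_past[beam_idx]
```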
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22737/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22737",
"html_url": "https://github.com/huggingface/transformers/pull/22737",
"diff_url": "https://github.com/huggingface/transformers/pull/22737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22737.patch",
"merged_at": 1681380037000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22736
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22736/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22736/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22736/events
|
https://github.com/huggingface/transformers/pull/22736
| 1,665,218,884
|
PR_kwDOCUB6oc5OKzW2
| 22,736
|
add onnx support for llama
|
{
"login": "sam-h-bean",
"id": 43734688,
"node_id": "MDQ6VXNlcjQzNzM0Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/43734688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-h-bean",
"html_url": "https://github.com/sam-h-bean",
"followers_url": "https://api.github.com/users/sam-h-bean/followers",
"following_url": "https://api.github.com/users/sam-h-bean/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-h-bean/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-h-bean/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-h-bean/subscriptions",
"organizations_url": "https://api.github.com/users/sam-h-bean/orgs",
"repos_url": "https://api.github.com/users/sam-h-bean/repos",
"events_url": "https://api.github.com/users/sam-h-bean/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-h-bean/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @michaelbenayoun "
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Add ONNX serialization support for Llama models.
Fixes # (issue)
https://github.com/huggingface/optimum/issues/918
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@JingyaHuang @chainyo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22736/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22736",
"html_url": "https://github.com/huggingface/transformers/pull/22736",
"diff_url": "https://github.com/huggingface/transformers/pull/22736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22736.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22735
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22735/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22735/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22735/events
|
https://github.com/huggingface/transformers/pull/22735
| 1,665,218,619
|
PR_kwDOCUB6oc5OKzTL
| 22,735
|
[Doctest] Add configuration_mvp.py
|
{
"login": "elabongaatuo",
"id": 32382363,
"node_id": "MDQ6VXNlcjMyMzgyMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/32382363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elabongaatuo",
"html_url": "https://github.com/elabongaatuo",
"followers_url": "https://api.github.com/users/elabongaatuo/followers",
"following_url": "https://api.github.com/users/elabongaatuo/following{/other_user}",
"gists_url": "https://api.github.com/users/elabongaatuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elabongaatuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elabongaatuo/subscriptions",
"organizations_url": "https://api.github.com/users/elabongaatuo/orgs",
"repos_url": "https://api.github.com/users/elabongaatuo/repos",
"events_url": "https://api.github.com/users/elabongaatuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/elabongaatuo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
Adds configuration_mvp.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? This passes the test as well, with two warnings. Thank you :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22735/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22735",
"html_url": "https://github.com/huggingface/transformers/pull/22735",
"diff_url": "https://github.com/huggingface/transformers/pull/22735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22735.patch",
"merged_at": 1681366759000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22734
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22734/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22734/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22734/events
|
https://github.com/huggingface/transformers/issues/22734
| 1,665,217,994
|
I_kwDOCUB6oc5jQTXK
| 22,734
|
FSDP hangs before training starts
|
{
"login": "agneet42",
"id": 22055826,
"node_id": "MDQ6VXNlcjIyMDU1ODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/22055826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agneet42",
"html_url": "https://github.com/agneet42",
"followers_url": "https://api.github.com/users/agneet42/followers",
"following_url": "https://api.github.com/users/agneet42/following{/other_user}",
"gists_url": "https://api.github.com/users/agneet42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agneet42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agneet42/subscriptions",
"organizations_url": "https://api.github.com/users/agneet42/orgs",
"repos_url": "https://api.github.com/users/agneet42/repos",
"events_url": "https://api.github.com/users/agneet42/events{/privacy}",
"received_events_url": "https://api.github.com/users/agneet42/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Ok the issue is not related to FSDP per say. Looks like something related to communication between the GPU's. I fixed it by \r\n modifying an environment variable as follows : `export NCCL_P2P_DISABLE=1` \r\nClosing!"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: Linux-5.4.0-128-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pacman100 @stas @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour:
1. I created the FSDP Config file using accelerate config as follows :
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: GPTJBlock
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 6
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
2. My bash script looks like this :
```
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5
accelerate launch train_llm.py \
--output_dir /path/to/dir \
--model_name_or_dir "EleutherAI/gpt-j-6B" \
--do_train --per_device_train_batch_size 8 \
--do_eval --per_device_eval_batch_size 8 \
--num_train_epochs 3 \
--evaluation_strategy "steps" \
--eval_steps 1000 \
--save_strategy "steps" \
--save_steps 1000 \
--learning_rate 5e-5 \
--logging_steps 1 \
--bf16 \
--run_name run_fsdp \
--gradient_checkpointing true \
--warmup_ratio 0.03 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap "GPTJBlock"
```
3. My `train_llm.py` file looks like this -
```
if __name__ == "__main__":
parser = HfArgumentParser(TrainingArguments)
parser.add_argument("--model_name_or_dir")
training_args, args = parser.parse_args_into_dataclasses()
transformers.logging.set_verbosity_debug()
model = AutoModelForCausalLM.from_pretrained(args.model_name_or_dir, use_cache=True, ignore_mismatched_sizes=True)
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_dir, use_cache=True)
tokenizer.pad_token_id = tokenizer.eos_token_id
train_path = 'path/to/train'
train_data = glob(train_path)
val_path = 'path/to/val'
val_data = glob(val_path)
dataset = load_dataset("json", data_files = {"train": train_data, "validation" : val_data})
dataset = dataset.map(transform, batched=True, remove_columns = ["id" ,"tokens"])
train_dataset = dataset["train"]
val_dataset = dataset["validation"]
trainer = Trainer(
model,
training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tokenizer,
data_collator=DataCollatorForTokenClassification(tokenizer, padding='longest'),
compute_metrics=None,
callbacks = [TensorBoardCallback()]
)
if trainer.is_world_process_zero():
print(dataset)
trainer.pop_callback(MLflowCallback)
if training_args.do_train:
if trainer.is_world_process_zero():
print("Training...")
start = time.time()
trainer.train()
mlflow.log_metric(
"time/epoch", (time.time() - start) / 60 / training_args.num_train_epochs
)
```
4. After running my bash script, I see some GPU memory being used (10G/80G) on all 6 GPUs, but it hangs after logging this --
```
***** Running training *****
Num examples = 364978
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 48
Gradient Accumulation steps = 1
Total optimization steps = 22812
Number of trainable parameters = 1030852826
0%| | 0/22812
```
### Expected behavior
The expected behaviour is for training to start and the processes not to hang.
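For reference, the resolution posted later in this issue's comments disables NCCL peer-to-peer transport before launching. A minimal sketch of that workaround (the variables must be set before any `torch.distributed`/NCCL initialization; `NCCL_DEBUG` is an optional addition for diagnostics, not part of the original resolution):

```python
import os

# Workaround from the issue resolution: disable NCCL peer-to-peer transport,
# which can cause hangs when GPUs on a node cannot communicate directly.
# These must be set before torch / NCCL are initialized (or exported in the
# shell before `accelerate launch`).
os.environ["NCCL_P2P_DISABLE"] = "1"
os.environ["NCCL_DEBUG"] = "INFO"  # optional: log which NCCL transport is chosen
```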
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22734/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22733
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22733/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22733/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22733/events
|
https://github.com/huggingface/transformers/pull/22733
| 1,665,021,969
|
PR_kwDOCUB6oc5OKIws
| 22,733
|
[Doctest] Add configuration_m2m_100.py
|
{
"login": "elabongaatuo",
"id": 32382363,
"node_id": "MDQ6VXNlcjMyMzgyMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/32382363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elabongaatuo",
"html_url": "https://github.com/elabongaatuo",
"followers_url": "https://api.github.com/users/elabongaatuo/followers",
"following_url": "https://api.github.com/users/elabongaatuo/following{/other_user}",
"gists_url": "https://api.github.com/users/elabongaatuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elabongaatuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elabongaatuo/subscriptions",
"organizations_url": "https://api.github.com/users/elabongaatuo/orgs",
"repos_url": "https://api.github.com/users/elabongaatuo/repos",
"events_url": "https://api.github.com/users/elabongaatuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/elabongaatuo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for your assistance @ydshieh 😃"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
Add configuration_m2m_100.py to utils/documentation_tests.txt for doctest.
Based on issue https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
It passes the test with a few warnings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22733/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22733",
"html_url": "https://github.com/huggingface/transformers/pull/22733",
"diff_url": "https://github.com/huggingface/transformers/pull/22733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22733.patch",
"merged_at": 1681366627000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22732
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22732/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22732/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22732/events
|
https://github.com/huggingface/transformers/issues/22732
| 1,664,819,670
|
I_kwDOCUB6oc5jOyHW
| 22,732
|
BLIP special additional token self.tokenizer.enc_token_id
|
{
"login": "DianeBouchacourt",
"id": 13796686,
"node_id": "MDQ6VXNlcjEzNzk2Njg2",
"avatar_url": "https://avatars.githubusercontent.com/u/13796686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DianeBouchacourt",
"html_url": "https://github.com/DianeBouchacourt",
"followers_url": "https://api.github.com/users/DianeBouchacourt/followers",
"following_url": "https://api.github.com/users/DianeBouchacourt/following{/other_user}",
"gists_url": "https://api.github.com/users/DianeBouchacourt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DianeBouchacourt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DianeBouchacourt/subscriptions",
"organizations_url": "https://api.github.com/users/DianeBouchacourt/orgs",
"repos_url": "https://api.github.com/users/DianeBouchacourt/repos",
"events_url": "https://api.github.com/users/DianeBouchacourt/events{/privacy}",
"received_events_url": "https://api.github.com/users/DianeBouchacourt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"Hey! Thanks for submitting this issue. I think @younesbelkada is the most familiar with Blip and should be able to answer whether or not the `enc_token_id` should be added. BLIP seems to be using a `BertTokenizer`, and looking at [this](https://huggingface.co/Salesforce/blip-image-captioning-large/blob/main/tokenizer_config.json) I don't think that we are adding it. ",
"Any news here? ",
"Hi @DianeBouchacourt \r\nDo you notice any notable qualitative difference when adding that token? When porting the model I got predictions matched with the current approach ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,691
| 1,691
|
NONE
| null |
### System Info
Hello,
It seems to me that the Salesforce implementation of BLIP uses a special token 'ENC' at the beginning of each text sentence; it is added
https://github.com/salesforce/BLIP/blob/a176f1e9cc5a232d2cc6e21b77d2c7e18ceb3c37/models/blip.py#L190
and used when encoding text conditioned on images e.g. https://github.com/salesforce/BLIP/blob/a176f1e9cc5a232d2cc6e21b77d2c7e18ceb3c37/models/blip.py#L67
or here
https://github.com/salesforce/BLIP/blob/b7bb1eeb6e901044a9eb1016f408ee908b216bc7/models/blip_retrieval.py#L124
Shouldn't we do the same? What is special about that token?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
When encoding text conditioned on images, shouldn't we likewise set
`encoder_input_ids[:,0] = self.tokenizer.enc_token_id`?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22732/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22732/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22731
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22731/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22731/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22731/events
|
https://github.com/huggingface/transformers/issues/22731
| 1,664,800,774
|
I_kwDOCUB6oc5jOtgG
| 22,731
|
Saving TFVisionEncoderDecoderModel as SavedModel: `The following keyword arguments are not supported by this model: ['attention_mask', 'token_type_ids'].`
|
{
"login": "DevinTDHa",
"id": 33089471,
"node_id": "MDQ6VXNlcjMzMDg5NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/33089471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevinTDHa",
"html_url": "https://github.com/DevinTDHa",
"followers_url": "https://api.github.com/users/DevinTDHa/followers",
"following_url": "https://api.github.com/users/DevinTDHa/following{/other_user}",
"gists_url": "https://api.github.com/users/DevinTDHa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevinTDHa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevinTDHa/subscriptions",
"organizations_url": "https://api.github.com/users/DevinTDHa/orgs",
"repos_url": "https://api.github.com/users/DevinTDHa/repos",
"events_url": "https://api.github.com/users/DevinTDHa/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevinTDHa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ydshieh ",
"Hi @DevinTDHa Just a quick update: instead of `input_ids` in the signature, we have to use `decoder_input_ids`, as the text inputs are for the decoder.\r\n\r\n```python\r\n \"pixel_values\": tf.TensorSpec((None, None, None, None), tf.float32, name=\"pixel_values\"),\r\n \"decoder_input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_input_ids\"),\r\n```\r\nThis change will fix the issue you mentioned, but the saving is still not working due to other problems - I am still looking how to fix them. ",
"Two extra steps to make the saving working are: \r\n\r\n- First, after `model = TFVisionEncoderDecoderModel.from_pretrained(MODEL_NAME, from_pt=True)` in your code, add\r\n ```python\r\n model.config.torch_dtype = None\r\n ```\r\n- Then, in the file `src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py`, for the class `TFVisionEncoderDecoderModel`, change the method from\r\n ```python\r\n def serving_output(self, output):\r\n pkv = tf.tuple(output.past_key_values)[1] if self.config.use_cache else None\r\n ...\r\n ```\r\n to\r\n ```python\r\n def serving_output(self, output):\r\n pkv = tf.tuple(output.past_key_values)[1] if self.config.decoder.use_cache else None\r\n ...\r\n ```\r\nYou can do these changes in your own fork if you want to proceed quickly.\r\n\r\nI will discuss the team about the fix in our codebase.",
"Thanks a lot, especially for the suggested edits!",
"@DevinTDHa \r\n\r\n\r\nIn fact, what I did that works is I added the following block for the class `TFVisionEncoderDecoderModel` in the file `src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py`\r\n```python\r\n@tf.function(\r\n input_signature=[\r\n {\r\n \"pixel_values\": tf.TensorSpec((None, None, None, None), tf.float32, name=\"pixel_values\"),\r\n \"decoder_input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_input_ids\"),\r\n }\r\n ]\r\n)\r\ndef serving(self, inputs):\r\n \"\"\"\r\n Method used for serving the model.\r\n\r\n Args:\r\n inputs (`Dict[str, tf.Tensor]`):\r\n The input of the saved model as a dictionary of tensors.\r\n \"\"\"\r\n output = self.call(inputs)\r\n\r\n return self.serving_output(output)\r\n```\r\n\r\nI am not sure why using the approach in your notebook doesn't work (i.e. by specifying `serving_fn` explicitly)",
"The fixes have been merged to the `main` branch. The only thing to do manually is to add the correct `input_signature` to the proper place as shown in the above comment. However, this could not be done in `transformers` codebase I believe, but you can still do it in your own fork.\r\n\r\nI will discuss with our TF experts regarding why specifying `signatures` as you did is not working. But I am going to close this issue. If you still have any related question on this issue, don't hesitate to leave comments 🤗 ",
"Hi @Rocketknight1 Since you are a TF saving expert 🔥 , could you take a look on the code snippet below, and see why it doesn't work when we specify `signatures` manually, please? (it works if I add `serving` method to `TFVisionEncoderDecoderModel` directly.\r\n\r\n(You have to pull `main` branch to incorporate 2 fixes first)\r\n\r\nThank you in advanceeeeeeee ~\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFVisionEncoderDecoderModel\r\n\r\n# load a fine-tuned image captioning model and corresponding tokenizer and image processor\r\nMODEL_NAME = \"nlpconnect/vit-gpt2-image-captioning\"\r\nmodel = TFVisionEncoderDecoderModel.from_pretrained(MODEL_NAME, from_pt=True)\r\nEXPORT_PATH = f\"exports/{MODEL_NAME}\"\r\n\r\n# ========================================================================================================================\r\n# This works\r\n\r\n# Add this block to `TFVisionEncoderDecoderModel` in `src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py`\r\n\"\"\"\r\n @tf.function(\r\n input_signature=[\r\n {\r\n \"pixel_values\": tf.TensorSpec((None, None, None, None), tf.float32, name=\"pixel_values\"),\r\n \"decoder_input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_input_ids\"),\r\n }\r\n ]\r\n )\r\n\r\n def serving(self, inputs):\r\n output = self.call(inputs)\r\n return self.serving_output(output)\r\n\"\"\"\r\n#model.save_pretrained(\r\n# EXPORT_PATH,\r\n# saved_model=True,\r\n# # signatures={\"serving_default\": my_serving_fn},\r\n#)\r\n# ========================================================================================================================\r\n# Not working (without changing `TFVisionEncoderDecoderModel`)\r\n\r\n@tf.function(\r\n input_signature=[\r\n {\r\n \"pixel_values\": tf.TensorSpec((None, None, None, None), tf.float32, name=\"pixel_values\"),\r\n \"decoder_input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_input_ids\"),\r\n }\r\n 
]\r\n)\r\ndef my_serving_fn(inputs):\r\n output = model.call(inputs)\r\n return model.serving_output(output)\r\n\r\n# This fails\r\nmodel.save_pretrained(\r\n EXPORT_PATH,\r\n saved_model=True,\r\n signatures={\"serving_default\": my_serving_fn},\r\n)\r\n# ========================================================================================================================\r\n\r\n```",
"@ydshieh I have a question regarding this actually:\r\n\r\nCurrently I'm trying to access the decoder (GPT-2) from the saved model but it seems to my knowledge that it is not possible. The default serving signature you suggested outputs the encoder (ViT) outputs only (or am I wrong in this regard?)\r\n\r\nHowever, trying to create a serving for the `model.generate()` function, seems to cause the same error. The error is the same as with saving the model with a custom signature. Would this be possible in theory (combining encoder and decoder in one serving function)?",
"> @ydshieh I have a question regarding this actually:\r\n> \r\n> Currently I'm trying to access the decoder (GPT-2) from the saved model but it seems to my knowledge that it is not possible. The default serving signature you suggested outputs the encoder (ViT) outputs only (or am I wrong in this regard?)\r\n>\r\n\r\nI believe it gives the outputs of both the encoder and decoder. But if you find it is not the case, please open a new issue and we are more than happy to look into it 🤗 .\r\n\r\n> However, trying to create a serving for the `model.generate()` function, seems to cause the same error. The error is the same as with saving the model with a custom signature.\r\nI have never created a saved model format with `generate` and not sure if it will work in most case(s) - @gante Do you have any knowledge if this is supposed to work (in most cases). cc @Rocketknight1 too. \r\n\r\n> Would this be possible in theory (combining encoder and decoder in one serving function)?\r\nSee my comment in the first paragraph 😃 \r\n\r\n\r\n"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: Linux-6.2.6-76060206-generic-x86_64-with-debian-bookworm-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.4
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
@gante Could be related to #16400?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello,
I am trying to save a TFVisionEncoderDecoderModel in a SavedModel format. Specifically, I am using the `nlpconnect/vit-gpt2-image-captioning` pretrained model. It seems like the model is able to be initialised from the PyTorch checkpoint. However, when trying to save it as a SavedModel, it fails with the following error:
```
ValueError: The following keyword arguments are not supported by this model: ['attention_mask', 'token_type_ids'].
```
Link to Google Colab Reproduction:
https://colab.research.google.com/drive/1N2TVejxiBT5S7bRJ2LSmJ8IIR45folGA#scrollTo=aIL92KqPDDjf
Thanks for your time!
### Expected behavior
The model should be saved as a SavedModel without problems, similarly to other pretrained models.
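The working signature that later surfaced in the comments on this issue uses `decoder_input_ids` rather than `input_ids` (the text inputs feed the decoder, not the vision encoder). A minimal sketch of that input signature:

```python
import tensorflow as tf

# Serving signature from the fix discussed in the comments: a vision
# encoder-decoder model takes pixel values for the encoder and
# `decoder_input_ids` (not `input_ids`) for the text decoder.
serving_signature = [
    {
        "pixel_values": tf.TensorSpec((None, None, None, None), tf.float32, name="pixel_values"),
        "decoder_input_ids": tf.TensorSpec((None, None), tf.int32, name="decoder_input_ids"),
    }
]
```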
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22731/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22730
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22730/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22730/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22730/events
|
https://github.com/huggingface/transformers/issues/22730
| 1,664,739,725
|
I_kwDOCUB6oc5jOemN
| 22,730
|
About Whisper finetuning on an out-of-vocabulary language dataset
|
{
"login": "LYPinASR",
"id": 112866899,
"node_id": "U_kgDOBro2Uw",
"avatar_url": "https://avatars.githubusercontent.com/u/112866899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LYPinASR",
"html_url": "https://github.com/LYPinASR",
"followers_url": "https://api.github.com/users/LYPinASR/followers",
"following_url": "https://api.github.com/users/LYPinASR/following{/other_user}",
"gists_url": "https://api.github.com/users/LYPinASR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LYPinASR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LYPinASR/subscriptions",
"organizations_url": "https://api.github.com/users/LYPinASR/orgs",
"repos_url": "https://api.github.com/users/LYPinASR/repos",
"events_url": "https://api.github.com/users/LYPinASR/events{/privacy}",
"received_events_url": "https://api.github.com/users/LYPinASR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @LYPinASR, thanks for raising an issue! \r\n\r\nQuestions like this are best placed [in the forum](https://discuss.huggingface.co/), as we try to reserve github issues for bug reporting and feature requests. Here's a relevant thread about a previous finetuning event which might help: https://discuss.huggingface.co/t/open-to-the-community-whisper-fine-tuning-event/26681",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### Feature request
Hello!
I am working on Whisper, and I want to finetune the model on _Amharic_ from the OpenASR20 dataset. As far as I know, _Amharic_ is not one of the languages included in Whisper's training data. If I want to finetune the model on _Amharic_, what should I do?
Looking forward to your reply! Thank you!
### Motivation
Whisper finetuning on an out-of-vocabulary language dataset
### Your contribution
I can do as needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22730/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22728
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22728/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22728/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22728/events
|
https://github.com/huggingface/transformers/pull/22728
| 1,664,440,043
|
PR_kwDOCUB6oc5OILMU
| 22,728
|
`torch.distributed` group initialization for `torch_neuron` disabled when `optimum-neuron` is installed
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
MEMBER
| null |
As per title.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22728/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22728",
"html_url": "https://github.com/huggingface/transformers/pull/22728",
"diff_url": "https://github.com/huggingface/transformers/pull/22728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22728.patch",
"merged_at": 1681314170000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22727
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22727/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22727/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22727/events
|
https://github.com/huggingface/transformers/pull/22727
| 1,664,417,829
|
PR_kwDOCUB6oc5OIGWT
| 22,727
|
Update warning levels
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates the warning levels at 2 places:
- use `logger.info` instead of `logger.warning` when an image processor gets loaded based on a preprocessor_config.json file.
- use `logger.warning_once` instead of `warnings.warn(..., FutureWarning)` for DETR and friends.
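As a rough sketch of the deduplication behavior a `warning_once`-style helper provides (assuming an `lru_cache`-based implementation; transformers' actual helper may differ), using only the stdlib:

```python
import functools
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)

@functools.lru_cache(maxsize=None)
def warning_once(message: str) -> None:
    # lru_cache deduplicates by message, so each unique warning is emitted
    # only once per process, regardless of how often the call site runs.
    logger.warning(message)

warning_once("`max_size` is deprecated")
warning_once("`max_size` is deprecated")  # suppressed: cached call, no second log
```

Unlike `warnings.warn(..., FutureWarning)`, this does not depend on the interpreter's warnings filter state.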
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22727/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22727",
"html_url": "https://github.com/huggingface/transformers/pull/22727",
"diff_url": "https://github.com/huggingface/transformers/pull/22727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22727.patch",
"merged_at": 1681316725000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22726
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22726/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22726/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22726/events
|
https://github.com/huggingface/transformers/pull/22726
| 1,664,403,571
|
PR_kwDOCUB6oc5OIDNb
| 22,726
|
Modify pipeline_tutorial.mdx
|
{
"login": "ARKA1112",
"id": 24940818,
"node_id": "MDQ6VXNlcjI0OTQwODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/24940818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ARKA1112",
"html_url": "https://github.com/ARKA1112",
"followers_url": "https://api.github.com/users/ARKA1112/followers",
"following_url": "https://api.github.com/users/ARKA1112/following{/other_user}",
"gists_url": "https://api.github.com/users/ARKA1112/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ARKA1112/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ARKA1112/subscriptions",
"organizations_url": "https://api.github.com/users/ARKA1112/orgs",
"repos_url": "https://api.github.com/users/ARKA1112/repos",
"events_url": "https://api.github.com/users/ARKA1112/events{/privacy}",
"received_events_url": "https://api.github.com/users/ARKA1112/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @Narsil ",
"You are welcome, Happy to help!!"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
generator(model="openai/whisper-large") always returns an error. As the error says, the generator expects an input, just like the .flac file above. The generator object also has no parameter called model. There are parameters that can be passed to the generator instance, such as 'batch_size', but to select a model, I believe the parameter has to be passed while instantiating the pipeline, not as a parameter to the instance.
I believe the correct call should be:
generator = pipeline(model="openai/whisper-large", device=0)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22726/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22726",
"html_url": "https://github.com/huggingface/transformers/pull/22726",
"diff_url": "https://github.com/huggingface/transformers/pull/22726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22726.patch",
"merged_at": 1681309226000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22725
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22725/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22725/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22725/events
|
https://github.com/huggingface/transformers/pull/22725
| 1,664,180,208
|
PR_kwDOCUB6oc5OHSy4
| 22,725
|
[Image processors] Fix warnings
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review! After offline discussion, will close this PR in favor of a new one that just updates the warning levels. The removal of `max_size` can be done in a follow-up PR."
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR aims to reduce the number of warnings shown to the user for 2 use cases:
- DETR and friends' `max_size` argument which is deprecated => if this gets approved, I'll run fix-copies to fix the other DETR-based models
- the pattern matching warning when a configuration doesn't have an image processor in the config
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22725/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22725",
"html_url": "https://github.com/huggingface/transformers/pull/22725",
"diff_url": "https://github.com/huggingface/transformers/pull/22725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22725.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22724
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22724/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22724/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22724/events
|
https://github.com/huggingface/transformers/pull/22724
| 1,664,174,281
|
PR_kwDOCUB6oc5OHRjN
| 22,724
|
add fast support and option
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
Addresses #22669
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22724/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22724",
"html_url": "https://github.com/huggingface/transformers/pull/22724",
"diff_url": "https://github.com/huggingface/transformers/pull/22724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22724.patch",
"merged_at": 1681315805000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22723
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22723/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22723/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22723/events
|
https://github.com/huggingface/transformers/pull/22723
| 1,664,155,021
|
PR_kwDOCUB6oc5OHNvE
| 22,723
|
remove wrong doc in readme
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
Fixes a typo caught in #22710
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22723/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22723",
"html_url": "https://github.com/huggingface/transformers/pull/22723",
"diff_url": "https://github.com/huggingface/transformers/pull/22723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22723.patch",
"merged_at": 1681297873000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22722
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22722/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22722/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22722/events
|
https://github.com/huggingface/transformers/issues/22722
| 1,664,130,735
|
I_kwDOCUB6oc5jMJ6v
| 22,722
|
Inconsistency in Model Output [ Token Classification]
|
{
"login": "pratikchhapolika",
"id": 11159549,
"node_id": "MDQ6VXNlcjExMTU5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikchhapolika",
"html_url": "https://github.com/pratikchhapolika",
"followers_url": "https://api.github.com/users/pratikchhapolika/followers",
"following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions",
"organizations_url": "https://api.github.com/users/pratikchhapolika/orgs",
"repos_url": "https://api.github.com/users/pratikchhapolika/repos",
"events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikchhapolika/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,681
| 1,681
| 1,681
|
NONE
| null |
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
from transformers import pipeline
```
```
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
```
```
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
```
```
ner_results = nlp(example)
print(ner_results)
```
**Output 1:**
```
[{'entity': 'B-PER', 'score': 0.9990139, 'index': 4, 'word': 'Wolfgang', 'start': 11, 'end': 19},
{'entity': 'B-LOC', 'score': 0.999645, 'index': 9, 'word': 'Berlin', 'start': 34, 'end': 40}]
```
```
inputs = tokenizer.encode_plus(example, return_tensors="pt", add_special_tokens=True, max_length=512, padding="max_length", truncation=True)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
# Feed the encoded segment into the model to obtain the predicted labels for each token
outputs = model(input_ids, attention_mask=attention_mask)
logits = outputs.logits
predicted_labels = torch.argmax(logits, dim=2)[0]
```
`[0, 0, 0, 0, 3, 0, 0, 0, 0, 0]`
```
label_list = [ "O","B-MISC","I-MISC","B-PER","I-PER","B-ORG","I-ORG","B-LOC","I-LOC"]
final_label_names = [label_list[label] for label in predicted_labels]
```
**Output 2:**
`['O','O','O','O', 'B-PER', 'O','O','O','O','O']`
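Part of the difference between the two outputs comes from the pipeline's post-processing: by default, the `ner` pipeline drops special tokens and any prediction whose label is in `ignore_labels` (which defaults to `["O"]`). A minimal sketch of that filtering step on toy predictions (the tokens and label ids below are illustrative, not real model output):

```python
label_list = ["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
# Toy tokens/predictions standing in for the tokenizer and argmax output above.
tokens = ["[CLS]", "my", "name", "is", "wolfgang", "and", "i", "live", "in", "berlin", "[SEP]"]
predicted_ids = [0, 0, 0, 0, 3, 0, 0, 0, 0, 7, 0]

ignore_labels = ["O"]  # pipeline default
entities = [
    {"word": tok, "entity": label_list[idx]}
    for tok, idx in zip(tokens, predicted_ids)
    if tok not in ("[CLS]", "[SEP]") and label_list[idx] not in ignore_labels
]
print(entities)
# → [{'word': 'wolfgang', 'entity': 'B-PER'}, {'word': 'berlin', 'entity': 'B-LOC'}]
```

With this filtering applied, only the non-`O` tokens survive, which is why the pipeline output looks much shorter than a raw per-token argmax.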
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22722/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22721
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22721/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22721/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22721/events
|
https://github.com/huggingface/transformers/issues/22721
| 1,664,099,589
|
I_kwDOCUB6oc5jMCUF
| 22,721
|
Error when loading LlamaTokenizer
|
{
"login": "creamiracle",
"id": 14272291,
"node_id": "MDQ6VXNlcjE0MjcyMjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/14272291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/creamiracle",
"html_url": "https://github.com/creamiracle",
"followers_url": "https://api.github.com/users/creamiracle/followers",
"following_url": "https://api.github.com/users/creamiracle/following{/other_user}",
"gists_url": "https://api.github.com/users/creamiracle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/creamiracle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/creamiracle/subscriptions",
"organizations_url": "https://api.github.com/users/creamiracle/orgs",
"repos_url": "https://api.github.com/users/creamiracle/repos",
"events_url": "https://api.github.com/users/creamiracle/events{/privacy}",
"received_events_url": "https://api.github.com/users/creamiracle/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
  "Given the error, the tokenizer checkpoints that you are using are clearly outdated. Convert the model again using `main`, then upload them to your LAN. Also note that using the fast tokenizer will require having the latest `tokenizers` library"
] | 1,681
| 1,681
| 1,681
|
NONE
| null |
### System Info
centos 7
python version is 3.7.12
transformers 4.28.0.dev0
sentencepiece 0.1.97
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
My situation is using a LAN, so I can't use a call like tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf", add_eos_token=True), which would download the files.
Because of this, I have uploaded the llama-7b-hf files to the LAN and use the directory to load, such as tokenizer = LlamaTokenizer.from_pretrained("/home/qilin7/chat/llama-7b-hf", add_eos_token=True).
When I do this, I get an error like the one below:
```
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_1357/3612021170.py in <module>
      5 tokenizer = LlamaTokenizer.from_pretrained("/home/qilin7/chat/llama-7b-hf", add_eos_token=True)
      6 tokenizer.pad_token = tokenizer.eos_token
      7 tokenizer.pad_token_id = tokenizer.eos_token_id

/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
   1818                 local_files_only=local_files_only,
   1819                 _commit_hash=commit_hash,
-> 1820                 **kwargs,
   1821             )
   1822

/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, local_files_only, _commit_hash, *init_inputs, **kwargs)
   1963         # Instantiate tokenizer.
   1964         try:
-> 1965             tokenizer = cls(*init_inputs, **init_kwargs)
   1966         except OSError:
   1967             raise OSError(

/opt/conda/lib/python3.7/site-packages/transformers/models/llama/tokenization_llama.py in __init__(self, vocab_file, unk_token, bos_token, eos_token, pad_token, sp_model_kwargs, add_bos_token, add_eos_token, clean_up_tokenization_spaces, **kwargs)
     94         self.add_eos_token = add_eos_token
     95         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
---> 96         self.sp_model.Load(vocab_file)
     97
     98     def __getstate__(self):

/media/cfs/.pylib/lib/python3.7/site-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto)
    903     if model_proto:
    904       return self.LoadFromSerializedProto(model_proto)
--> 905     return self.LoadFromFile(model_file)
    906
    907

/media/cfs/.pylib/lib/python3.7/site-packages/sentencepiece/__init__.py in LoadFromFile(self, arg)
    308
    309   def LoadFromFile(self, arg):
--> 310     return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
    311
    312   def _EncodeAsIds(self, text, enable_sampling, nbest_size, alpha, add_bos, add_eos, reverse, emit_unk_piece):

RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```
### Expected behavior
I asked about this problem on sentencepiece, and they replied here:
https://github.com/google/sentencepiece/issues/850#issuecomment-1504857453
Thanks.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22721/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22720
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22720/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22720/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22720/events
|
https://github.com/huggingface/transformers/pull/22720
| 1,663,820,062
|
PR_kwDOCUB6oc5OGFMt
| 22,720
|
Ko translate fast tokenizer
|
{
"login": "kihoon71",
"id": 75935546,
"node_id": "MDQ6VXNlcjc1OTM1NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kihoon71",
"html_url": "https://github.com/kihoon71",
"followers_url": "https://api.github.com/users/kihoon71/followers",
"following_url": "https://api.github.com/users/kihoon71/following{/other_user}",
"gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions",
"organizations_url": "https://api.github.com/users/kihoon71/orgs",
"repos_url": "https://api.github.com/users/kihoon71/repos",
"events_url": "https://api.github.com/users/kihoon71/events{/privacy}",
"received_events_url": "https://api.github.com/users/kihoon71/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22720). All of your documentation changes will be reflected on that endpoint."
] | 1,681
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
# What does this PR do?
Translated the fast_tokenizer.mdx file of the documentation to Korean.
Thank you in advance for your review.
Part of [#20179](https://github.com/huggingface/transformers/issues/20179)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Team PseudoLab, may you please review this PR?
[@0525hhgus](https://github.com/0525hhgus), [@KIHOON71](https://github.com/KIHOON71), [@sim-so](https://github.com/sim-so), [@gabrielwithappy](https://github.com/gabrielwithappy), [@HanNayeoniee](https://github.com/HanNayeoniee), [@wonhyeongseo](https://github.com/wonhyeongseo), [@jungnerd](https://github.com/jungnerd)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22720/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22720",
"html_url": "https://github.com/huggingface/transformers/pull/22720",
"diff_url": "https://github.com/huggingface/transformers/pull/22720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22720.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22719
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22719/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22719/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22719/events
|
https://github.com/huggingface/transformers/issues/22719
| 1,663,664,971
|
I_kwDOCUB6oc5jKYNL
| 22,719
|
GPT2DoubleHeadsModel Multiple Choice Head Always Has 1 Out Feature
|
{
"login": "ImperatorSmugleaf",
"id": 57930837,
"node_id": "MDQ6VXNlcjU3OTMwODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/57930837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ImperatorSmugleaf",
"html_url": "https://github.com/ImperatorSmugleaf",
"followers_url": "https://api.github.com/users/ImperatorSmugleaf/followers",
"following_url": "https://api.github.com/users/ImperatorSmugleaf/following{/other_user}",
"gists_url": "https://api.github.com/users/ImperatorSmugleaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ImperatorSmugleaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ImperatorSmugleaf/subscriptions",
"organizations_url": "https://api.github.com/users/ImperatorSmugleaf/orgs",
"repos_url": "https://api.github.com/users/ImperatorSmugleaf/repos",
"events_url": "https://api.github.com/users/ImperatorSmugleaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/ImperatorSmugleaf/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
  "Hey! Thanks for opening an issue and reporting this. I'll check if this is expected or not, and if we can open a PR to fix this. \r\nSince gpt2 is a very old model, touching it might be a bit complicated 😓 ",
"Okay! So it seems that the way you are using the model is a bit different from what [the documentation mentions ](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2#transformers.GPT2DoubleHeadsModel). \r\nHere is a version adapted to your code:\r\n```python \r\nimport torch\r\nfrom transformers import AutoTokenizer, GPT2DoubleHeadsModel\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2DoubleHeadsModel.from_pretrained(\"gpt2\")\r\ntokenizer.pad_token = tokenizer.eos_token \r\n# Add a [CLS] to the vocabulary (we should train it also!)\r\nnum_added_tokens = tokenizer.add_special_tokens({\"cls_token\": \"[CLS]\"})\r\n# Update the model embeddings with the new vocabulary size\r\nembedding_layer = model.resize_token_embeddings(len(tokenizer))\r\n \r\nchoices = [\"I love NLP! [CLS]\", \"Hello, world! [CLS]\", \"I don't like carrots [CLS]\", \"This is bad. [CLS]\"]\r\nencoded_choices = [tokenizer.encode(s, padding = \"max_length\") for s in choices]\r\ncls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]\r\n\r\ninput_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 4\r\nmc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1\r\n\r\noutputs = model(input_ids, mc_token_ids=mc_token_ids, mc_labels = torch.tensor([0]))\r\nlm_logits = outputs.logits\r\nmc_logits = outputs.mc_logits\r\nmc_loss = outputs.mc_loss\r\n```\r\nthis seems to produce a correct output, though I am not sur it follows your intended usage. Closing as it is expected. "
] | 1,681
| 1,686
| 1,686
|
NONE
| null |
### System Info
- `transformers` version: 4.27.4
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.0
- Huggingface_hub version: 0.13.4
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the GPT2DoubleHeadsModel to label tweets as either bot generated or human generated, and I encountered an issue where no matter what I did, the multiple choice head for the model only ever had 1 out feature (example in second code block below). I wrote some small sample code for a sentiment classifier to demonstrate.
```Python
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', do_lower_case=True, pad_token='0', padding_side='right', truncation_side='right')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2', num_labels=2)
print(model)
example_sequences = ["I love NLP!", "Hello, world!", "I don't like carrots", "This is bad."]
example_labels = torch.LongTensor([1, 1, 0, 0])
input_ids_and_masks = tokenizer(example_sequences, truncation=True, padding=True, return_tensors='pt')
model(input_ids = input_ids_and_masks['input_ids'], attention_mask=input_ids_and_masks['attention_mask'], mc_labels=example_labels)
```
```
GPT2DoubleHeadsModel(
(transformer): GPT2Model(
(wte): Embedding(50257, 768)
(wpe): Embedding(1024, 768)
(drop): Dropout(p=0.1, inplace=False)
(h): ModuleList(
(0): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
(ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(mlp): GPT2MLP(
(c_fc): Conv1D()
(c_proj): Conv1D()
(act): NewGELUActivation()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=768, out_features=50257, bias=False)
(multiple_choice_head): SequenceSummary(
(summary): Linear(in_features=768, out_features=1, bias=True)
(activation): Identity()
(first_dropout): Dropout(p=0.1, inplace=False)
(last_dropout): Identity()
)
)
```
```
Traceback (most recent call last):
File "C:\Users\...\train.py", line 201, in <module>
model(input_ids = input_ids_and_masks['input_ids'], attention_mask=input_ids_and_masks['attention_mask'], mc_labels=example_labels)
File "C:\Users\_\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\_\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1318, in forward
mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1))
File "C:\Users\_\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\_\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\loss.py", line 1174, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "C:\Users\_\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 3026, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (1) to match target batch_size (4).
```
After spending some time debugging, I noticed that the double heads model class in `modeling_gpt2.py` manually sets `config.num_labels = 1` in its `__init__`.
```Python
class GPT2DoubleHeadsModel(GPT2PreTrainedModel):
_keys_to_ignore_on_load_missing = [r"attn.masked_bias", r"attn.bias", r"lm_head.weight"]
def __init__(self, config):
super().__init__(config)
config.num_labels = 1
self.transformer = GPT2Model(config)
self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
self.multiple_choice_head = SequenceSummary(config)
# Model parallel
self.model_parallel = False
self.device_map = None
# Initialize weights and apply final processing
self.post_init()
```
After removing that line (`config.num_labels = 1`), my code worked perfectly. I was going to open a pull request for this, but I had trouble installing all the dependencies after I forked the repo :(, so this was the next best thing I could do.
### Expected behavior
Firstly, I would expect the multiple choice head of the model to have the number of out features specified in num_labels, so that it is able to perform multiclass classification tasks.
Secondly, I would expect that when giving the model batched input with the correct dimensions, as specified in the [documentation for the double heads model](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt2#transformers.GPT2DoubleHeadsModel), the model would properly run the batched input.
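For reference, here is a minimal, torch-free sketch of the shape mismatch (the sizes are illustrative): the documented input nests the choices under a batch dimension, which the reproduction snippet above omits, so the model sees `num_choices` as the batch size.

```python
# GPT2DoubleHeadsModel expects input_ids of shape
# (batch_size, num_choices, sequence_length).
num_choices, seq_len = 4, 12

flat_shape = (num_choices, seq_len)        # what the snippet produced
nested_shape = (1, num_choices, seq_len)   # what the model expects

# In torch this is the difference between `input_ids` and
# `input_ids.unsqueeze(0)` before calling the model.
print(flat_shape, "->", nested_shape)
```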
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22719/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22729
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22729/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22729/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22729/events
|
https://github.com/huggingface/transformers/issues/22729
| 1,664,470,108
|
I_kwDOCUB6oc5jNcxc
| 22,729
|
Report a hyperlink mistake
|
{
"login": "PolarisRisingWar",
"id": 48322321,
"node_id": "MDQ6VXNlcjQ4MzIyMzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/48322321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PolarisRisingWar",
"html_url": "https://github.com/PolarisRisingWar",
"followers_url": "https://api.github.com/users/PolarisRisingWar/followers",
"following_url": "https://api.github.com/users/PolarisRisingWar/following{/other_user}",
"gists_url": "https://api.github.com/users/PolarisRisingWar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PolarisRisingWar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PolarisRisingWar/subscriptions",
"organizations_url": "https://api.github.com/users/PolarisRisingWar/orgs",
"repos_url": "https://api.github.com/users/PolarisRisingWar/repos",
"events_url": "https://api.github.com/users/PolarisRisingWar/events{/privacy}",
"received_events_url": "https://api.github.com/users/PolarisRisingWar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting! Would you be interested in opening a PR to fix it? The exact file to be modified is https://github.com/huggingface/transformers/blob/main/docs/source/en/tasks/token_classification.mdx and then you would have the contribution :) Happy to open a PR otherwise.\r\n\r\nTransfering to the transformers repo as well as that's the correct one",
"Happy to open a PR if needed",
"@mayankagarwals Great! Feel free to open a PR and ping me for review :) ",
"Sure @amyeroberts ",
"@amyeroberts https://github.com/huggingface/transformers/pull/22765/files . Please review :) "
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
The website is https://huggingface.co/docs/transformers/tasks/token_classification
And the sentence is: Mapping all tokens to their corresponding word with the [word_ids](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html#tokenizers.Encoding.word_ids) method.
The hyperlink is mistaken: it should point to the tokenizer in the transformers package (https://huggingface.co/docs/transformers/v4.27.2/en/main_classes/tokenizer#transformers.BatchEncoding.word_ids), but the link given points to the tokenizers package instead.
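For context, a minimal sketch of the label alignment that `BatchEncoding.word_ids` enables in that tutorial. The `word_ids` list below is a hypothetical output (not produced by a real tokenizer here), shaped like what the method returns:

```python
# Hypothetical output of BatchEncoding.word_ids() for one example:
# None marks special tokens; repeated indices mark sub-word pieces.
word_ids = [None, 0, 1, 1, 2, None]
word_labels = [3, 7, 0]  # one label id per word

aligned = []
previous = None
for wid in word_ids:
    if wid is None:
        aligned.append(-100)              # special token, ignored by the loss
    elif wid != previous:
        aligned.append(word_labels[wid])  # label only the first sub-token of a word
    else:
        aligned.append(-100)              # mask the remaining sub-tokens
    previous = wid

print(aligned)  # [-100, 3, 7, -100, 0, -100]
```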
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22729/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22718
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22718/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22718/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22718/events
|
https://github.com/huggingface/transformers/issues/22718
| 1,663,622,905
|
I_kwDOCUB6oc5jKN75
| 22,718
|
Failed to create cublas handle: cublas error
|
{
"login": "cosmo3769",
"id": 53268607,
"node_id": "MDQ6VXNlcjUzMjY4NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cosmo3769",
"html_url": "https://github.com/cosmo3769",
"followers_url": "https://api.github.com/users/cosmo3769/followers",
"following_url": "https://api.github.com/users/cosmo3769/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions",
"organizations_url": "https://api.github.com/users/cosmo3769/orgs",
"repos_url": "https://api.github.com/users/cosmo3769/repos",
"events_url": "https://api.github.com/users/cosmo3769/events{/privacy}",
"received_events_url": "https://api.github.com/users/cosmo3769/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @cosmo3769, thanks for reporting this issue. I'll look into it. \r\n\r\np.s. I think you might have tagged the wrong Amy :) ",
"> Hi @cosmo3769, thanks for reporting this issue. I'll look into it.\r\n\r\nSure.\r\n\r\n> p.s. I think you might have tagged the wrong Amy :)\r\n\r\nOh yeah, sorry for that. 😅\r\n",
"@cosmo3769 It seems this issue might be coming from the kaggle notebook setup and/or hardware. I'm able to run the snippet loading `TFSegformerForSemanticSegmentation` without issue on a linux machine with 2 GPUs. \r\n\r\nCould you share some more information about the running environment: copy-paste the output of running `! transformers-cli env` in a cell. \r\n\r\nLooking up the error online, some other users have reported similar issues (e.g. [here](https://stackoverflow.com/questions/53698035/failed-to-get-convolution-algorithm-this-is-probably-because-cudnn-failed-to-in)) which were resolved with setting `os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'` which is simple enough to warrant a try :) \r\n\r\n",
"Yes, it solves the issue. Thanks. "
] | 1,681
| 1,682
| 1,682
|
NONE
| null |
### System Info
Kaggle with Accelerator **GPU P100**
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to Reproduce the behaviour:
1. Go to Kaggle. Create a new notebook. Switch to **GPU P100** accelerator.
2. Use `TFSegformerForSemanticSegmentation`:
```
from transformers import TFSegformerForSemanticSegmentation
model_checkpoint = "nvidia/mit-b0"
id2label = {0: "outer", 1: "inner", 2: "border"}
label2id = {label: id for id, label in id2label.items()}
num_labels = len(id2label)
model = TFSegformerForSemanticSegmentation.from_pretrained(
model_checkpoint,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
)
```
While running this code block, I get this `cublas` error:
```
2023-04-12 02:44:14.603089: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:219] failed to create cublas handle: cublas error
2023-04-12 02:44:14.603208: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:221] Failure to initialize cublas may be due to OOM (cublas needs some free memory when you initialize it, and your deep-learning framework may have preallocated more than its fair share), or may be because this binary was not built with support for the GPU in your machine.
---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
/tmp/ipykernel_23/279563332.py in <module>
10 id2label=id2label,
11 label2id=label2id,
---> 12 ignore_mismatched_sizes=True,
13 )
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2764 model(model.dummy_inputs) # build the network with dummy inputs
2765 else:
-> 2766 model(model.dummy_inputs) # build the network with dummy inputs
2767
2768 if safetensors_from_pt:
/opt/conda/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
430
431 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 432 return func(self, **unpacked_inputs)
433
434 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/opt/conda/lib/python3.7/site-packages/transformers/models/segformer/modeling_tf_segformer.py in call(self, pixel_values, labels, output_attentions, output_hidden_states, return_dict)
859 output_attentions=output_attentions,
860 output_hidden_states=True, # we need the intermediate hidden states
--> 861 return_dict=return_dict,
862 )
863
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
430
431 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 432 return func(self, **unpacked_inputs)
433
434 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/opt/conda/lib/python3.7/site-packages/transformers/models/segformer/modeling_tf_segformer.py in call(self, pixel_values, output_attentions, output_hidden_states, return_dict, training)
484 output_hidden_states=output_hidden_states,
485 return_dict=return_dict,
--> 486 training=training,
487 )
488 sequence_output = encoder_outputs[0]
/opt/conda/lib/python3.7/site-packages/transformers/models/segformer/modeling_tf_segformer.py in call(self, pixel_values, output_attentions, output_hidden_states, return_dict, training)
414 embedding_layer, block_layer, norm_layer = x
415 # first, obtain patch embeddings
--> 416 hidden_states, height, width = embedding_layer(hidden_states)
417
418 # second, send embeddings through blocks
/opt/conda/lib/python3.7/site-packages/transformers/models/segformer/modeling_tf_segformer.py in call(self, pixel_values)
87
88 def call(self, pixel_values: tf.Tensor) -> Tuple[tf.Tensor, int, int]:
---> 89 embeddings = self.proj(self.padding(pixel_values))
90 height = shape_list(embeddings)[1]
91 width = shape_list(embeddings)[2]
NotFoundError: Exception encountered when calling layer 'proj' (type Conv2D).
{{function_node __wrapped__Conv2D_device_/job:localhost/replica:0/task:0/device:GPU:0}} No algorithm worked! Error messages:
Profiling failure on CUDNN engine 1: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(4294): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd_.handle(), input_data.opaque(), filter_.handle(), filter_data.opaque(), conv_.handle(), ToConvForwardAlgo(algo), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd_.handle(), output_data.opaque())'
Profiling failure on CUDNN engine 0: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(4294): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd_.handle(), input_data.opaque(), filter_.handle(), filter_data.opaque(), conv_.handle(), ToConvForwardAlgo(algo), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd_.handle(), output_data.opaque())'
Profiling failure on CUDNN engine 2: UNKNOWN: CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(4294): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd_.handle(), input_data.opaque(), filter_.handle(), filter_data.opaque(), conv_.handle(), ToConvForwardAlgo(algo), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd_.handle(), output_data.opaque())' [Op:Conv2D]
Call arguments received by layer 'proj' (type Conv2D):
• inputs=tf.Tensor(shape=(3, 518, 518, 3), dtype=float32)
```
### Expected behavior
When I run the PyTorch version of `Segformer`, it loads the model successfully.
```
from transformers import SegformerForSemanticSegmentation
model_checkpoint = "nvidia/mit-b0"
id2label = {0: "outer", 1: "inner", 2: "border"}
label2id = {label: id for id, label in id2label.items()}
num_labels = len(id2label)
model = SegformerForSemanticSegmentation.from_pretrained(
model_checkpoint,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
)
```
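For anyone hitting the same error: per the discussion above, setting `TF_FORCE_GPU_ALLOW_GROWTH` before TensorFlow initializes its GPU context resolved the issue. A minimal sketch:

```python
import os

# Must be set before TensorFlow creates its GPU context (i.e. before the
# first `import tensorflow` / model load), otherwise it has no effect.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# from transformers import TFSegformerForSemanticSegmentation  # import after setting the flag
```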
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22718/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22718/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22717
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22717/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22717/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22717/events
|
https://github.com/huggingface/transformers/issues/22717
| 1,663,613,149
|
I_kwDOCUB6oc5jKLjd
| 22,717
|
The `xla_device` argument has been deprecated
|
{
"login": "yqiz-98",
"id": 102643910,
"node_id": "U_kgDOBh44xg",
"avatar_url": "https://avatars.githubusercontent.com/u/102643910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yqiz-98",
"html_url": "https://github.com/yqiz-98",
"followers_url": "https://api.github.com/users/yqiz-98/followers",
"following_url": "https://api.github.com/users/yqiz-98/following{/other_user}",
"gists_url": "https://api.github.com/users/yqiz-98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yqiz-98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yqiz-98/subscriptions",
"organizations_url": "https://api.github.com/users/yqiz-98/orgs",
"repos_url": "https://api.github.com/users/yqiz-98/repos",
"events_url": "https://api.github.com/users/yqiz-98/events{/privacy}",
"received_events_url": "https://api.github.com/users/yqiz-98/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @yqiz-98, thanks for reporting this issue. The warning is being raised, as the [config for the checkpoint mrm8488/t5-base-finetuned-question-generation-ap](https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap/blob/5b3fa1afa0bba84b23b2c27eb7b4bc35aae63876/config.json#L51) contains [the argument `xla_device`](https://github.com/huggingface/transformers/blob/370f0ca18c8e4577357df59936e790acdecef4ac/src/transformers/configuration_utils.py#L363). As the error message indicates, this is now deprecated and can be removed from the config. \r\n\r\nIt would be up to the user [mrm8488](https://huggingface.co/mrm8488) whether they want to make this update to the configuration file. The update would mean that the config file isn't fully compatible with versions of transformers < 4.4.x. I'd suggest opening a discussion [on the repo](https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap/discussions) to ask about this. \r\n\r\nWith this config file, the program can still run normally. I'd be surprised if this caused significant differences to the loading speed. One update we can do on our end, if update `logger.warning` to `logger.warning_once` so that the message is only printed to terminal once. \r\n ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### System Info
Transformers v4.4.0 pycharm python3.8
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap")
def get_question(answer, context, max_length=64):
input_text = "answer: %s context: %s </s>" % (answer, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
output = tokenizer.decode(output[0])[6:][:-4]
return output
context = "Alan Turing defined AI as the science that enables computers to perform tasks that require human intelligence. Academics at Stanford University consider AI to be the science and engineering of intelligent machines, especially intelligent computer programs. Wikipedia defines AI as the intelligence shown by artificially created systems, and the term also refers to the scientific field that studies whether and how such intelligent systems can be achieved. No matter how it is defined, it cannot be separated from intelligence. However, so far, human beings have not been able to give a unified definition of intelligence, and generally speaking, intelligence only refers to the expression form of human intelligence. Professor Zhong Yixin, former chairman of the Chinese Society for Artificial Intelligence, believes that human intelligence consists of finding problems, defining problems and solving problems, while artificial intelligence is only capable of solving problems. The author believes that intelligence is a kind of order, the embodiment of information, and also the ability to make the world develop in an orderly direction. Sadly, according to the principle of increasing entropy, no matter what the agents do, the universe is always moving in the direction of increasing entropy, which is more and more disorder and chaos. It is not known whether this is God's deliberate arrangement, or human observation of the universe beyond the universe."
answer = "solving problems"
output = get_question(answer, context)
#########[error message]:
The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file.
The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file.
The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file.
The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can safely remove it from your `config.json` file.
### Expected behavior
The program still runs normally, but this warning slows down script loading. How can this warning be eliminated? Thank you.
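As the maintainer's reply suggests, the warning comes from a deprecated `xla_device` key in the checkpoint's `config.json`. A sketch of stripping it from a local copy of the file (the config contents here are a toy stand-in, and the path is illustrative):

```python
import json
import os
import tempfile

# Toy config standing in for the checkpoint's real config.json.
cfg = {"model_type": "t5", "xla_device": True}
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(cfg, f)

# Remove the deprecated key and rewrite the file.
with open(path) as f:
    cfg = json.load(f)
cfg.pop("xla_device", None)  # safe even if the key is already gone
with open(path, "w") as f:
    json.dump(cfg, f)
```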
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22717/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22716
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22716/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22716/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22716/events
|
https://github.com/huggingface/transformers/issues/22716
| 1,663,508,373
|
I_kwDOCUB6oc5jJx-V
| 22,716
|
Unable to install transformers due to RuntimeError with libssl.so.10
|
{
"login": "Kodhandarama",
"id": 44093589,
"node_id": "MDQ6VXNlcjQ0MDkzNTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/44093589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kodhandarama",
"html_url": "https://github.com/Kodhandarama",
"followers_url": "https://api.github.com/users/Kodhandarama/followers",
"following_url": "https://api.github.com/users/Kodhandarama/following{/other_user}",
"gists_url": "https://api.github.com/users/Kodhandarama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kodhandarama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kodhandarama/subscriptions",
"organizations_url": "https://api.github.com/users/Kodhandarama/orgs",
"repos_url": "https://api.github.com/users/Kodhandarama/repos",
"events_url": "https://api.github.com/users/Kodhandarama/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kodhandarama/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Do you have openssl installd in your environment ?\r\n\r\n`conda install -c anaconda openssl` ? \r\n\r\nThis library is missing from your environment and `tokenizers` is looking for it when loading.\r\nOr do you have another version of ssl installed maybe ? You can do `locate libssl` to try and find it.",
"@Kodhandarama A similar issue has been raised in #21805. As noted there, this isn't a Transformers issue per se, but appears to arise when installing with `conda`. Other users reported installing with `pip` or running `conda update tokenizers` resolved the issue. ",
"@ArthurZucker FYI.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### System Info
Running this code on a Red Hat Linux machine.
I tried installing transformers with all the methods mentioned in https://huggingface.co/docs/transformers/installation
I am hitting the same error (related to pipelines):
Traceback (most recent call last):
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/utils/import_utils.py", line 1125, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/eng/s/sxc220013/anaconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/pipelines/__init__.py", line 29, in <module>
from ..models.auto.configuration_auto import AutoConfig
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/models/__init__.py", line 15, in <module>
from . import (
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/models/mt5/__init__.py", line 36, in <module>
from ..t5.tokenization_t5_fast import T5TokenizerFast
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/models/t5/tokenization_t5_fast.py", line 24, in <module>
from ...tokenization_utils_fast import PreTrainedTokenizerFast
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/tokenization_utils_fast.py", line 25, in <module>
import tokenizers.pre_tokenizers as pre_tokenizers_fast
File "/home/eng/s/sxc220013/anaconda3/lib/python3.9/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: libssl.so.10: cannot open shared object file: No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/utils/import_utils.py", line 1115, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/eng/s/sxc220013/Documents/transformers/src/transformers/utils/import_utils.py", line 1127, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
libssl.so.10: cannot open shared object file: No such file or directory
### Who can help?
@Narsil @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I ran the following commands :
conda install -c huggingface transformers
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
### Expected behavior
I would expect that the following python command prints the output of the sentiment analysis.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22716/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22715
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22715/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22715/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22715/events
|
https://github.com/huggingface/transformers/issues/22715
| 1,663,408,106
|
I_kwDOCUB6oc5jJZfq
| 22,715
|
Can't load the model from your tutorial for inference
|
{
"login": "kopyl",
"id": 17604849,
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kopyl",
"html_url": "https://github.com/kopyl",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"repos_url": "https://api.github.com/users/kopyl/repos",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Honestly, there is just too much info in docs which does not help.",
"@kopyl, the first error due to GPU, you moved input into `.to(\"cuda\")`, but model is still on `cpu`. The second error due to lack of preprocessing [file](https://huggingface.co/microsoft/git-base/blob/main/preprocessor_config.json), so you can download it or\r\n`processor_trained = AutoProcessor.from_pretrained(\"microsoft/git-base\")`",
"@Xrenya oh, thanks a lot, this helped ❤️ \r\n\r\nAre you a maintainer of Transformers? If so, could I please ask you to add this info to the guide?\r\n\r\nAnd also do you know whether I should load a custom processor from training or the `microsoft/git-base` does the same?\r\nOr do I just train only the model and don't interfere with the processor while training and leaving the old one does not have any side effects?",
"I think it is already contain that information in the following sections **Preprocess the dataset** and **Load a base model** in [doc](https://huggingface.co/docs/transformers/main/tasks/image_captioning). \r\n```\r\nfrom transformers import AutoProcessor\r\n\r\ncheckpoint = \"microsoft/git-base\"\r\nprocessor = AutoProcessor.from_pretrained(checkpoint)\r\n```\r\n```\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint)\r\n```\r\nIf your custom preprocessing is different (parameters, e.g. image_mean, image_std etc. ) from `microsoft/git-base` then you should your preprocessing for training and inference, otherwise, you can just use their preprocessing. ",
"@Xrenya thanks :)\r\n\r\nHaven't seen this info in the doc :(",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
### System Info
Linux. Doesn't matter
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://huggingface.co/docs/transformers/main/tasks/image_captioning
### Expected behavior
I tried this:
`model_trained = AutoModelForCausalLM.from_pretrained("/workspace/git-base-trainer/checkpoint-100")`
But when I ran
```
inputs = processor(images=image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values
generated_ids = model_trained.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
```
I got this error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[61], line 4
1 inputs = processor(images=image, return_tensors="pt").to("cuda")
2 pixel_values = inputs.pixel_values
----> 4 generated_ids = model_trained.generate(pixel_values=pixel_values, max_length=50)
5 generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
6 # print(generated_caption)
File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1406, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1400 raise ValueError(
1401 f"num_return_sequences has to be 1, but is {generation_config.num_return_sequences} when doing"
1402 " greedy search."
1403 )
1405 # 11. run greedy search
-> 1406 return self.greedy_search(
1407 input_ids,
1408 logits_processor=logits_processor,
1409 stopping_criteria=stopping_criteria,
1410 pad_token_id=generation_config.pad_token_id,
1411 eos_token_id=generation_config.eos_token_id,
1412 output_scores=generation_config.output_scores,
1413 return_dict_in_generate=generation_config.return_dict_in_generate,
1414 synced_gpus=synced_gpus,
1415 **model_kwargs,
1416 )
1418 elif is_contrastive_search_gen_mode:
1419 if generation_config.num_return_sequences > 1:
File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2201, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2198 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
2200 # forward pass to get next token
-> 2201 outputs = self(
2202 **model_inputs,
2203 return_dict=True,
2204 output_attentions=output_attentions,
2205 output_hidden_states=output_hidden_states,
2206 )
2208 if synced_gpus and this_peer_finished:
2209 continue # don't waste resources running the code we don't need
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/transformers/models/git/modeling_git.py:1496, in GitForCausalLM.forward(self, input_ids, attention_mask, position_ids, pixel_values, head_mask, inputs_embeds, labels, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
1493 if labels is not None:
1494 use_cache = False
-> 1496 outputs = self.git(
1497 input_ids,
1498 attention_mask=attention_mask,
1499 position_ids=position_ids,
1500 pixel_values=pixel_values,
1501 head_mask=head_mask,
1502 inputs_embeds=inputs_embeds,
1503 past_key_values=past_key_values,
1504 use_cache=use_cache,
1505 output_attentions=output_attentions,
1506 output_hidden_states=output_hidden_states,
1507 return_dict=return_dict,
1508 )
1510 sequence_output = outputs[0]
1511 logits = self.output(sequence_output)
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/transformers/models/git/modeling_git.py:1236, in GitModel.forward(self, input_ids, attention_mask, position_ids, pixel_values, head_mask, inputs_embeds, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
1233 if pixel_values is not None:
1234 if pixel_values.ndim == 4:
1235 # here we assume pixel_values is of shape (batch_size, num_channels, height, width)
-> 1236 visual_features = self.image_encoder(pixel_values).last_hidden_state
1238 elif pixel_values.ndim == 5:
1239 # here we assume pixel_values is of shape (batch_size, num_frames, num_channels, height, width)
1240 visual_features = []
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/transformers/models/git/modeling_git.py:1039, in GitVisionModel.forward(self, pixel_values, output_attentions, output_hidden_states, return_dict)
1016 r"""
1017 Returns:
1018
(...)
1035 >>> last_hidden_state = outputs.last_hidden_state
1036 ```"""
1037 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1039 return self.vision_model(
1040 pixel_values=pixel_values,
1041 output_attentions=output_attentions,
1042 output_hidden_states=output_hidden_states,
1043 return_dict=return_dict,
1044 )
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/transformers/models/git/modeling_git.py:965, in GitVisionTransformer.forward(self, pixel_values, output_attentions, output_hidden_states, return_dict)
962 if pixel_values is None:
963 raise ValueError("You have to specify pixel_values")
--> 965 hidden_states = self.embeddings(pixel_values)
966 hidden_states = self.pre_layrnorm(hidden_states)
968 encoder_outputs = self.encoder(
969 inputs_embeds=hidden_states,
970 output_attentions=output_attentions,
971 output_hidden_states=output_hidden_states,
972 return_dict=return_dict,
973 )
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/transformers/models/git/modeling_git.py:630, in GitVisionEmbeddings.forward(self, pixel_values)
628 def forward(self, pixel_values: torch.FloatTensor) -> torch.Tensor:
629 batch_size = pixel_values.shape[0]
--> 630 patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid]
631 patch_embeds = patch_embeds.flatten(2).transpose(1, 2)
633 class_embeds = self.class_embedding.expand(batch_size, 1, -1)
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
462 def forward(self, input: Tensor) -> Tensor:
--> 463 return self._conv_forward(input, self.weight, self.bias)
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
455 if self.padding_mode != 'zeros':
456 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
457 weight, bias, self.stride,
458 _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
460 self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
I tried running this:
`processor_trained = AutoProcessor.from_pretrained("/workspace/git-base-trainer/checkpoint-100")`
But immediately got this error:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[63], line 3
1 # tokenizer = AutoTokenizer.from_pretrained("/workspace/git-base-trainer/checkpoint-50")
----> 3 processor_trained = AutoProcessor.from_pretrained("/workspace/git-base-trainer/checkpoint-100")
4 # model_trained = AutoModelForCausalLM.from_pretrained("/workspace/git-base-trainer/checkpoint-100")
File /usr/local/lib/python3.10/dist-packages/transformers/models/auto/processing_auto.py:276, in AutoProcessor.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
274 # Last try: we use the PROCESSOR_MAPPING.
275 if type(config) in PROCESSOR_MAPPING:
--> 276 return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs)
278 # At this stage, there doesn't seem to be a `Processor` class available for this model, so let's try a
279 # tokenizer.
280 try:
File /usr/local/lib/python3.10/dist-packages/transformers/processing_utils.py:184, in ProcessorMixin.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
153 @classmethod
154 def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
155 r"""
156 Instantiate a processor associated with a pretrained model.
157
(...)
182 [`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`].
183 """
--> 184 args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
185 return cls(*args)
File /usr/local/lib/python3.10/dist-packages/transformers/processing_utils.py:228, in ProcessorMixin._get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
225 else:
226 attribute_class = getattr(transformers_module, class_name)
--> 228 args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
229 return args
File /usr/local/lib/python3.10/dist-packages/transformers/models/auto/image_processing_auto.py:315, in AutoImageProcessor.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
312 trust_remote_code = kwargs.pop("trust_remote_code", False)
313 kwargs["_from_auto"] = True
--> 315 config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
316 image_processor_class = config_dict.get("image_processor_type", None)
317 image_processor_auto_map = None
File /usr/local/lib/python3.10/dist-packages/transformers/image_processing_utils.py:268, in ImageProcessingMixin.get_image_processor_dict(cls, pretrained_model_name_or_path, **kwargs)
265 image_processor_file = IMAGE_PROCESSOR_NAME
266 try:
267 # Load from local folder or from cache or download from model Hub and cache
--> 268 resolved_image_processor_file = cached_file(
269 pretrained_model_name_or_path,
270 image_processor_file,
271 cache_dir=cache_dir,
272 force_download=force_download,
273 proxies=proxies,
274 resume_download=resume_download,
275 local_files_only=local_files_only,
276 use_auth_token=use_auth_token,
277 user_agent=user_agent,
278 revision=revision,
279 subfolder=subfolder,
280 )
281 except EnvironmentError:
282 # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to
283 # the original exception.
284 raise
File /usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:380, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
378 if not os.path.isfile(resolved_file):
379 if _raise_exceptions_for_missing_entries:
--> 380 raise EnvironmentError(
381 f"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout "
382                             f"'https://huggingface.co/{path_or_repo_id}/{revision}' for available files."
383 )
384 else:
385 return None
OSError: /workspace/git-base-trainer/checkpoint-100 does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//workspace/git-base-trainer/checkpoint-100/None' for available files.
```
But when I ran this (with the old model):
```
inputs = processor(images=image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```
, I was getting a different caption than from the raw `microsoft/git-base` model, meaning that it was fine-tuned, but for some reason (why the hell?) it overrode the loaded model and stayed in memory.
Meaning you can spend thousands of dollars on training but can't just load the model so it works the same as after training.
Could you please provide a clear tutorial on how to be able to do the inference from the same fine-tuned model after a server was shut down?
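Pulling the comments' advice together: the fine-tuned checkpoint only contains a processor if a `preprocessor_config.json` was saved into it; otherwise the base checkpoint's processor is the right one, since fine-tuning the model does not alter the processor unless it was customised. A minimal sketch of that fallback (the helper name `processor_source` is illustrative, not a Transformers API):

```python
import os


def processor_source(checkpoint_dir: str, base_checkpoint: str = "microsoft/git-base") -> str:
    """Return where to load the processor from: the fine-tuned checkpoint if it
    saved a preprocessor_config.json, otherwise the base checkpoint."""
    if os.path.isfile(os.path.join(checkpoint_dir, "preprocessor_config.json")):
        return checkpoint_dir
    return base_checkpoint
```

Then `processor = AutoProcessor.from_pretrained(processor_source("/workspace/git-base-trainer/checkpoint-100"))` should work either way, and remember to move the model with `model_trained.to("cuda")` before feeding it inputs that live on `"cuda"`.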
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22715/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22714
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22714/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22714/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22714/events
|
https://github.com/huggingface/transformers/pull/22714
| 1,663,398,200
|
PR_kwDOCUB6oc5OEnfM
| 22,714
|
Adding Cross attention to GPT Neo
|
{
"login": "gagan3012",
"id": 49101362,
"node_id": "MDQ6VXNlcjQ5MTAxMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gagan3012",
"html_url": "https://github.com/gagan3012",
"followers_url": "https://api.github.com/users/gagan3012/followers",
"following_url": "https://api.github.com/users/gagan3012/following{/other_user}",
"gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions",
"organizations_url": "https://api.github.com/users/gagan3012/orgs",
"repos_url": "https://api.github.com/users/gagan3012/repos",
"events_url": "https://api.github.com/users/gagan3012/events{/privacy}",
"received_events_url": "https://api.github.com/users/gagan3012/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22714). All of your documentation changes will be reflected on that endpoint.",
"@patrickvonplaten @patil-suraj ",
"@gagan3012 Thanks for contributing and opening the feature request and PR. \r\n\r\nThese changes are for a very specific use-case and not one that we want everyone to have in the GPTNeo: it makes the code too unreadable just for using as a decoder in the EncoderDecoder model. We can leave the fork open if you want to share it with others, and you can also push it in any repo on the Hub using the dynamic code feature.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
NONE
| null |
# What does this PR do?
This PR adds cross-attention to GPT Neo.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22485 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@ArthurZucker @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22714/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22714",
"html_url": "https://github.com/huggingface/transformers/pull/22714",
"diff_url": "https://github.com/huggingface/transformers/pull/22714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22714.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22713
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22713/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22713/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22713/events
|
https://github.com/huggingface/transformers/pull/22713
| 1,663,314,584
|
PR_kwDOCUB6oc5OEUzF
| 22,713
|
Added parallel device usage for GPT-J
|
{
"login": "jprivera44",
"id": 9093934,
"node_id": "MDQ6VXNlcjkwOTM5MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9093934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jprivera44",
"html_url": "https://github.com/jprivera44",
"followers_url": "https://api.github.com/users/jprivera44/followers",
"following_url": "https://api.github.com/users/jprivera44/following{/other_user}",
"gists_url": "https://api.github.com/users/jprivera44/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jprivera44/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jprivera44/subscriptions",
"organizations_url": "https://api.github.com/users/jprivera44/orgs",
"repos_url": "https://api.github.com/users/jprivera44/repos",
"events_url": "https://api.github.com/users/jprivera44/events{/privacy}",
"received_events_url": "https://api.github.com/users/jprivera44/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR is part of issue [22561](https://github.com/huggingface/transformers/issues/22561) and is related to PR [22535](https://github.com/huggingface/transformers/pull/22535), which concerns model parallelism. Specifically, this PR fixes the issue in GPT-J where tensors might accidentally be moved to different devices, causing a mismatch. The implemented fix ensures that all tensors are on the same device, preventing potential errors.
Test case:
```python
import torch
from transformers import GPT2Tokenizer, GPTJForSequenceClassification

# Setting up the tokenizer and model
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForSequenceClassification.from_pretrained("EleutherAI/gpt-j-6B")

# Now move the model to the GPU
model.to("cuda:0")

# Setting up the text
text = "this is an example of text for device mismatch for GPT-J"
inputs = tokenizer(text, return_tensors="pt")

# The model is already on cuda:0, so move the inputs there too
for k, v in inputs.items():
    inputs[k] = v.to("cuda:0")

# Labels deliberately left on the CPU to trigger the mismatch
labels = torch.tensor([1]).to("cpu")

# Forward pass
outputs = model(**inputs, labels=labels)
```
I recreated the issue by running the code without the fix, which resulted in the following error: "RuntimeError: Expected all tensors to be on the same device, ...". After implementing the fix, the error disappeared, and the model now keeps all tensors on the same device, as expected.
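For context, the fix follows the device-alignment pattern used for similar issues elsewhere in the library: derived tensors are moved to the device of the tensor they are combined with, rather than assuming a device. A minimal, CPU-runnable sketch of that pattern (`gather_last_token_logits` is a hypothetical helper, not the actual GPT-J code):

```python
import torch

# Sketch of the device-alignment pattern: before indexing into logits,
# move the index tensors onto the logits' own device, so the same code
# works whether logits live on the CPU or on any GPU.
def gather_last_token_logits(logits: torch.Tensor, sequence_lengths: torch.Tensor) -> torch.Tensor:
    sequence_lengths = sequence_lengths.to(logits.device)
    batch_indices = torch.arange(logits.shape[0], device=logits.device)
    # Pick the logits at the last non-padding position of each example.
    return logits[batch_indices, sequence_lengths]

logits = torch.randn(2, 5, 3)   # (batch, seq_len, num_labels)
lengths = torch.tensor([4, 2])  # last non-pad position per example
pooled = gather_last_token_logits(logits, lengths)
print(pooled.shape)  # torch.Size([2, 3])
```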
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # 22561
## Motivation and Context
I worked on helping with the code to make all transformers compatible with model parallelism, specifically GPT-J.
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22713/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22713",
"html_url": "https://github.com/huggingface/transformers/pull/22713",
"diff_url": "https://github.com/huggingface/transformers/pull/22713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22713.patch",
"merged_at": 1681299087000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22712
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22712/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22712/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22712/events
|
https://github.com/huggingface/transformers/pull/22712
| 1,663,295,331
|
PR_kwDOCUB6oc5OEQuS
| 22,712
|
[tests] switch to torchrun
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Unfortunately, conda refuses to install `libstdcxx-ng=12`, and gives a super super long report of conflicting packages after 20 or more minutes of examination.\r\n\r\nI think we can merge this PR first, and I can try to find a way to get `GLIBCXX_3.4.30` installed.\r\n@stas00 Does this work for you?",
"The `GLIBCXX_3.4.30` is totally unrelated to this issue so let's deal with it separately."
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
This PR fixes the following errors in nightly CI tests
```
FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_apex
FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_ddp
FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_0_base
FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_1_low
FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_2_high
FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_3_mixed
```
by switching from deprecated `distributed.launch` to `distributed.run`
```
stderr: File "/workspace/transformers/examples/pytorch/translation/run_translation.py", line 664, in <module>
stderr:   main()
stderr: File "/workspace/transformers/examples/pytorch/translation/run_translation.py", line 262, in main
stderr:   model_args, data_args, training_args = parser.parse_args_into_dataclasses()
stderr: File "/workspace/transformers/src/transformers/hf_argparser.py", line 341, in parse_args_into_dataclasses
stderr:   raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
stderr: ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=1']
stderr: /opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
stderr: and will be removed in future. Use torchrun.
stderr: Note that --use-env is set by default in torchrun.
stderr: If your script expects `--local-rank` argument to be set, please
stderr: change it to read from `os.environ['LOCAL_RANK']` instead. See
stderr: https://pytorch.org/docs/stable/distributed.html#launch-utility for
stderr: further instructions
```
I updated `tests/trainer/test_trainer_distributed.py` while at it.
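As a side note, a minimal sketch of the launcher-agnostic pattern the deprecation warning points to (`get_local_rank` is a hypothetical helper name, nothing beyond the standard library is assumed):

```python
import os

# torchrun always behaves as if --use-env were set: it exports
# LOCAL_RANK into the environment instead of appending a --local-rank
# CLI argument the way the deprecated torch.distributed.launch did.
def get_local_rank() -> int:
    # Fall back to -1 when no distributed launcher is involved.
    return int(os.environ.get("LOCAL_RANK", -1))

print(get_local_rank())
```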
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22712/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22712",
"html_url": "https://github.com/huggingface/transformers/pull/22712",
"diff_url": "https://github.com/huggingface/transformers/pull/22712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22712.patch",
"merged_at": 1681313146000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22711
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22711/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22711/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22711/events
|
https://github.com/huggingface/transformers/pull/22711
| 1,663,000,626
|
PR_kwDOCUB6oc5ODR-e
| 22,711
|
Patching clip model to create mask tensor on the device
|
{
"login": "shanmugamr1992",
"id": 111910568,
"node_id": "U_kgDOBqueqA",
"avatar_url": "https://avatars.githubusercontent.com/u/111910568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shanmugamr1992",
"html_url": "https://github.com/shanmugamr1992",
"followers_url": "https://api.github.com/users/shanmugamr1992/followers",
"following_url": "https://api.github.com/users/shanmugamr1992/following{/other_user}",
"gists_url": "https://api.github.com/users/shanmugamr1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shanmugamr1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shanmugamr1992/subscriptions",
"organizations_url": "https://api.github.com/users/shanmugamr1992/orgs",
"repos_url": "https://api.github.com/users/shanmugamr1992/repos",
"events_url": "https://api.github.com/users/shanmugamr1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/shanmugamr1992/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@shanmugamr1992 Thanks for opening this PR! \r\n\r\nFor the quality tests to pass, you'll need to run `make fixup` locally and commit any formatting changes that were applied. There were some recent updates to our formatting libraries, so you might need to run `pip install -e .[\"quality\"]` to ensure that you have the up-to-date settings. \r\n\r\nCould you add this update to the other implementations of `_build_causal_attention_mask` for pytorch models in the repo too please? ",
"The main idea is to reduce host and device syncs",
"@amyeroberts Could you verify and merge it please thanks .",
"@shanmugamr1992 Thanks again for your contribution and updating. I think just a final rebase / fix conflicts as there were some recent updates on the docs on `main`, then we're good to merge :) ",
"@amyeroberts All done. Thanks a lot for the wonderful suggestions. "
] | 1,681
| 1,685
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22711/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22711",
"html_url": "https://github.com/huggingface/transformers/pull/22711",
"diff_url": "https://github.com/huggingface/transformers/pull/22711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22711.patch",
"merged_at": 1681984733000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22710
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22710/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22710/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22710/events
|
https://github.com/huggingface/transformers/issues/22710
| 1,662,989,300
|
I_kwDOCUB6oc5jHzP0
| 22,710
|
Llama: Generating text token by token removes whitespaces
|
{
"login": "qtrrb",
"id": 70319920,
"node_id": "MDQ6VXNlcjcwMzE5OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/70319920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qtrrb",
"html_url": "https://github.com/qtrrb",
"followers_url": "https://api.github.com/users/qtrrb/followers",
"following_url": "https://api.github.com/users/qtrrb/following{/other_user}",
"gists_url": "https://api.github.com/users/qtrrb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qtrrb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qtrrb/subscriptions",
"organizations_url": "https://api.github.com/users/qtrrb/orgs",
"repos_url": "https://api.github.com/users/qtrrb/repos",
"events_url": "https://api.github.com/users/qtrrb/events{/privacy}",
"received_events_url": "https://api.github.com/users/qtrrb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante \r\nNote that the streaming API is new and still experimental :-)",
"Also note that we aim to match the original tokenizer 1 to 1, while trying not to add super model specific pieces of code in the more general api 😉 ",
"> Note that the streaming API is new and still experimental :-)\r\n\r\nI may be wrong as I'm not sure what you are mentioning, but I don't believe I am using the streaming API.\r\nThe example code is something I wrote based on the generate() function in transformers; it's an old piece of code that I attempted to run with LLaMA, and it seems to pose issues.\r\n\r\n",
 "Oh my bad, I read too fast. I don't know what this `decode_with_prefix_space` mentioned in the documentation is. It does not exist in the codebase at all (not for LLaMA and not for any other sentencepiece model either).",
 "> I don't know what this decode_with_prefix_space mentioned in the documentation is. It does not exist in the codebase at all (not for LLaMA and not for any other sentencepiece model either).\r\n\r\nI haven't had much time to check why, but I noticed that it was removed in the latest commit to LLaMA's tokenizer:\r\nhttps://github.com/huggingface/transformers/commit/c0f99b4d2ec73090595914dde4c16da207e21d73\r\n\r\n> Also note that we aim to match the original tokenizer 1 to 1, while trying not to add super model specific pieces of code in the more general api \r\n\r\nI understand; is it then normal for the tokenizer to do this, as it is based on sentencepiece? ",
 "Not sure I understand your question. Yes it is normal, if you use the sentencepiece tokenizer, you will get the same results when decoding. \r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained('huggyllama/llama-30b')\r\n>>> tokenizer.batch_decode(tokenizer.encode(\"Hey how are you doing?\"), skip_special_tokens = True)\r\n['', 'Hey', 'how', 'are', 'you', 'doing', '?']\r\n``` \r\nWhether you are using the fast or slow tokenizer, both will output the same. \r\nIf you use `tokenizer.sp_model.decode` (which is basically the sentencepiece model that the original llama uses) then you have no additional prefix space (see the original codebase [here](https://github.com/facebookresearch/llama/blob/main/llama/tokenizer.py)). However, the doc should be updated to remove the `decode_with_prefix_space`; thanks for catching this. ",
 "> Not sure I understand your question. Yes it is normal, if you use the sentencepiece tokenizer, you will get the same results when decoding.\r\n\r\nI apologize; my question was meant to ask if it is normal for the tokenizer to not have an option to `decode_with_prefix_space`.\r\nAs that is the case, I will close this issue; I will find another way to fix my piece of code. ",
"This feature was added on my request so I do not think it is justified that this got closed.\r\ndecode_with_prefix_space was meant as a workaround for the fact that the tokenizer is unsuitable for downstream tasks like the one mentioned in this issue. The default behavior was off, but it allowed implementations to turn it on and get proper generations.\r\nI do not know if it ever worked right, but without this my users keep complaining that every generation lacks the space. If this is not added back I will have to do this myself in my own code, but that would be unfortunate for the rest of the ecosystem.",
"@henk717 While I can't help with the removal of `decode_with_prefix_space` from the library, here is how I dealt with it, hope this helps\r\n\r\n```python\r\n while True:\r\n # Get logits for the next token\r\n logits = model(input_ids).logits[:, -1, :]\r\n # Apply logits processor\r\n next_tokens_scores = logits_processor(input_ids, logits)\r\n\r\n probs = torch.nn.functional.softmax(next_tokens_scores, dim=-1)\r\n next_token_id = torch.multinomial(probs, num_samples=1)\r\n\r\n # Note: This is done to handle Sentencepiece based tokenizers,\r\n # as they don't prepend the prefix space to the start of a word\r\n tokens_previous = tokenizer.decode(input_ids[0], skip_special_tokens=True)\r\n input_ids = torch.cat((input_ids, next_token_id), dim=1)\r\n tokens = tokenizer.decode(input_ids[0], skip_special_tokens=True)\r\n\r\n new_tokens = tokens[len(tokens_previous) :]\r\n```\r\nThis is a fairly unclean way to fix the issue, but it works.\r\n",
"> @henk717 While I can't help with the removal of `decode_with_prefix_space` from the library, here is how I dealt with it, hope this helps\r\n> \r\n> ```python\r\n> while True:\r\n> # Get logits for the next token\r\n> logits = model(input_ids).logits[:, -1, :]\r\n> # Apply logits processor\r\n> next_tokens_scores = logits_processor(input_ids, logits)\r\n> \r\n> probs = torch.nn.functional.softmax(next_tokens_scores, dim=-1)\r\n> next_token_id = torch.multinomial(probs, num_samples=1)\r\n> \r\n> # Note: This is done to handle Sentencepiece based tokenizers,\r\n> # as they don't preprend the prefix space to the start of a word\r\n> tokens_previous = tokenizer.decode(input_ids[0], skip_special_tokens=True)\r\n> input_ids = torch.cat((input_ids, next_token_id), dim=1)\r\n> tokens = tokenizer.decode(input_ids[0], skip_special_tokens=True)\r\n> \r\n> new_tokens = tokens[len(tokens_previous) :]\r\n> ```\r\n> \r\n> This is a fairly unclean way to fix the issue, but it works.\r\n\r\nit works for me, thank you!",
"Is there a better way to fix this than calling `tokenizer.decode` on the entire sequence? It seems a very inefficient workaround.",
"You could use the `convert_ids_to_tokens` which should be more efficient no? "
] | 1,681
| 1,697
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker @sgugger
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following script
```python
import torch
from transformers import (
LlamaForCausalLM,
LlamaTokenizer,
LogitsProcessorList,
RepetitionPenaltyLogitsProcessor,
TemperatureLogitsWarper,
TopPLogitsWarper,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = LlamaTokenizer.from_pretrained("./path/to/llama")
model = LlamaForCausalLM.from_pretrained(
"./models/llama-7b-hf", torch_dtype=torch.float16
).to(device)
@torch.no_grad()
def stream_generate(
prompt: str,
temperature=1.0,
max_new_tokens=512,
top_p=1.0,
repetition_penalty=1.0,
):
global tokenizer, model
if tokenizer is None or model is None:
return {"error": "Model not loaded."}
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
original_size = len(input_ids[0])
logits_processor = LogitsProcessorList(
[
TemperatureLogitsWarper(temperature=temperature),
RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty),
TopPLogitsWarper(top_p=top_p),
]
)
while True:
# Get logits for the next token
logits = model(input_ids).logits[:, -1, :]
logits = logits_processor(input_ids, logits)
probs = torch.nn.functional.softmax(logits, dim=-1)
next_token_id = torch.multinomial(probs, num_samples=1)
next_token = tokenizer.decode(next_token_id[0], skip_special_tokens=True)
print(next_token, end="", flush=True)
input_ids = torch.cat((input_ids, next_token_id), dim=1)
if len(input_ids[0]) >= original_size + max_new_tokens:
break
stream_generate("In a shocking finding, ")
```
### Expected behavior
Text should be printed in a streaming manner, similar to OpenAI's playground. This behaviour works correctly with models like GPT-2 or GPT-J; however, with LLaMA, there are no whitespaces in between words.
I believe this is related to the following, mentioned in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/llama)
> The LLaMA tokenizer is based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. “Banana”), the tokenizer does not prepend the prefix space to the string. To have the tokenizer output the prefix space, set decode_with_prefix_space=True in the LlamaTokenizer object or in the tokenizer configuration.
However, it seems that `decode_with_prefix_space` has been removed.
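For illustration, a tokenizer-free sketch of the decode-the-diff workaround: instead of decoding each new token in isolation (which drops the leading space sentencepiece encodes as part of the token), decode the whole sequence each step and emit only the newly appended text. The `toy_decode` stand-in below is not the real tokenizer; it just mimics a detokenizer that joins word pieces with spaces.

```python
def stream_decode(decode_fn, token_ids):
    # Decode the growing prefix every step and yield only the text
    # that was not emitted before, so inter-word spaces survive.
    emitted = ""
    for i in range(1, len(token_ids) + 1):
        text = decode_fn(token_ids[:i])
        new_text = text[len(emitted):]
        emitted = text
        yield new_text

# Toy decode_fn standing in for tokenizer.decode.
toy_vocab = {0: "Hey", 1: "how", 2: "are", 3: "you"}
toy_decode = lambda ids: " ".join(toy_vocab[i] for i in ids)

print("".join(stream_decode(toy_decode, [0, 1, 2, 3])))  # Hey how are you
```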
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22710/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22710/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22709
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22709/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22709/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22709/events
|
https://github.com/huggingface/transformers/pull/22709
| 1,662,836,236
|
PR_kwDOCUB6oc5OCu1V
| 22,709
|
Fix passing kwargs to the tokenizer in FillMaskPipeline preprocess method
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22709). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
COLLABORATOR
| null |
As per the title, the kwargs were not passed.
The modification follows https://github.com/huggingface/transformers/blob/fe1f5a639d93c9272856c670cff3b0e1a10d5b2b/src/transformers/pipelines/text_classification.py#L91-L179
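A minimal sketch of the pattern being followed: `preprocess` forwards arbitrary tokenizer kwargs instead of silently dropping them. The `FakeTokenizer` and free-standing `preprocess` below are illustrative stand-ins, not the pipeline's actual code.

```python
class FakeTokenizer:
    # Stand-in for a real tokenizer: records whatever kwargs it is called with.
    def __call__(self, text, return_tensors=None, **kwargs):
        return {"text": text, "return_tensors": return_tensors, **kwargs}

def preprocess(tokenizer, inputs, return_tensors="pt", **preprocess_parameters):
    # Before the fix, **preprocess_parameters was ignored at this call site.
    return tokenizer(inputs, return_tensors=return_tensors, **preprocess_parameters)

model_inputs = preprocess(FakeTokenizer(), "Paris is the [MASK] of France.", truncation=True)
print(model_inputs["truncation"])  # True
```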
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22709/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22709",
"html_url": "https://github.com/huggingface/transformers/pull/22709",
"diff_url": "https://github.com/huggingface/transformers/pull/22709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22709.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22708
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22708/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22708/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22708/events
|
https://github.com/huggingface/transformers/pull/22708
| 1,662,521,490
|
PR_kwDOCUB6oc5OBpwO
| 22,708
|
Fix decorator order
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger : I think this observation might be interesting to you.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, this requirement is documented here https://huggingface.co/docs/transformers/testing#to-gpu-or-not-to-gpu but it's not \"enforced\" so it's easy to miss\r\n\r\nPerhaps it can be added to the quality checks?"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
Fix decorator order.
For some tests, like `test_basic_distributed`, we have the following on `main`.
```python
@require_torch_multi_gpu
@parameterized.expand(params, name_func=parameterized_custom_name_func)
def test_basic_distributed(self, stage, dtype):
```
but the (generated) tests are actually run even on a single GPU machine.
We need to change the order:
```python
@parameterized.expand(params, name_func=parameterized_custom_name_func)
@require_torch_multi_gpu
```
(PS: this doesn't mean the tests fail on a single-GPU VM; they still pass, but I am not sure running them there makes sense.)
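The ordering matters because stacked decorators apply bottom-up: `parameterized.expand` generates the test methods, so a skip decorator placed above it wraps the generator rather than each generated test. A toy illustration of the bottom-up rule (the `tag` decorator is made up for this sketch):

```python
def tag(label):
    # Each decorator prepends its label, so the order of labels in the
    # result shows the order in which the wrappers execute.
    def deco(fn):
        def wrapper(*args, **kwargs):
            return [label] + fn(*args, **kwargs)
        return wrapper
    return deco

@tag("outer")
@tag("inner")
def f():
    return []

print(f())  # ['outer', 'inner'] -- the bottom decorator is applied first
```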
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22708/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22708",
"html_url": "https://github.com/huggingface/transformers/pull/22708",
"diff_url": "https://github.com/huggingface/transformers/pull/22708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22708.patch",
"merged_at": 1681228756000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22707
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22707/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22707/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22707/events
|
https://github.com/huggingface/transformers/issues/22707
| 1,662,393,232
|
I_kwDOCUB6oc5jFhuQ
| 22,707
|
the generated results are different between generating a batch input_ids and a single sequence input_ids
|
{
"login": "ZeroneBo",
"id": 86118108,
"node_id": "MDQ6VXNlcjg2MTE4MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/86118108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeroneBo",
"html_url": "https://github.com/ZeroneBo",
"followers_url": "https://api.github.com/users/ZeroneBo/followers",
"following_url": "https://api.github.com/users/ZeroneBo/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeroneBo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeroneBo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeroneBo/subscriptions",
"organizations_url": "https://api.github.com/users/ZeroneBo/orgs",
"repos_url": "https://api.github.com/users/ZeroneBo/repos",
"events_url": "https://api.github.com/users/ZeroneBo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeroneBo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @younesbelkada (I am mostly thinking about sampling + beam + randomness) ",
"> cc @younesbelkada (I am mostly thinking about sampling + beam + randomness)\r\n\r\nI don't think so, the default arg do_sample of method `generate` is `False`, and I manually set this arg `False`, the result is same as the issue.\r\nAnd I generated `a` and decoded `ga` few times, every times the result `oa` is not change; the same as `b`, `gb` and `ob` .",
"Hey @ZeroneBo 👋 \r\n\r\nWhen an input is passed with padding, the padded tokens are not removed from the input. Instead, they are numerically masked in the attention layer -- they will have a minuscule impact on the output, but it is not exactly the same as the case without padding. This means that slightly different outputs may be observed with `.generate()` between the padded and unpadded cases (and the differences increase when FP16/BF16/INT8 is used).\r\n\r\nNevertheless, these changes should be very infrequent and, when they happen, the difference tends to retain a similar meaning. If you notice that `XGLM` is modifying the outputs frequently in the presence of padding, let us know :)",
"Hey @gante , thanks for explaining.\r\nYes, there is significant difference between this two ways in my case, generating batchs in six test sets getting a avg score 7.398, and generating sequences one by one in the six test sets without padding getting a avg score 14.538. The performance gap is very obvious. Although \"one by one\" generating gets better score, it costs much more time. I hope \"batch\" generating can get a same performance.\r\nI have an another question. When training or generating a CLM, the padding side should always be in left? Is there some cases the padding side must be in right? Sometimes the outputs differs because of different padding side with a same model. And how can I know which side I should pad to get a better performance?",
"> Yes, there is significant difference between this two ways in my case, generating batchs in six test sets getting a avg score 7.398, and generating sequences one by one in the six test sets without padding getting a avg score 14.538.\r\n\r\n@ZeroneBo This large performance drop should not happen at all, it probably means that the code related to batched generation for Donut is incorrect 👀 I've added this to my to do list -- if you'd be able to share some way to reproduce the issue with an open model and dataset, it would speed up the process for me 🙏 \r\n\r\n> I have an another question. When training or generating a CLM, the padding side should always be in left? Is there some cases the padding side must be in right? Sometimes the outputs differs because of different padding side with a same model. And how can I know which side I should pad to get a better performance?\r\n\r\nThe rule of thumb is the following: if you expect the model to continue generation from your input text (as in the GPT models), then padding should be on the left. Otherwise, the text will only be used as input (and not as part of the output), in which case you should use right padding.",
"@gante Here are a part of data from the original data and the model I used, it may be helpful.\r\nhttps://github.com/ZeroneBo/xglm-tmp",
"@ZeroneBo as a short-term remedy, if you set `model.config.use_cache = False`, your batched results should match the one-by-one results (but it will be slightly slower).\r\n\r\nMeanwhile, I'm working on a fix so you can use cache (= faster) while getting the same results :)",
"Hello @gante , I have tried to set `model.config.use_cache = False` in both finetuning and generating code, but it did't work. The performance gap exits.",
"@ZeroneBo can you try installing from `main` and rerunning your experiments? I couldn't reproduce the problems you described, as left-padding issues were fixed :) ",
"@gante That' effective, thanks for your work. 👍",
"Hi, do we have a recommended way to use `XGLM` to do translation task? Why do we need to add \"=\" manually?\r\n```python\r\ntokenizer = XGLMTokenizer.from_pretrained(base_model)\r\ntokenizer.padding_side = \"left\"\r\ntokenizer.add_special_tokens({'additional_special_tokens': ['=']})\r\nmodel = XGLMForCausalLM.from_pretrained(base_model,device_map=\"auto\",)\r\nmodel = PeftModel.from_pretrained(model,lora_weights,)\r\nmodel.eval()\r\n```",
"@Hannibal046 Hi, In my impression, the paper used the \"=\" template to do translation task. You can also use other tokens or symbols in the task.",
"I know, but i am just curious about why \"=\" is not a special token at the first place.",
"> Hey @ZeroneBo 👋\r\n> \r\n> When an input is passed with padding, the padded tokens are not removed from the input. Instead, they are numerically masked in the attention layer -- they will have a minuscule impact on the output, but it is not exactly the same as the case without padding. This means that slightly different outputs may be observed with `.generate()` between the padded and unpadded cases (and the differences increase when FP16/BF16/INT8 is used).\r\n> \r\n> Nevertheless, these changes should be very infrequent and, when they happen, the difference tends to retain a similar meaning. If you notice that `XGLM` is modifying the outputs frequently in the presence of padding, let us know :)\r\n\r\n@gante May I ask why adding padding would result in different results? Why is this principle?",
"@kakaxisisan you can read about it [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535) :)"
] | 1,681
| 1,707
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.23
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @gante @youne
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I fine-tuned a pre-trained XGLM model with LoRA. When I use the model to generate sequences from the same input, the result differs depending on whether the sentence is batched together with other sentences or passed alone ( the only difference between the two ways is that one sentence is in a list with other sentences and they are generated together, while the other is a single sentence in a list ) — the result is different with the same settings!
The details are as follows:
```python
tokenizer = XGLMTokenizer.from_pretrained(base_model)
tokenizer.padding_side = "left"
tokenizer.add_special_tokens({'additional_special_tokens': ['=']})
model = XGLMForCausalLM.from_pretrained(base_model,device_map="auto",)
model = PeftModel.from_pretrained(model,lora_weights,)
model.eval()
model = torch.compile(model)
p = ['今年 , 这种 主导作用 依然 非常 突出 。 = ', '国际足联 将 严惩 足球场 上 的 欺骗 行为 = ', '枪手 被 警方 击毙 。 = ']
a = tokenizer(p, padding=True, return_tensors='pt')
ga = model.generate(**a, num_beams=5, max_new_tokens=128)
oa = tokenizer.batch_decode(ga, skip_special_tokens=False)
b = ['今年 , 这种 主导作用 依然 非常 突出 。 = '] # b equals p[0]
b = tokenizer(b, padding=True, return_tensors='pt')
gb = model.generate(**b, num_beams=5, max_new_tokens=128)
ob = tokenizer.batch_decode(gb, skip_special_tokens=False)
### result:
print(oa)
['<pad><pad><pad><pad><pad></s> 今年, 这种 主导作用 依然 非常 突出 。= this year, this dominant role is still very prominent.</s>', '</s> 国际足联 将 严惩 足球场 上 的 欺骗 行为= fifa to punish cheaters in stadiums</s><pad><pad><pad><pad><pad>', '<pad><pad><pad><pad><pad><pad><pad><pad><pad></s> 枪手 被 警方 击<unk> 。= the gunman was killed by police.</s><pad><pad><pad><pad>']
print(ob)
['</s> 今年, 这种 主导作用 依然 非常 突出 。= this year, this dominant role has still been very prominent.</s>']
```
The output following the "=" token differs between the two cases.
### Expected behavior
The output following the "=" token should be the same in both cases.
I am confused: why does the same input produce a different output just because of the additional pad tokens, when the attention mask for the pads is 0?
How can I change the code to make the outputs identical?
Is this a bug, or is there a problem with my code?
Looking forward to a quick response, and thanks very much!
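As a minimal sketch of the padding question (plain Python, not tied to `transformers` or this model): in exact arithmetic, masked positions get a score of `-inf` and therefore contribute exactly zero attention weight, so left-padding should not change the softmax weights of the real tokens — differences only creep in through numerics (FP16/BF16, fused kernels), as discussed in the comments. The helper below is illustrative, not the actual XGLM attention code:

```python
import math

def masked_softmax(scores, mask):
    # positions with mask == 0 are set to -inf and contribute exactly 0 weight
    masked = [s if m else float("-inf") for s, m in zip(scores, mask)]
    mx = max(s for s in masked if s != float("-inf"))
    exps = [math.exp(s - mx) if s != float("-inf") else 0.0 for s in masked]
    z = sum(exps)
    return [e / z for e in exps]

# attention over three real tokens, no padding
no_pad = masked_softmax([0.5, 1.2, -0.3], [1, 1, 1])

# same three tokens with two left-pad positions masked out:
# the pad scores (9.9 here) are irrelevant because their mask is 0
padded = masked_softmax([9.9, 9.9, 0.5, 1.2, -0.3], [0, 0, 1, 1, 1])

print(no_pad)
print(padded[2:])  # identical to no_pad in exact float arithmetic
```

In reduced precision the intermediate sums are rounded differently, which is why batched (padded) and one-by-one generation can diverge slightly even with a correct mask.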
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22707/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22706
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22706/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22706/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22706/events
|
https://github.com/huggingface/transformers/issues/22706
| 1,662,276,423
|
I_kwDOCUB6oc5jFFNH
| 22,706
|
transformers trainer llama Trying to resize storage that is not resizable
|
{
"login": "lw3259111",
"id": 12690488,
"node_id": "MDQ6VXNlcjEyNjkwNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12690488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lw3259111",
"html_url": "https://github.com/lw3259111",
"followers_url": "https://api.github.com/users/lw3259111/followers",
"following_url": "https://api.github.com/users/lw3259111/following{/other_user}",
"gists_url": "https://api.github.com/users/lw3259111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lw3259111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lw3259111/subscriptions",
"organizations_url": "https://api.github.com/users/lw3259111/orgs",
"repos_url": "https://api.github.com/users/lw3259111/repos",
"events_url": "https://api.github.com/users/lw3259111/events{/privacy}",
"received_events_url": "https://api.github.com/users/lw3259111/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @lw3259111, thanks for raising this issue. \r\n\r\nSo that we can best try and help, could you provide some more information about how to reproduce this error. Specifically the following: \r\n* The running environment and important dependency versions. This can be found running `transformers-cli env` in your terminal\r\n\r\n* A minimal code snippet to reproduce the error. If, for anonymity, it's not possible to share a checkpoint name, it's OK to do something like the example below. This so we know how e.g. the `Trainer` class is being called and the possible code path triggering this issue. \r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\n\r\ncheckpoint = \"checkpoint-name\" # Dummy name \r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint)\r\n```\r\n\r\n",
"@amyeroberts \r\n`\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.28.0.dev0\r\n- Platform: Linux-4.15.0-208-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.13.2\r\n- Safetensors version: 0.3.0\r\n- PyTorch version (GPU?): 2.0.0+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>`\r\n\r\n**model_name = \"checkpoints-1200\"**\r\nThe error :\r\n<img width=\"1438\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/231175851-dbdc50ee-26f4-4f4e-966f-bb3ae06f291f.png\">\r\n<img width=\"1437\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/231176038-4f6d80bb-0e02-45e9-8ce9-f0b3650f615f.png\">\r\n<img width=\"1439\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/231176126-4945d00f-6ced-48af-98cd-35d83daff78c.png\">\r\n",
"@lw3259111, great, thanks for the additional details. For the checkpoint that's being loaded, which model architecture does it map to i.e. which `XxxForCausalLM` model? ",
"@amyeroberts \r\nI want to load `LlamaForCausalLM` model\r\nand the same error has beed found in follow link\r\n`\r\nhttps://github.com/tatsu-lab/stanford_alpaca/issues/61#issuecomment-1504117715\r\n\r\n\r\nhttps://github.com/lm-sys/FastChat/issues/351\r\n`",
"@lw3259111 Thanks for the additional information. I'm able to load some checkpoints with both of the following:\r\n```py\r\nmodel = LlamaForCausalLM.from_pretrained(checkpoint)\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint)\r\n```\r\nwithout this error occurring. So the issue is likely relating to the specific weights being loaded, model configuration or something else in the environment. \r\n\r\nA few questions, comments and suggestions:\r\n* Looking at the screenshots shared, in the first one in [this comment](https://github.com/huggingface/transformers/issues/22706#issuecomment-1503343784), I can see there is an error being triggered relating to `git-lfs` not being installed in the environment. Could you try installing or reinstalling `git lfs`? It's worthwhile making sure this work, but I doubt this is the issue. \r\n* [In the linked issues](https://github.com/tatsu-lab/stanford_alpaca/issues/61#issuecomment-1504459664), the version of transformers in your env is different from in this issue. I'm assuming a typo, but can you confirm the version. Note: the transformers library needs to be install from source to use the Llama model. \r\n* When `model = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, **kwargs)` is called, could you share what the kwargs are? \r\n* Following [this issue](https://github.com/tatsu-lab/stanford_alpaca/issues/61), is the model being loaded one which was saved out after using the `Trainer` class?\r\n",
"@amyeroberts Thank you for your reply. I will reply to your questions one by one\r\n - git-lfs has been installed in my compute\r\n<img width=\"723\" alt=\"image\" src=\"https://user-images.githubusercontent.com/12690488/231516433-e6ff8f1e-3313-4f0c-93ca-95b7c8332a2b.png\">\r\n\r\n- my transformers version is 4.28.0.dev0, [https://github.com/tatsu-lab/stanford_alpaca/issues/61#issuecomment-1504459664](url). I made a mistake in writing the corresponding Transformers version of this link, and I have made the necessary modifications\r\n\r\n- `kwargs` are `{'torch_dtype': torch.float16,\r\n 'device_map': 'auto',\r\n 'max_memory': {0: '13GiB', 1: '13GiB'}}`\r\n\r\n- yes,The checkpoint-1200 was saved out after using the Trainer class",
"https://github.com/lm-sys/FastChat/issues/351#issuecomment-1519060027\r\n\r\nThis is related to https://github.com/lm-sys/FastChat/issues/256#issue-1658116931",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,685
| 1,685
|
NONE
| null |
### System Info
transformers ==4.28.0.dev0
pytorch==1.13.1
### Who can help?
--
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
--
### Expected behavior

I found that this is a bug with AutoModelForCausalLM, because the code that uses this module is unable to load the checkpoint.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22706/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22705
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22705/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22705/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22705/events
|
https://github.com/huggingface/transformers/issues/22705
| 1,662,085,948
|
I_kwDOCUB6oc5jEWs8
| 22,705
|
using Deepspeed zero stage3 finetune sd2, dimension error occurs
|
{
"login": "uygnef",
"id": 13539441,
"node_id": "MDQ6VXNlcjEzNTM5NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/13539441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uygnef",
"html_url": "https://github.com/uygnef",
"followers_url": "https://api.github.com/users/uygnef/followers",
"following_url": "https://api.github.com/users/uygnef/following{/other_user}",
"gists_url": "https://api.github.com/users/uygnef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uygnef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uygnef/subscriptions",
"organizations_url": "https://api.github.com/users/uygnef/orgs",
"repos_url": "https://api.github.com/users/uygnef/repos",
"events_url": "https://api.github.com/users/uygnef/events{/privacy}",
"received_events_url": "https://api.github.com/users/uygnef/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@stas00\r\ncould please help me take a look at this issue?",
"See https://github.com/huggingface/diffusers/pull/3076\r\n\r\nPlease carefully read the OP of the PR for details.",
"@uygnef Have you solved this problem?\r\n\r\n\r\n",
"@luochuwei Yes, it works for training one model, but there seems to be an issue with training multiple models. I have submit the issue at https://github.com/microsoft/DeepSpeed/issues/3472",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,686
| 1,686
|
NONE
| null |
### System Info
Describe the bug
An error is reported when fine-tuning with the diffusers/examples/text_to_image/train_text_to_image.py script using DeepSpeed ZeRO stage 3. My machine has 2×A100 GPUs, running DeepSpeed ZeRO stage 3.
```
def train(args):
    if args.non_ema_revision is not None:
        deprecate(
            "non_ema_revision!=None",
            "0.15.0",
            message=(
                "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to"
                " use `--variant=non_ema` instead."
            ),
        )
    # logging_dir = os.path.join(args.output_dir, args.logging_dir)
    accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit)
    deepspeed_plugin = DeepSpeedPlugin(zero_stage=3, gradient_accumulation_steps=2)
    # deepspeed_plugin.set_mixed_precision("fp16")
    accelerator = Accelerator(
        gradient_accumulation_steps=args.gradient_accumulation_steps,
        mixed_precision=args.mixed_precision,
        log_with=args.report_to,
        logging_dir=args.log_dir,
        project_config=accelerator_project_config,
        deepspeed_plugin=deepspeed_plugin,
    )
```
The error log is:
```
04/11/2023 16:59:12 0:INFO: Prepare everything with our accelerator.
[2023-04-11 16:59:12,036] [INFO] [logging.py:69:log_dist] [Rank 0] DeepSpeed info: version=0.6.5, git-hash=unknown, git-branch=unknown
04112023 16:59:13|INFO|torch.distributed.distributed_c10d| Added key: store_based_barrier_key:2 to store for rank: 0
04112023 16:59:13|INFO|torch.distributed.distributed_c10d| Added key: store_based_barrier_key:2 to store for rank: 1
04112023 16:59:13|INFO|torch.distributed.distributed_c10d| Rank 1: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
/usr/local/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:429: UserWarning: torch.distributed.distributed_c10d._get_global_rank is deprecated please use torch.distributed.distributed_c10d.get_global_rank instead
warnings.warn(
04112023 16:59:13|INFO|torch.distributed.distributed_c10d| Rank 0: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
/usr/local/conda/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py:429: UserWarning: torch.distributed.distributed_c10d._get_global_rank is deprecated please use torch.distributed.distributed_c10d.get_global_rank instead
warnings.warn(
[2023-04-11 16:59:13,796] [INFO] [engine.py:278:__init__] DeepSpeed Flops Profiler Enabled: False
[2023-04-11 16:59:13,796] [INFO] [engine.py:1086:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer
[2023-04-11 16:59:13,796] [INFO] [engine.py:1092:_configure_optimizer] Using client Optimizer as basic optimizer
[2023-04-11 16:59:13,878] [INFO] [engine.py:1108:_configure_optimizer] DeepSpeed Basic Optimizer = AdamW
[2023-04-11 16:59:13,878] [INFO] [utils.py:52:is_zero_supported_optimizer] Checking ZeRO support for optimizer=AdamW type=<class 'torch.optim.adamw.AdamW'>
[2023-04-11 16:59:13,878] [INFO] [logging.py:69:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer
[2023-04-11 16:59:13,878] [INFO] [engine.py:1410:_configure_zero_optimizer] Initializing ZeRO Stage 3
[2023-04-11 16:59:13,887] [INFO] [stage3.py:275:__init__] Reduce bucket size 500000000
[2023-04-11 16:59:13,887] [INFO] [stage3.py:276:__init__] Prefetch bucket size 50000000
Using /home/hadoop-hmart-waimai-rank/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...
Using /home/hadoop-hmart-waimai-rank/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...
Emitting ninja build file /home/hadoop-hmart-waimai-rank/.cache/torch_extensions/py39_cu117/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module utils...
Time to load utils op: 0.5212891101837158 seconds
Loading extension module utils...
Time to load utils op: 0.5023727416992188 seconds
[2023-04-11 16:59:16,286] [INFO] [stage3.py:567:_setup_for_real_optimizer] optimizer state initialized
Using /home/hadoop-hmart-waimai-rank/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0005068778991699219 seconds
[2023-04-11 16:59:16,615] [INFO] [utils.py:828:see_memory_usage] After initializing ZeRO optimizer
[2023-04-11 16:59:16,616] [INFO] [utils.py:829:see_memory_usage] MA 7.45 GB Max_MA 10.52 GB CA 11.47 GB Max_CA 11 GB
[2023-04-11 16:59:16,616] [INFO] [utils.py:837:see_memory_usage] CPU Virtual Memory: used = 5.49 GB, percent = 2.4%
[2023-04-11 16:59:16,616] [INFO] [logging.py:69:log_dist] [Rank 0] DeepSpeed Final Optimizer = AdamW
[2023-04-11 16:59:16,616] [INFO] [engine.py:795:_configure_lr_scheduler] DeepSpeed using client LR scheduler
[2023-04-11 16:59:16,616] [INFO] [logging.py:69:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
[2023-04-11 16:59:16,617] [INFO] [logging.py:69:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0001], mom=[(0.9, 0.999)]
[2023-04-11 16:59:16,618] [INFO] [config.py:1059:print] DeepSpeedEngine configuration:
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] amp_enabled .................. False
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] amp_params ................... False
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": null,
"exps_dir": null,
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] bfloat16_enabled ............. False
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] checkpoint_tag_validation_enabled True
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] checkpoint_tag_validation_fail False
[2023-04-11 16:59:16,619] [INFO] [config.py:1063:print] communication_data_type ...... None
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] curriculum_enabled ........... False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] curriculum_params ............ False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] dataloader_drop_last ......... False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] disable_allgather ............ False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] dump_state ................... False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] dynamic_loss_scale_args ...... None
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_enabled ........... False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_gas_boundary_resolution 1
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_layer_name ........ bert.encoder.layer
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_layer_num ......... 0
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_max_iter .......... 100
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_stability ......... 1e-06
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_tol ............... 0.01
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] eigenvalue_verbose ........... False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] elasticity_enabled ........... False
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] flops_profiler_config ........ {
"enabled": false,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] fp16_enabled ................. True
[2023-04-11 16:59:16,620] [INFO] [config.py:1063:print] fp16_master_weights_and_gradients False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] fp16_mixed_quantize .......... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] global_rank .................. 0
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] gradient_accumulation_steps .. 1
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] gradient_clipping ............ 0.0
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] gradient_predivide_factor .... 1.0
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] initial_dynamic_scale ........ 4294967296
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] loss_scale ................... 0
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] memory_breakdown ............. False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] optimizer_legacy_fusion ...... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] optimizer_name ............... None
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] optimizer_params ............. None
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] pld_enabled .................. False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] pld_params ................... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] prescale_gradients ........... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_change_rate ......... 0.001
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_groups .............. 1
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_offset .............. 1000
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_period .............. 1000
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_rounding ............ 0
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_start_bits .......... 16
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_target_bits ......... 8
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_training_enabled .... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_type ................ 0
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] quantize_verbose ............. False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] scheduler_name ............... None
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] scheduler_params ............. None
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] sparse_attention ............. None
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] sparse_gradients_enabled ..... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] steps_per_print .............. inf
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] tensorboard_enabled .......... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] tensorboard_job_name ......... DeepSpeedJobName
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] tensorboard_output_path ......
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] train_batch_size ............. 16
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] train_micro_batch_size_per_gpu 8
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] use_quantizer_kernel ......... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] wall_clock_breakdown ......... False
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] world_size ................... 2
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] zero_allow_untested_optimizer True
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] zero_config .................. {
"stage": 3,
"contiguous_gradients": true,
"reduce_scatter": true,
"reduce_bucket_size": 5.000000e+08,
"allgather_partitions": true,
"allgather_bucket_size": 5.000000e+08,
"overlap_comm": true,
"load_from_fp32_weights": true,
"elastic_checkpoint": false,
"offload_param": null,
"offload_optimizer": null,
"sub_group_size": 1.000000e+09,
"prefetch_bucket_size": 5.000000e+07,
"param_persistence_threshold": 1.000000e+05,
"max_live_parameters": 1.000000e+09,
"max_reuse_distance": 1.000000e+09,
"gather_16bit_weights_on_model_save": false,
"ignore_unused_parameters": true,
"round_robin_gradients": false,
"legacy_stage1": false
}
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] zero_enabled ................. True
[2023-04-11 16:59:16,621] [INFO] [config.py:1063:print] zero_optimization_stage ...... 3
[2023-04-11 16:59:16,622] [INFO] [config.py:1065:print] json = {
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 8,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "none"
},
"offload_param": {
"device": "none"
},
"stage3_gather_16bit_weights_on_model_save": false
},
"steps_per_print": inf,
"fp16": {
"enabled": true,
"auto_cast": true
},
"bf16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
}
Using /home/hadoop-hmart-waimai-rank/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0004420280456542969 seconds
04/11/2023 16:59:16 0:INFO: set weight type
04/11/2023 16:59:16 0:INFO: Move text_encode and vae to gpu and cast to weight_dtype
04/11/2023 16:59:16 0:INFO: [starship] accelerate not support all python data type
04/11/2023 16:59:16 0:INFO: ***** Running training *****
04/11/2023 16:59:16 0:INFO: Num examples = 400
04/11/2023 16:59:16 0:INFO: Num Epochs = 100
04/11/2023 16:59:16 0:INFO: Instantaneous batch size per device = 8
04/11/2023 16:59:16 0:INFO: Total train batch size (w. parallel, distributed & accumulation) = 16
04/11/2023 16:59:16 0:INFO: Gradient Accumulation steps = 1
04/11/2023 16:59:16 0:INFO: Total optimization steps = 2500
Steps: 0%| | 0/2500 [00:00<?, ?it/s]Parameter containing:
tensor([], device='cuda:0', dtype=torch.float16)
Traceback (most recent call last):
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/app/main.py", line 29, in <module>
main()
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/app/main.py", line 21, in main
run_aigc(args)
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/app/task.py", line 61, in run_aigc
train(args)
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/diffuser/train_txt2img.py", line 526, in train
encoder_hidden_states = text_encoder(batch["input_ids"].to(accelerator.device))[0]
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 823, in forward
return self.text_model(
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 719, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 234, in forward
inputs_embeds = self.token_embedding(input_ids)
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 160, in forward
return F.embedding(
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D
Parameter containing:
tensor([], device='cuda:1', dtype=torch.float16)
Steps: 0%| | 0/2500 [00:05<?, ?it/s]
Traceback (most recent call last):
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/app/main.py", line 29, in <module>
main()
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/app/main.py", line 21, in main
run_aigc(args)
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/app/task.py", line 61, in run_aigc
train(args)
File "/workdir/fengyu05/501587/2924467c592a472aa750166c252e166d/src/diffuser/train_txt2img.py", line 526, in train
encoder_hidden_states = text_encoder(batch["input_ids"].to(accelerator.device))[0]
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 823, in forward
return self.text_model(
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 719, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 234, in forward
inputs_embeds = self.token_embedding(input_ids)
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 160, in forward
return F.embedding(
File "/usr/local/conda/lib/python3.9/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 32336) of binary: /usr/local/conda/bin/python
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/conda/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launch.py", line 195, in <module>
main()
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launch.py", line 191, in main
launch(args)
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launch.py", line 176, in launch
run(args)
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
src/app/main.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2023-04-11_16:59:28
host : workbenchxwmx64350ee0-f9ggd
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 32337)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-04-11_16:59:28
host : workbenchxwmx64350ee0-f9ggd
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 32336)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
real 0m26.485s
user 0m23.241s
sys 0m22.802s
```
I read https://github.com/huggingface/diffusers/issues/1865, https://www.deepspeed.ai/tutorials/zero/#allocating-massive-megatron-lm-models, and https://deepspeed.readthedocs.io/en/latest/zero3.html#deepspeed.zero.GatheredParameters, and modified /usr/local/conda/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py as follows:
```
self.token_embedding = nn.Embedding(config.vocab_size, embed_dim)
with deepspeed.zero.GatheredParameters(self.token_embedding.weight,
                                       modifier_rank=0):
    # Initialize the token embeddings.
    nn.init.uniform_(self.token_embedding.weight, -1.0, 1)

# deepspeed.zero.register_external_parameter(self, self.token_embedding.weight)
```
but it does not work.
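The empty tensor printed just before the crash (`tensor([], device='cuda:0', dtype=torch.float16)`) is the key symptom: under ZeRO-3 each rank holds only a shard of every parameter, so outside a gather the embedding weight appears as a 0-element 1-D tensor, which `F.embedding` rejects. A minimal sketch reproducing the exact error with plain PyTorch (no DeepSpeed needed; the empty fp16 tensor stands in for the partitioned parameter):

```python
import torch
import torch.nn.functional as F

# Stand-in for a ZeRO-3 partitioned parameter: each rank sees an empty,
# 1-D fp16 tensor until DeepSpeed gathers the full weight.
partitioned_weight = torch.empty(0, dtype=torch.float16)
input_ids = torch.tensor([[1, 2, 3]])

try:
    F.embedding(input_ids, partitioned_weight)
except RuntimeError as exc:
    print(exc)  # 'weight' must be 2-D

# With the full (gathered) 2-D weight, the same call succeeds.
full_weight = torch.randn(10, 4)
out = F.embedding(input_ids, full_weight)
print(out.shape)  # torch.Size([1, 3, 4])
```

This also suggests why the patch above does not help: `GatheredParameters` re-partitions the weight as soon as its context exits, so gathering in `__init__` leaves the forward pass with an empty shard again. The gather has to happen at forward time, which DeepSpeed does via hooks only for modules it manages (i.e. modules passed through `accelerator.prepare`/`deepspeed.initialize`).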
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am also experiencing the same issue as mentioned in https://github.com/huggingface/diffusers/issues/1865, therefore I have copied the reproduction steps from the original post.
1. The accelerate config (`accelerate.yaml`, referenced by the launch command below):
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_config_file: /home/kas/zero_stage3_offload_config.json
zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 4
use_cpu: false
```
2. The DeepSpeed config, `/home/kas/zero_stage3_offload_config.json`:
```
{
"train_micro_batch_size_per_gpu": 16,
"gradient_accumulation_steps":2,
"train_batch_size":128,
"steps_per_print": 2,
"gradient_clipping": 1,
"zero_optimization": {
"stage": 3,
"allgather_partitions": false,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"stage3_max_live_parameters" : 2e8,
"stage3_max_reuse_distance" : 2e8,
"stage3_prefetch_bucket_size": 2e8,
"stage3_param_persistence_threshold": 2e8,
"sub_group_size" : 2e8,
"round_robin_gradients": true
},
"bf16": {
"enabled": true
}
}
```
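As a quick sanity check on any DeepSpeed config: `train_batch_size` must equal `train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size`, or DeepSpeed aborts at initialization. The values above are consistent (taking `num_processes: 4` from the accelerate config as the world size):

```python
# Values from zero_stage3_offload_config.json and the accelerate config above.
micro_batch_per_gpu = 16   # train_micro_batch_size_per_gpu
grad_accum_steps = 2       # gradient_accumulation_steps
world_size = 4             # num_processes in the accelerate config

effective = micro_batch_per_gpu * grad_accum_steps * world_size
print(effective)  # 128, matching "train_batch_size": 128
```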
4.
```
git clone https://github.com/huggingface/diffusers.git
cd examples/text_to_image
pip install deepspeed
export MODEL_NAME="stabilityai/stable-diffusion-2"
export dataset_name="lambdalabs/pokemon-blip-captions"
accelerate launch --config_file ./accelerate.yaml --mixed_precision="fp16" train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--use_ema \
--resolution=224 --center_crop --random_flip \
--train_batch_size=16 \
--gradient_accumulation_steps=2 \
--gradient_checkpointing \
--max_train_steps=500 \
--learning_rate=6e-5 \
--max_grad_norm=1 \
--lr_scheduler="constant_with_warmup" --lr_warmup_steps=0 \
--output_dir="sd-pokemon-model"
```
5.
```
0%| | 0/500 [00:00<?, ?it/s] Steps: 0%| | 0/500 [00:00<?, ?it/s]Traceback (most recent call last):
File "train_text_to_image.py ", line 718, in <module>
main()
File "train_text_to_image.py ", line 648, in main
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
File "/opt/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/miniconda3/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py", line 739, in forward
return_dict=return_dict,
File "/opt/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/miniconda3/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py", line 636, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
File "/opt/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/miniconda3/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py", line 165, in forward
inputs_embeds = self.token_embedding(input_ids)
File "/opt/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/miniconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/opt/miniconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2183, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D
```
### Expected behavior
Training should run without errors when ZeRO Stage 3 is enabled.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22705/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22704
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22704/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22704/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22704/events
|
https://github.com/huggingface/transformers/pull/22704
| 1,661,886,403
|
PR_kwDOCUB6oc5N_gtu
| 22,704
|
Remove mask
|
{
"login": "magdacisowska",
"id": 28646893,
"node_id": "MDQ6VXNlcjI4NjQ2ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28646893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/magdacisowska",
"html_url": "https://github.com/magdacisowska",
"followers_url": "https://api.github.com/users/magdacisowska/followers",
"following_url": "https://api.github.com/users/magdacisowska/following{/other_user}",
"gists_url": "https://api.github.com/users/magdacisowska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/magdacisowska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/magdacisowska/subscriptions",
"organizations_url": "https://api.github.com/users/magdacisowska/orgs",
"repos_url": "https://api.github.com/users/magdacisowska/repos",
"events_url": "https://api.github.com/users/magdacisowska/events{/privacy}",
"received_events_url": "https://api.github.com/users/magdacisowska/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,681
| 1,681
| 1,681
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22704/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22704",
"html_url": "https://github.com/huggingface/transformers/pull/22704",
"diff_url": "https://github.com/huggingface/transformers/pull/22704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22704.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22703
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22703/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22703/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22703/events
|
https://github.com/huggingface/transformers/pull/22703
| 1,661,747,951
|
PR_kwDOCUB6oc5N_DhQ
| 22,703
|
Make vilt, switch_transformers compatible with model parallelism
|
{
"login": "Xrenya",
"id": 51479797,
"node_id": "MDQ6VXNlcjUxNDc5Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/51479797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xrenya",
"html_url": "https://github.com/Xrenya",
"followers_url": "https://api.github.com/users/Xrenya/followers",
"following_url": "https://api.github.com/users/Xrenya/following{/other_user}",
"gists_url": "https://api.github.com/users/Xrenya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xrenya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xrenya/subscriptions",
"organizations_url": "https://api.github.com/users/Xrenya/orgs",
"repos_url": "https://api.github.com/users/Xrenya/repos",
"events_url": "https://api.github.com/users/Xrenya/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xrenya/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I fixed the issue, but, unfortunately, I have closed and reopened PR to trigger the CircleCI.",
"Thanks a lot!"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# Make vilt, switch_transformers compatible with model parallelism
Fixes https://github.com/huggingface/transformers/issues/22561#issue-1653950092
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- PyTorch: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22703/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22703",
"html_url": "https://github.com/huggingface/transformers/pull/22703",
"diff_url": "https://github.com/huggingface/transformers/pull/22703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22703.patch",
"merged_at": 1681383031000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22702
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22702/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22702/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22702/events
|
https://github.com/huggingface/transformers/pull/22702
| 1,661,724,221
|
PR_kwDOCUB6oc5N--lJ
| 22,702
|
Enable naive Pipeline Parallelism training for Gpt neox japanese and san japanese
|
{
"login": "mayankagarwals",
"id": 39498938,
"node_id": "MDQ6VXNlcjM5NDk4OTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayankagarwals",
"html_url": "https://github.com/mayankagarwals",
"followers_url": "https://api.github.com/users/mayankagarwals/followers",
"following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}",
"gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions",
"organizations_url": "https://api.github.com/users/mayankagarwals/orgs",
"repos_url": "https://api.github.com/users/mayankagarwals/repos",
"events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayankagarwals/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger Could you review this once?",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
As suggested in https://github.com/huggingface/transformers/issues/22561, moved the labels to the same device as the logits for GPT NeoX Japanese and GPT-SAN Japanese.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22702/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22702",
"html_url": "https://github.com/huggingface/transformers/pull/22702",
"diff_url": "https://github.com/huggingface/transformers/pull/22702.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22702.patch",
"merged_at": 1681218377000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22701
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22701/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22701/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22701/events
|
https://github.com/huggingface/transformers/pull/22701
| 1,661,707,578
|
PR_kwDOCUB6oc5N-7Kr
| 22,701
|
[test] add CI workflow for VCS installation
|
{
"login": "XuehaiPan",
"id": 16078332,
"node_id": "MDQ6VXNlcjE2MDc4MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuehaiPan",
"html_url": "https://github.com/XuehaiPan",
"followers_url": "https://api.github.com/users/XuehaiPan/followers",
"following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}",
"gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions",
"organizations_url": "https://api.github.com/users/XuehaiPan/orgs",
"repos_url": "https://api.github.com/users/XuehaiPan/repos",
"events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuehaiPan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22701). All of your documentation changes will be reflected on that endpoint.",
"Or we can just leave the setup as is and there won't be any need for this new check :-)",
"> Or we can just leave the setup as is and there won't be any need for this new check :-)\r\n\r\nThe CI workflows in this PR can also benefit to check the viability of `transformers`' dependencies.\r\n\r\nI think the ultimate solution is to ask the user to upgrade their `pip` first in the documentation:\r\n\r\n```bash\r\npip install --upgrade pip setuptools # upgrade to support PEP 660\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,684
| 1,684
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add a CI test for checking VCS installation (via URL or git repo). This workflow prevents potential breakage for [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) installation method.
Ref:
- #22539
- #22658
- #22599
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22701/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22701",
"html_url": "https://github.com/huggingface/transformers/pull/22701",
"diff_url": "https://github.com/huggingface/transformers/pull/22701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22701.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22700
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22700/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22700/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22700/events
|
https://github.com/huggingface/transformers/pull/22700
| 1,661,520,687
|
PR_kwDOCUB6oc5N-Tv-
| 22,700
|
Add support of output_scores to flax models
|
{
"login": "hannan72",
"id": 8229163,
"node_id": "MDQ6VXNlcjgyMjkxNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hannan72",
"html_url": "https://github.com/hannan72",
"followers_url": "https://api.github.com/users/hannan72/followers",
"following_url": "https://api.github.com/users/hannan72/following{/other_user}",
"gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannan72/subscriptions",
"organizations_url": "https://api.github.com/users/hannan72/orgs",
"repos_url": "https://api.github.com/users/hannan72/repos",
"events_url": "https://api.github.com/users/hannan72/events{/privacy}",
"received_events_url": "https://api.github.com/users/hannan72/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22700). All of your documentation changes will be reflected on that endpoint.",
"@sanchit-gandhi @ArthurZucker \r\nCould you please review this PR?",
"I get a new error on CI for test codes of all models:\r\n```AttributeError: module 'jax' has no attribute 'Array'```\r\nHowever there was not such error two weeks ago (while my commit all tests passed).\r\nIs there any updates on `Jax` that is not compatible with?\r\nAny ideas @sanchit-gandhi ?\r\n\r\nUpdate: The above mentioned error has been raised from `optax` source codes:\r\n```\r\nexamples/flax/test_flax_examples.py:41: in <module>\r\n import run_clm_flax\r\nexamples/flax/language-modeling/run_clm_flax.py:40: in <module>\r\n import optax\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/optax/__init__.py:18: in <module>\r\n from optax._src.alias import adabelief\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/optax/_src/alias.py:23: in <module>\r\n from optax._src import clipping\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/optax/_src/clipping.py:130: in <module>\r\n ) -> Tuple[List[chex.Array], jax.Array]:\r\nE AttributeError: module 'jax' has no attribute 'Array' \r\n```\r\nDoes it have relevance to this recent merge: https://github.com/huggingface/transformers/pull/22895 ?",
"> I get a new error on CI for test codes of all models: `AttributeError: module 'jax' has no attribute 'Array'` However there was not such error two weeks ago (while my commit all tests passed). Is there any updates on `Jax` that is not compatible with? Any ideas @sanchit-gandhi ?\r\n> \r\n> Update: The above mentioned error has been raised from `optax` source codes:\r\n> \r\n> ```\r\n> examples/flax/test_flax_examples.py:41: in <module>\r\n> import run_clm_flax\r\n> examples/flax/language-modeling/run_clm_flax.py:40: in <module>\r\n> import optax\r\n> ../.pyenv/versions/3.8.12/lib/python3.8/site-packages/optax/__init__.py:18: in <module>\r\n> from optax._src.alias import adabelief\r\n> ../.pyenv/versions/3.8.12/lib/python3.8/site-packages/optax/_src/alias.py:23: in <module>\r\n> from optax._src import clipping\r\n> ../.pyenv/versions/3.8.12/lib/python3.8/site-packages/optax/_src/clipping.py:130: in <module>\r\n> ) -> Tuple[List[chex.Array], jax.Array]:\r\n> E AttributeError: module 'jax' has no attribute 'Array' \r\n> ```\r\n> \r\n> Does it have relevance to this recent merge: #22895 ?\r\n\r\nI've found that this issue was related to the optax version (which installed the 0.1.5). In the updated version of transformer repo, the version to be installed is forced to be 0.1.4",
"Good catch regarding the `jax.Array` issue! I need to un-pin JAX on Transformers since new Optax / Chex versions are running ahead https://github.com/huggingface/transformers/issues/19842 Will do this tomorrow 🤗",
"Thanks for the review @gante 🙌 See https://github.com/huggingface/transformers/pull/22700#discussion_r1182794360 for the next steps @hannan72 🚀",
"Also see related: #22700\r\n\r\nThis might get merged before this PR, in which case we can rebase to get the beam score fixes from main! Your changes will still be valuable for greedy search @hannan72 🤗",
"Hey @hannan72! This PR is looking in good shape - would you like to get it over the line with the last bits of integration?",
"> Hey @hannan72! This PR is looking in good shape - would you like to get it over the line with the last bits of integration?\r\n\r\nSorry for late response. I was busy on a product release.\r\nYes I really want to make it final and put it in the next release of transformers. What is remaining?\r\nPlease clarify the remaining steps to finalize the PR and close this issue.",
"Awesome! It's more or less as you left it - the major \"TODO\" is getting the correct vocab size in the first forward pass (see https://github.com/huggingface/transformers/pull/22700#discussion_r1178185868)",
"> Awesome! It's more or less as you left it - the major \"TODO\" is getting the correct vocab size in the first forward pass (see [#22700 (comment)](https://github.com/huggingface/transformers/pull/22700#discussion_r1178185868))\r\n\r\nI had made a try on it and posted the result:\r\n\r\n\r\nI tried to do this. But there was an error stopped me working on it.\r\nI get the `vocab_size` from logits shape in the first step as follows:\r\n```\r\nnext_tokens_scores = logits_processor(state.sequences, logits, state.cur_len)\r\nnext_token = jnp.argmax(next_tokens_scores, axis=-1)\r\nscores = state.scores\r\nif output_scores and state.scores is None:\r\n vocab_size = next_tokens_scores.shape[-1]\r\n scores = jnp.ones((batch_size, max_length, vocab_size)) * np.array(-1.0e7)\r\ntokens_scores = scores.at[:, state.cur_len, :].set(next_tokens_scores) if output_scores else None\r\n```\r\n\r\nBut in the line:\r\nhttps://github.com/huggingface/transformers/blob/312b104ff65514736c0475814fec19e47425b0b5/src/transformers/generation/flax_utils.py#L641\r\n it checks that tensor shapes between runs should be exactly same, which causes the following error :\r\n```\r\nException has occurred: TypeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) body_fun output and input must have same type structure, got PyTreeDef(CustomNode(GreedyState[()], [*, *, *, *, *, {'decoder_attention_mask': *, 'decoder_position_ids': *, 'encoder_attention_mask': None, 'encoder_outputs': CustomNode(FlaxBaseModelOutput[()], [*, None, None]),...\r\n```\r\n\r\nSo it seems the second suggestion is not going to work here. Because in Jax, every tensor shape should be pre-defined before deployment while we get the `vocab_size` during the deployment.",
"The idea here would be to run the first pass outside of the lax while loop (which we already do), then get the logits shape, then run the loop with the correct vocab size. Picking up on L730:\r\nhttps://github.com/huggingface/transformers/blob/9ade58f0555430cec851e307c83c3a56c4a77d0b/src/transformers/generation/flax_utils.py#L730\r\n\r\nThis would look something like:\r\n\r\n```python\r\n # The very first prompt often has sequence length > 1, so run outside of `lax.while_loop` to comply with TPU\r\n if input_ids.shape[1] > 1:\r\n state = sample_search_body_fn(state)\r\n\r\n # now get the vocab size\r\n vocab_size = state.logits.shape[-1]\r\n\r\n # do the other stuff that we need to do to init the state scores\r\n # ...\r\n\r\n # now run the main body\r\n if not trace:\r\n state = self._run_loop_in_debug(sample_search_cond_fn, sample_search_body_fn, state)\r\n else:\r\n state = lax.while_loop(sample_search_cond_fn, sample_search_body_fn, state)\r\n```",
"> The idea here would be to run the first pass outside of the lax while loop (which we already do), then get the logits shape, then run the loop with the correct vocab size. Picking up on L730:\r\n> \r\n> https://github.com/huggingface/transformers/blob/9ade58f0555430cec851e307c83c3a56c4a77d0b/src/transformers/generation/flax_utils.py#L730\r\n> \r\n> This would look something like:\r\n> \r\n> ```python\r\n> # The very first prompt often has sequence length > 1, so run outside of `lax.while_loop` to comply with TPU\r\n> if input_ids.shape[1] > 1:\r\n> state = sample_search_body_fn(state)\r\n> \r\n> # now get the vocab size\r\n> vocab_size = state.logits.shape[-1]\r\n> \r\n> # do the other stuff that we need to do to init the state scores\r\n> # ...\r\n> \r\n> # now run the main body\r\n> if not trace:\r\n> state = self._run_loop_in_debug(sample_search_cond_fn, sample_search_body_fn, state)\r\n> else:\r\n> state = lax.while_loop(sample_search_cond_fn, sample_search_body_fn, state)\r\n> ```\r\n\r\nI implemented your suggestion by applying following changes in `greedy_search_body_fn` and get the `vocab_size` from the first run as follows:\r\n\r\n```\r\n def greedy_search_body_fn(state):\r\n \"\"\"state update fn.\"\"\"\r\n model_outputs = model(state.running_token, params=params, **state.model_kwargs)\r\n logits = model_outputs.logits[:, -1]\r\n\r\n # apply min_length, ...\r\n next_tokens_scores = logits_processor(state.sequences, logits, state.cur_len)\r\n\r\n next_token = jnp.argmax(next_tokens_scores, axis=-1)\r\n if output_scores:\r\n if state.scores is not None:\r\n tokens_scores = state.scores.at[:, state.cur_len, :].set(next_tokens_scores)\r\n else:\r\n scores = jnp.ones((batch_size, max_length, next_tokens_scores.shape[-1])) * np.array(-1.0e7)\r\n tokens_scores = scores.at[:, state.cur_len, :].set(next_tokens_scores)\r\n else:\r\n tokens_scores = None\r\n next_token = next_token * ~state.is_sent_finished + pad_token_id * state.is_sent_finished\r\n 
next_is_sent_finished = state.is_sent_finished | (next_token == eos_token_id)\r\n next_token = next_token[:, None]\r\n\r\n next_sequences = lax.dynamic_update_slice(state.sequences, next_token, (0, state.cur_len))\r\n next_model_kwargs = self.update_inputs_for_generation(model_outputs, state.model_kwargs)\r\n return GreedyState(\r\n cur_len=state.cur_len + 1,\r\n sequences=next_sequences,\r\n scores=tokens_scores,\r\n running_token=next_token,\r\n is_sent_finished=next_is_sent_finished,\r\n model_kwargs=next_model_kwargs,\r\n )\r\n\r\n # The very first prompt often has sequence length > 1, so run outside of `lax.while_loop` to comply with TPU\r\n # Besides, when output_scores is true, to return scores vocab_size of the model is got from first run.\r\n if input_ids.shape[1] > 1 or output_scores:\r\n state = greedy_search_body_fn(state)\r\n\r\n```",
"@sanchit-gandhi \r\nI think this PR is ready to merge. All tests are passed.\r\nCould you please review it again and merge it?",
"@sanchit-gandhi \r\nI have checked out to my latest commit (b82ef360c5d819efc10298344d7d2fb4c33e1c47) and run a test as follows:\r\n\r\n* Model_name: whisper_medium with flax inference\r\n* GPU: A100-40GB\r\n* Input audio: 5seconds\r\n* transformers: git+https://github.com/huggingface/transformers.git@b82ef360c5d819efc10298344d7d2fb4c33e1c47\r\n* Pytorch: 2.0.0\r\n* jax: [cuda12_local] 0.4.11\r\n\r\nA. Normal Inference (while `output_scores=False`):\r\nThe model has been deployed for 5 sequence runs. Inference time is ~0.2 seconds:\r\n\r\n```\r\n model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True)\r\n jit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"language\", \"task\"])\r\n runtime=[]\r\nfor i in range(5):\r\n start_time = time.time()\r\n input_features = jnp.array(input_features, dtype=jnp.float16)\r\n pred_ids = jit_generate(input_features, max_length=128, language='<|de|>', task =\"transcribe\")\r\n runtime.append(time.time() - start_time)\r\nprint(\"Inference time: \", runtime)\r\nprint(\"output scores: \", scores)\r\n ```\r\nresult:\r\nInference time: [57.01693844795227, 0.22632288932800293, 0.1981194019317627, 0.19892430305480957, 0.19736719131469727]\r\noutput scores: None\r\n\r\n\r\nB. Inference with confidence scores (while `output_scores=True`):\r\nThe model has been deployed for 5 sequence runs. 
Inference time is also ~0.2 seconds:\r\n\r\n```\r\n model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True)\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"language\", \"task\", \"output_hidden_states\", \"output_scores\", \"return_dict_in_generate\"])\r\n runtime=[]\r\nfor i in range(5):\r\n start_time = time.time()\r\n input_features = jnp.array(input_features, dtype=jnp.float16)\r\n pred_ids = jit_generate(input_features, max_length=128, language='<|de|>', task =\"transcribe\",\r\n output_scores=True, output_hidden_states=True, return_dict_in_generate=True)\r\n runtime.append(time.time() - start_time)\r\nprint(\"Inference time: \", runtime)\r\nprint(\"output scores: \", scores)\r\n```\r\nresult:\r\nInference time: [82.8741066455841, 0.20504498481750488, 0.19746017456054688, 0.1972200870513916, 0.1973130702972412]\r\noutput scores: [[[-10000000. -10000000. -10000000. ... -10000000. -10000000. -10000000.]\r\n [ -inf -inf -inf ... -inf -inf -inf]\r\n [ -inf -inf -inf ... -inf -inf -inf]\r\n ...\r\n [-10000000. -10000000. -10000000. ... -10000000. -10000000. -10000000.]\r\n [-10000000. -10000000. -10000000. ... -10000000. -10000000. -10000000.]\r\n [-10000000. -10000000. -10000000. ... -10000000. -10000000. -10000000.]]]\r\n\r\n\r\nIt should be also noted that the result of model inference are exactly the same. The only change is that the first run takes more when `output_score=True` But next inferences are the approximately the same value.\r\n\r\nCould you please review and merge this PR? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello @sanchit-gandhi \r\nSorry for late response\r\nI added test codes for output_scores feature of Flax that you asked me. But the PR is closed automatically by github-actions.\r\nMy latest commit that included some tests is: https://github.com/hannan72/transformers/commit/0416becf86c65a3f32e72314715b79f5f84f52ce\r\n\r\nCould you please re-open the PR to run new tests?",
"Re-opened and running the tests! Thanks @hannan72! Let me know when this is ready for a re-review",
"Thank you @sanchit-gandhi for re-opening this PR.\r\nI've add some tests based on what you asked me for. Could you please review my latest commit (https://github.com/huggingface/transformers/pull/22700/commits/0416becf86c65a3f32e72314715b79f5f84f52ce)?\r\n",
"@sanchit-gandhi \r\nHave you reviewed my added test codes?",
"Hey @hannan72 - yes I did! Please see the comment I left a couple of weeks ago: https://github.com/huggingface/transformers/pull/22700#discussion_r1316165469",
"Let me know if you need any help here @hannan72! More than happy to assist with the integration and think you're pretty close to finishing!",
"> Let me know if you need any help here @hannan72! More than happy to assist with the integration and think you're pretty close to finishing!\r\n\r\n@sanchit-gandhi many thanks for your reviews of the PR of @hannan72! We would need your help in finalizing the PR. As you mentioned in your comment that the HF team already tested that the ids do not change, I think it would be much easier if you extend the existing test case to show that also the scores are correct. \r\n\r\nQuote: @sanchit-gandhi\r\n\"At the moment, we've tested that output_scores=True doesn't change the ids, but not that the scores are correct\"\r\n\r\nWe would appreciate your help a lot here to get this PR over the finish line since you know your test code much better than we do and for you it would be much faster. Do you think you can ask someone of your team to help to get the PR finalized and merged? \r\n",
"Cool to see that you're interested in this PR @teddius! I sadly won't have the bandwidth to work on this PR directly, but am more than happy to continue with PR reviews and answering any questions/queries. If @hannan72 is able to, it seems fitting that he gets the opportunity to finish the PR that he started! Otherwise, we can open this one up to the community and see if anyone is able to help here.",
"@sanchit-gandhi understand, many thanks for your fast reply. @hannan72 will be busy on other tasks so please feel free to open up the task for the community, so we can get some help in finishing the last test case. Many thanks for your support along the way!\r\n",
"Cool, sounds good @teddius! See https://github.com/huggingface/transformers/issues/22612#issuecomment-1753324050 for the community contribution request."
] | 1,681
| 1,701
| 1,699
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Flax models do not support `output_scores` when the `generate()` method is called, unlike the PyTorch models, which fully support this feature.
The naming and format of these parameters follow the PyTorch model code (`utils.py`) as closely as possible.
## Before submitting
- [x] This PR adds support of `output_scores` to flax models.
- [x] Flax Whisper model handles `output_scores` and `num_beams` parameters to consider during generate().
## Who can review?
@sanchit-gandhi
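The key design constraint discussed in this PR is that `lax.while_loop` requires the carried state to keep a fixed shape between iterations, so the score buffer can only be allocated after a first forward pass reveals the vocabulary size. Below is a framework-free sketch of that pattern; the helper name and the toy `step_fn` are hypothetical illustrations, not the PR's actual code:

```python
def greedy_generate_with_scores(step_fn, init_token, max_length, fill=-1.0e7):
    # First step runs outside the main loop (in the Flax implementation this
    # also handles prompts longer than one token); its output reveals vocab_size.
    token, step_scores = step_fn(init_token)
    vocab_size = len(step_scores)
    # Preallocate the score buffer now that the shape is known.
    scores = [[fill] * vocab_size for _ in range(max_length)]
    scores[0] = list(step_scores)
    tokens = [token]
    # Main loop: shapes are now fixed, matching lax.while_loop's requirement
    # that the carried state has the same structure on every iteration.
    for cur_len in range(1, max_length):
        token, step_scores = step_fn(token)
        scores[cur_len] = list(step_scores)
        tokens.append(token)
    return tokens, scores
```

A toy `step_fn` that deterministically advances the token shows the buffer being filled step by step while its shape stays constant.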
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22700/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22700",
"html_url": "https://github.com/huggingface/transformers/pull/22700",
"diff_url": "https://github.com/huggingface/transformers/pull/22700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22700.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22699
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22699/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22699/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22699/events
|
https://github.com/huggingface/transformers/issues/22699
| 1,661,468,860
|
I_kwDOCUB6oc5jCAC8
| 22,699
|
BF16 on AMD MI250x GPU
|
{
"login": "jglaser",
"id": 1899768,
"node_id": "MDQ6VXNlcjE4OTk3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1899768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jglaser",
"html_url": "https://github.com/jglaser",
"followers_url": "https://api.github.com/users/jglaser/followers",
"following_url": "https://api.github.com/users/jglaser/following{/other_user}",
"gists_url": "https://api.github.com/users/jglaser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jglaser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jglaser/subscriptions",
"organizations_url": "https://api.github.com/users/jglaser/orgs",
"repos_url": "https://api.github.com/users/jglaser/repos",
"events_url": "https://api.github.com/users/jglaser/events{/privacy}",
"received_events_url": "https://api.github.com/users/jglaser/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Ah this is a bit of a tricky situation. Is there a way to get the difference between an Nvidia GPU and an AMD GPU from PyTorch? Most users ignore warnings, so I'd really prefer to keep the error, but of course we can refine the test to ignore the GPUs that are not concerned.",
"Perhaps use `torch.cuda.get_device_name()`?\r\nhttps://stackoverflow.com/questions/48152674/how-do-i-check-if-pytorch-is-using-the-gpu\r\n\r\nOn NVIDIA V100:\r\n```\r\n>>> torch.cuda.get_device_name()\r\n'Tesla V100-SXM2-16GB'\r\n```\r\n\r\non AMD MI-250x:\r\n```\r\n>>> torch.cuda.get_device_name()\r\n''\r\n```\r\n(empty string)\r\n\r\nA simple check could consist of testing for NVIDIA GPU and then erroring out if not finding the right one, otherwise just issuing a warning. In any case, attempting to use BF16 kernels on a non-supported GPU would probable produce pertinent error messages later on.\r\n\r\n```\r\ndevice_name = torch.cuda.get_device_name()\r\nnvidia_models = [ 'GeForce', 'Tesla' ]\r\nif any([ model in device_name for model in nvidia_models ]):\r\n # check for A100 and above\r\nelse:\r\n # raise a warning that BF16 may not be supported and may cause exceptions during training or inference, and that the\r\n # user should know what they're doing\r\n```\r\n\r\nAlternatively, provide a `Trainer` argument to override this error.\r\n",
"The `get_device_name` sounds like a good option. Would you like to suggest this change in a PR?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> ### System Info\r\n> Hi,\r\n> \r\n> on ROCm, I am seeing the following error with BF16\r\n> \r\n> ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0 raise ValueError(\r\n> \r\n> Since the error and underlying check is NVIDIA specific and can be ignored on AMD MI250X GPUs, it would be good to turn it into a warning to not have to hack the `utils/import_utils.py` source.\r\n> \r\n> ### Who can help?\r\n> @sgugger\r\n> \r\n> ### Information\r\n> * [ ] The official example scripts\r\n> * [ ] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [ ] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> Run any huggingface model with `--bf16` command line option on an AMD MI250X GPU\r\n> \r\n> ### Expected behavior\r\n> Training work\r\n\r\nHI, do you have AMD MI250x GPU linux or windows driver?"
] | 1,681
| 1,697
| 1,684
|
CONTRIBUTOR
| null |
### System Info
Hi,
on ROCm, I am seeing the following error with BF16
```
ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0
raise ValueError(
```
Since the error and underlying check is NVIDIA specific and can be ignored on AMD MI250X GPUs, it would be good to turn it into a warning to not have to hack the `utils/import_utils.py` source.
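A stand-alone sketch of the check proposed here (the function name, marker list, and signature are illustrative, not the actual `transformers` code): raise only when the device is identifiably NVIDIA and pre-Ampere, and fall back to a warning for other vendors such as the MI250X, whose `torch.cuda.get_device_name()` may return an empty string under ROCm:

```python
import warnings

# Substrings that identify an NVIDIA device name as reported by
# torch.cuda.get_device_name(); AMD MI250X under ROCm may report an
# empty string, which falls through to the warning branch.
_NVIDIA_MARKERS = ("NVIDIA", "GeForce", "Tesla", "Quadro", "TITAN")

def check_bf16_support(device_name, major_compute_capability=None):
    """Raise only for NVIDIA GPUs known to lack BF16; warn for unknown vendors."""
    if any(marker in device_name for marker in _NVIDIA_MARKERS):
        # NVIDIA path: BF16 requires Ampere or newer (compute capability >= 8).
        if major_compute_capability is not None and major_compute_capability < 8:
            raise ValueError(
                "Your setup doesn't support bf16/gpu: Ampere (sm_80) or newer required."
            )
    else:
        # Non-NVIDIA device: the Ampere check is meaningless here, so downgrade
        # the hard error to a warning and let the user proceed.
        warnings.warn("BF16 support could not be verified for this device; proceeding anyway.")
```

Attempting BF16 kernels on genuinely unsupported hardware would still surface a runtime error later, so the warning does not hide real failures.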
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run any huggingface model with `--bf16` command line option on an AMD MI250X GPU
### Expected behavior
Training works
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22699/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22698
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22698/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22698/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22698/events
|
https://github.com/huggingface/transformers/pull/22698
| 1,661,328,027
|
PR_kwDOCUB6oc5N9o07
| 22,698
|
Use code on the Hub from another repo
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
This makes it easier to maintain a single source of truth when using the code-on-the-Hub feature, by storing the repo ID (in addition to the module containing the class) inside the config. Thus, when saving and re-pushing a model that uses code on the Hub, the code is no longer copied over; a reference to the original repo containing the code is stored instead.
This might be breaking if some users relied on the code being copied over when `save_pretrained(xxx)` is executed. To restore that old behavior, one only needs to call the `register_for_auto_class` method (see the last example below). With the new behavior:
```py
from transformers import AutoModel
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
model.save_pretrained(some_path)
```
then `some_path` only contains the config and weights of the model. The config will contain a reference to the repo where the code of the model is defined (`hf-internal-testing/test_dynamic_model`) so that the model can be reloaded via
```py
AutoModel.from_pretrained(some_path)
```
To get the custom code file copied over (the behavior before this PR), just do:
```py
from transformers import AutoModel
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
model.register_for_auto_class("AutoModel")
model.save_pretrained(some_path)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22698/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22698",
"html_url": "https://github.com/huggingface/transformers/pull/22698",
"diff_url": "https://github.com/huggingface/transformers/pull/22698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22698.patch",
"merged_at": 1681745789000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22697
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22697/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22697/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22697/events
|
https://github.com/huggingface/transformers/pull/22697
| 1,661,322,119
|
PR_kwDOCUB6oc5N9nj4
| 22,697
|
Make it easier to develop without a dev install
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,681
| 1,681
| 1,681
|
COLLABORATOR
| null |
# What does this PR do?
This PR makes the one quality check that previously failed without the full dev dependencies work without them, then makes it clear in all contributing guides that installing with the `quality` extra should be enough for most development.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22697/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22697",
"html_url": "https://github.com/huggingface/transformers/pull/22697",
"diff_url": "https://github.com/huggingface/transformers/pull/22697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22697.patch",
"merged_at": 1681216913000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22696
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22696/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22696/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22696/events
|
https://github.com/huggingface/transformers/issues/22696
| 1,661,321,335
|
I_kwDOCUB6oc5jBcB3
| 22,696
|
`no_repeat_ngram_size` has no effect for Flax model
|
{
"login": "gianlucadetommaso",
"id": 32386694,
"node_id": "MDQ6VXNlcjMyMzg2Njk0",
"avatar_url": "https://avatars.githubusercontent.com/u/32386694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianlucadetommaso",
"html_url": "https://github.com/gianlucadetommaso",
"followers_url": "https://api.github.com/users/gianlucadetommaso/followers",
"following_url": "https://api.github.com/users/gianlucadetommaso/following{/other_user}",
"gists_url": "https://api.github.com/users/gianlucadetommaso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianlucadetommaso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianlucadetommaso/subscriptions",
"organizations_url": "https://api.github.com/users/gianlucadetommaso/orgs",
"repos_url": "https://api.github.com/users/gianlucadetommaso/repos",
"events_url": "https://api.github.com/users/gianlucadetommaso/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianlucadetommaso/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @gianlucadetommaso \r\n\r\nYou are right. `no_repeat_ngram` is one of the various logit processors used during generation. While I can see it's defined in tensorflow : https://github.com/huggingface/transformers/blob/151425ddb29d4ad1a121e8cce62000a2ac52d3ba/src/transformers/generation/tf_utils.py#L1451 \r\n\r\nI don't think it's defined in flax: \r\nhttps://github.com/huggingface/transformers/blob/151425ddb29d4ad1a121e8cce62000a2ac52d3ba/src/transformers/generation/flax_utils.py#L488\r\n\r\n@sanchit-gandhi \r\n\r\nLet me know if you have your hands full. \r\nBeen meaning to get into flax for a while, can take this up. Shouldn't be very problematic. I'll just need to see how to implement the same processor in flax, might need a little guidance\r\n\r\n",
"@mayankagarwals @sanchit-gandhi It seems also `num_return_sequences` does not work for the Flax model, and indeed it seems missing from [transformers/src/transformers/generation/flax_utils.py](https://github.com/huggingface/transformers/blob/151425ddb29d4ad1a121e8cce62000a2ac52d3ba/src/transformers/generation/flax_utils.py#L488). \r\n\r\nWorth doing a more general comparison of functionalities, I guess. 😄 ",
"Hey, I noticed this issue randomly, so just dropping in to say it looks like #18707.\r\n\r\nThe implementation was attempted in #18769 (cc @gante) and dropped because it was memory heavy, but I think it should be doable with a XLA `while` loop without allocating huge tensors. I implemented similar logic in Elixir recently (with the same shape restrictions, since we also use XLA), so perhaps [this](https://github.com/elixir-nx/bumblebee/blob/6ae97b2ce2e99a863a658f0730334c0a4984fc3d/lib/bumblebee/text/generation.ex#L745-L779) helps. The code is fairly high-level, but if anything is not clear let me know :)",
"@jonatanklosko nice, I had given up on it after spending a few hours on it! I'll keep tabs on your implementation, in case no one picks it up in the near future :)\r\n\r\n@gianlucadetommaso @mayankagarwals feel free to pick up the task of adding missing logits processors to FLAX! In general, a close copy of TF's implementations will work on FLAX, since they both rely on XLA and have similar syntax.\r\n",
"@gante Got it. I'll take some time out and look into this. Quite interesting!",
"@gante \r\n\r\nWhile I was able to understand and reproduce your solution (kudos on the clean code), I had a question. \r\nThe following code works as expected\r\n```import tensorflow as tf\r\nbatch_size = 5\r\nngram_size = 4\r\nvocab_size = 52\r\nseq_len = 50\r\n\r\n\r\ninput_ids = tf.convert_to_tensor([[ 40, 28, 35, 36, 37, 14, 15, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51], [ 40, 28, 35, 36, 37, 14, 15, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51],[ 40, 28, 35, 36, 37, 14, 15, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51], [ 40, 28, 35, 36, 37, 14, 15, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51],[ 40, 28, 35, 36, 37, 14, 15, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51, 51, 51, 51, 51,\r\n 51, 51, 51, 51, 51]])\r\ntransition_tensor = tf.zeros((batch_size, ngram_size - 1, vocab_size, vocab_size), dtype=tf.bool)\r\n\r\n# if `input_ids` is padded this will do some useless computations, but that is fine (avoids XLA recompilation)\r\nfor i in range(seq_len - (ngram_size - 1)):\r\n ngrams = input_ids[:, i : i + ngram_size]\r\n\r\n # creates the indexing for the batch and the n-th member of the ngram\r\n batch_indexing, ngram_indexing = tf.meshgrid(tf.range(ngrams.shape[0]), tf.range(ngrams.shape[1] - 1))\r\n batch_indexing = tf.reshape(tf.transpose(batch_indexing), (-1,))\r\n 
ngram_indexing = tf.reshape(tf.transpose(ngram_indexing), (-1,))\r\n\r\n # creates the indexing for the current -> next token p airs\r\n curr_tokens = ngrams[:, :-1]\r\n next_tokens = ngrams[:, 1:]\r\n current_token_indexing = tf.reshape(curr_tokens, (-1,))\r\n next_token_indexing = tf.reshape(next_tokens, (-1,))\r\n\r\n # scatters the observed ngrams into the transition tensor\r\n update_indices = tf.stack(\r\n (batch_indexing, ngram_indexing, current_token_indexing, next_token_indexing), axis=1\r\n )\r\n\r\n transition_tensor = tf.tensor_scatter_nd_update(\r\n tensor=transition_tensor,\r\n indices=update_indices,\r\n updates=tf.ones(update_indices.shape[0], dtype=tf.bool),\r\n )\r\n\r\nprint(transition_tensor)\r\n```\r\n\r\nBut when done with higher dimensions, gives core dump on CPU. This is just eager evaluation without any xla. Any idea why this might be? Does tensorflow give core dump when it can't do large operations on CPU? \r\n```# batch_size = 5\r\n# ngram_size = 4\r\n# vocab_size = 50257\r\n# seq_len = 50\r\n# \r\n# \r\n# input_ids = tf.convert_to_tensor([[ 40, 2883, 6155, 351, 616, 13779, 3290, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256], [ 40, 2883, 6155, 351, 616, 13779, 3290, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256],[ 40, 2883, 6155, 351, 616, 13779, 3290, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 
50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256], [ 40, 2883, 6155, 351, 616, 13779, 3290, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256],[ 40, 2883, 6155, 351, 616, 13779, 3290, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256,\r\n# 50256, 50256, 50256, 50256, 50256]])\r\n```",
"@mayankagarwals I believe I got stuck precisely at that point :) Have you had a chance to look at @jonatanklosko's implementation? ",
"I just had a high-level overview, yet to dig in. I figured I'd first understand why this implementation wasn't working. I'll get to it soon\r\n\r\nThat is very odd behavior though. If there was an issue with resources, the code should have failed while allocating the transition tensor \r\n`transition_tensor = tf.zeros((batch_size, ngram_size - 1, vocab_size, vocab_size), dtype=tf.bool)\r\n`\r\n\r\nBut instead, it fails while performing the scatter and update op. \r\nTo debug further I broke down the scatter and update it into 15 different operations in a loop. It's failing for some and not failing for others. \r\n```\r\n for j in update_indices:\r\n print(j)\r\n tf.tensor_scatter_nd_update(transition_tensor, tf.expand_dims(j, axis=0), tf.constant([True], dtype=tf.bool))\r\n\r\n```\r\n\r\nAnyway, if by any chance you figure this out during your stint with TensorFlow. Do let me know, I'd be interested to know\r\n",
"Keeping this open in case you want to continue on @mayankagarwals - seems like you were making good progress!",
"Thanks, @sanchit-gandhi. I will find time to work on this soon, just got caught up with other things. Will update the thread as I make progress :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,681
| 1,691
| 1,691
|
NONE
| null |
### System Info
transformers = ^4.27.4, macOS, python = ^3.9.6
### Who can help?
@sanchit-gandhi @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have been walking through the generation example in [here](https://huggingface.co/blog/how-to-generate), but I am trying to use `FlaxGPT2LMHeadModel` instead of `GPT2LMHeadModel`.
Everything works up to when `no_repeat_ngram_size` is introduced. In the example, setting `no_repeat_ngram_size=2` changes the generated sentence from
```
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.
I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll
```
to
```
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.
I've been thinking about this for a while now, and I think it's time for me to take a break
```
However, when using `FlaxGPT2LMHeadModel` instead of `GPT2LMHeadModel`, the generated sentence with `no_repeat_ngram_size=2` remains exactly the same as the first message.
Here is a reproducing example.
```python
from transformers import FlaxGPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='jax')
beam_output = model.generate(
input_ids,
max_length=50,
num_beams=5,
no_repeat_ngram_size=2,
early_stopping=True
)
print(tokenizer.decode(beam_output.sequences.tolist()[0], skip_special_tokens=True))
# I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.
# I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll
```
Perhaps there is a bug in the interplay of `FlaxGPT2LMHeadModel` and `no_repeat_ngram_size`?
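For context, the processor that `no_repeat_ngram_size` enables implements a simple rule: a token is banned if appending it would repeat an n-gram already present in the sequence. A minimal pure-Python sketch of that lookup (the function name and shape are illustrative, not the library's API):

```python
def banned_next_tokens(generated, ngram_size):
    """Return the set of token ids that would complete an already-seen n-gram.

    `generated` is the list of token ids produced so far; a candidate token is
    banned if appending it would repeat any n-gram of length `ngram_size`.
    """
    if len(generated) + 1 < ngram_size:
        return set()
    # map each (n-1)-token prefix to the tokens that have followed it
    seen = {}
    for i in range(len(generated) - ngram_size + 1):
        prefix = tuple(generated[i : i + ngram_size - 1])
        seen.setdefault(prefix, set()).add(generated[i + ngram_size - 1])
    # tokens banned now are those that followed the current trailing prefix
    current_prefix = tuple(generated[-(ngram_size - 1):])
    return seen.get(current_prefix, set())
```

In the working PyTorch path, an equivalent check sets the logits of these banned tokens to `-inf` at every decoding step; the Flax generation loop appears to skip this step entirely.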
### Expected behavior
Everything works fine when `GPT2LMHeadModel` is used instead. Here an example.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
beam_output = model.generate(
input_ids,
max_length=50,
num_beams=5,
no_repeat_ngram_size=2,
early_stopping=True
)
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
# I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again.
# I've been thinking about this for a while now, and I think it's time for me to take a break
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22696/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22695
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22695/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22695/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22695/events
|
https://github.com/huggingface/transformers/issues/22695
| 1,661,258,184
|
I_kwDOCUB6oc5jBMnI
| 22,695
|
Can't import Mega for causal LM model
|
{
"login": "Tylersuard",
"id": 41713505,
"node_id": "MDQ6VXNlcjQxNzEzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tylersuard",
"html_url": "https://github.com/Tylersuard",
"followers_url": "https://api.github.com/users/Tylersuard/followers",
"following_url": "https://api.github.com/users/Tylersuard/following{/other_user}",
"gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions",
"organizations_url": "https://api.github.com/users/Tylersuard/orgs",
"repos_url": "https://api.github.com/users/Tylersuard/repos",
"events_url": "https://api.github.com/users/Tylersuard/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tylersuard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The example you are referring to comes from the documentation of the main branch of Transformers, not the released version. You will thus need a [source install](https://huggingface.co/docs/transformers/installation#install-from-source) to be able to execute it.",
"Thank you @sgugger "
] | 1,681
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### System Info
ImportError: cannot import name 'MegaForCausalLM' from 'transformers' (/usr/local/lib/python3.9/dist-packages/transformers/__init__.py)
This happens while running the example code here: https://huggingface.co/docs/transformers/main/model_doc/mega
```python
from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig
import torch

tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False
model = MegaForCausalLM.from_pretrained("mnaylor/mega-base-wikitext", config=config)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
!pip install transformers
from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig
```
### Expected behavior
It should import MegaForCausalLM.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22695/timeline
|
completed
| null | null |