| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/18976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18976/events
|
https://github.com/huggingface/transformers/issues/18976
| 1,368,861,288
|
I_kwDOCUB6oc5Rlypo
| 18,976
|
Top_P sampling samples an extra token when the cum sum of probabilities is exactly equal to top_p
|
{
"login": "ekagra-ranjan",
"id": 3116519,
"node_id": "MDQ6VXNlcjMxMTY1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekagra-ranjan",
"html_url": "https://github.com/ekagra-ranjan",
"followers_url": "https://api.github.com/users/ekagra-ranjan/followers",
"following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}",
"gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions",
"organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs",
"repos_url": "https://api.github.com/users/ekagra-ranjan/repos",
"events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @ekagra-ranjan 👋 \r\n\r\nEDIT: I've checked the [original paper](https://arxiv.org/pdf/1904.09751.pdf) and you are correct -- in your example, only two tokens should be up to consideration. Adding a quick fix for it.\r\n\r\n",
"@gante I can raise a PR right away for this. Should I go ahead?\r\n",
"Oh, my bad, already opened a PR 🙈 ",
"@gante Actually, I wanted to raise a PR with my implementation because it has an optimization of not requiring to clone an intermediate tensor and shifting things to right (as done in current implementation). I have raised the [PR](https://github.com/huggingface/transformers/pull/18984). Could you please review it?",
"@ekagra-ranjan that is fine, as long as you also edit the test for FLAX and TF (as in my PR), to ensure the three frameworks have the same behavior"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cpu (False)
- Tensorflow version (GPU?): 2.6.4 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
### Who can help?
@patrickvonplaten @Narsil @gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Top-p sampling samples an extra token when the cumulative sum of the token probabilities is exactly equal to the given top_p. E.g., if the input probabilities are `[0.3, 0.1, 0.1, 0.5]` and top_p = `0.8`, then only the 2 tokens with probabilities `0.5` and `0.3` should be sampled, as their sum is exactly equal to `0.8`. I believe this is the expected behavior of top-p sampling according to the [definition](https://huggingface.co/docs/transformers/main_classes/text_generation), which states that:
> top_p (float, optional, defaults to 1.0) — If set to float < 1, only the most probable tokens with probabilities that add **up to top_p** or higher are kept for generation.
I have created a notebook which reproduces this behavior. The notebook also contains a proposed implementation that fixes this, with the added optimization of not needing to clone an intermediate tensor and shift it left or right: https://www.kaggle.com/ekagra/hf-contrib-topp
I have checked locally that the proposed implementation passes the existing [unit test](https://github.com/huggingface/transformers/blob/f7196f2e63b14e9fbb4ad664e71912aab3b484cf/tests/generation/test_generation_logits_process.py#L162).
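The boundary case can be illustrated with a small, self-contained sketch (plain Python rather than the library's actual `TopPLogitsWarper`; the function name `top_p_keep` is my own, and the example uses exact binary fractions so the boundary comparison is not blurred by float rounding):

```python
def top_p_keep(probs, top_p):
    """Return a boolean keep-mask for nucleus (top-p) sampling.

    Tokens are visited in order of decreasing probability; a token is
    kept only while the mass accumulated *before* it is still below
    top_p, so the nucleus stops as soon as the running sum reaches
    top_p exactly -- no extra token is admitted at the boundary.
    """
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    keep = [False] * len(probs)
    cumulative = 0.0
    for i in order:
        if cumulative >= top_p:  # nucleus already covers top_p: stop
            break
        keep[i] = True
        cumulative += probs[i]
    return keep

# Sorted probabilities 0.5 and 0.25 sum to exactly top_p = 0.75, so
# only those two tokens are kept; an implementation that checks the
# cumulative sum *after* adding each token would keep a third.
mask = top_p_keep([0.25, 0.125, 0.125, 0.5], 0.75)
```

This mirrors the issue's `[0.3, 0.1, 0.1, 0.5]` / `top_p = 0.8` example, where only the `0.5` and `0.3` tokens should survive.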
### Your contribution
If this makes sense then I would be happy to raise a PR for this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18976/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18975/events
|
https://github.com/huggingface/transformers/issues/18975
| 1,368,743,440
|
I_kwDOCUB6oc5RlV4Q
| 18,975
|
How to parallelize a large model (like t5-11b) with transformers version 3.0.2
|
{
"login": "ZeyiLiao",
"id": 97815464,
"node_id": "U_kgDOBdSLqA",
"avatar_url": "https://avatars.githubusercontent.com/u/97815464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeyiLiao",
"html_url": "https://github.com/ZeyiLiao",
"followers_url": "https://api.github.com/users/ZeyiLiao/followers",
"following_url": "https://api.github.com/users/ZeyiLiao/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeyiLiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeyiLiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeyiLiao/subscriptions",
"organizations_url": "https://api.github.com/users/ZeyiLiao/orgs",
"repos_url": "https://api.github.com/users/ZeyiLiao/repos",
"events_url": "https://api.github.com/users/ZeyiLiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeyiLiao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"the naive `parallelize` was never supported by t5. just gpt2 and bart. and it'll be removed soon altogether as there are better solutions.\r\n\r\nedit: that was a wrong statement - only gpt2 and t5 have every been supported.\r\n\r\nyou can do 2 things:\r\n\r\n1. `accelerate` will automatically do naive parallelization for you https://github.com/huggingface/accelerate\r\n2. and of course deepspeed-zero is likely to perform faster as it'll utilize the gpus more efficiently https://huggingface.co/docs/transformers/main/main_classes/deepspeed#deepspeed-trainer-integration\r\n\r\nif you are just doing only inference also check out: deepspeed-inference\r\n\r\nAs this is not a bug I'm closing this Issue, but please don't hesitate to ask questions.",
"@stas00 , yeah, thanks a lot!\r\n\r\nAnd I wanna clarify that I want to use Version 3.0.2 transformers (since the code is using some specific function at V 3.0.2 and it would need a large amount of change if adapted to the latest version) and just check that `src/transformers `don't have deepspeed.py.",
"wait, I got it wrong. it is gpt2 and t5 that used to support `parallelize` - my apologies - I implemented bart support long time ago but it was never merged as we decided not to continue with this approach. So t5 should work just fine.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a26114777ee1c2802e91bd9cb26a3b39974d52ba/src/transformers/models/t5/modeling_t5.py#L209-L216\r\n\r\nI think perhaps you're trying to use a really old transformers version that haven't yet had t5 support for naive `parallelize` added.\r\n\r\nAs you can see this is really old and it indeed doesn't have `parallelize`\r\n\r\nhttps://github.com/huggingface/transformers/blob/v3.0.2/src/transformers/modeling_t5.py\r\n\r\nPerhaps you can move the function that you need from that old version to the modern code? \r\n\r\nYou can of course code your own integration with Deepspeed if you really have to. The Deepspeed site has lots of examples on how to do that.\r\n\r\nAny modern solutions like accelerate will require the current `transformers` versions.",
"@stas00 Okay! \r\n\r\nSo do you have some recommendations for how to find the corresponding version of the function at older version efficiently?\r\nLike for function```self._use_cache``` at transformers==3.0.2",
"In this case I think this is just `use_cache=True` here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4c2e983f44ce4d3b9c8502d42cc568e45897bd15/src/transformers/generation_utils.py#L892\r\n\r\nfrom the original:\r\nhttps://github.com/huggingface/transformers/blob/v3.0.2/src/transformers/generation_utils.py#L39\r\n\r\nPlease let me know if that's what you're after. and if not please show me which specific code has been moved or changed."
] | 1,662
| 1,663
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-189-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <Yes>
- Using distributed or parallel set-up in script?: <Try to>
### Who can help?
@stas00
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
git clone https://github.com/GXimingLu/neurologic_decoding.git
I modified the file ``./neurologic_decoding/seq2seq/decode.py`` a bit to try to parallelize the large model:
```
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
if model_name in ["t5-3b", "t5-11b"]:
    print(f'{model_name} is parallelizing')
    model.parallelize()
```
But this error was raised:
```
torch.nn.modules.module.ModuleAttributeError: 'T5ForConditionalGeneration' object has no attribute 'parallelize'
```
### Expected behavior
Parallelize T5-3b/11b with transformers 3.0.2. I know the `parallelize` function may not exist in the 3.0.2 version; maybe I should use DeepSpeed instead (if so, can you recommend some tutorials)?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18975/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18974/events
|
https://github.com/huggingface/transformers/issues/18974
| 1,368,736,454
|
I_kwDOCUB6oc5RlULG
| 18,974
|
AttributeError: 'DistributedDataParallel' object has no attribute 'generate'
|
{
"login": "HebaGamalElDin",
"id": 36745656,
"node_id": "MDQ6VXNlcjM2NzQ1NjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/36745656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HebaGamalElDin",
"html_url": "https://github.com/HebaGamalElDin",
"followers_url": "https://api.github.com/users/HebaGamalElDin/followers",
"following_url": "https://api.github.com/users/HebaGamalElDin/following{/other_user}",
"gists_url": "https://api.github.com/users/HebaGamalElDin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HebaGamalElDin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HebaGamalElDin/subscriptions",
"organizations_url": "https://api.github.com/users/HebaGamalElDin/orgs",
"repos_url": "https://api.github.com/users/HebaGamalElDin/repos",
"events_url": "https://api.github.com/users/HebaGamalElDin/events{/privacy}",
"received_events_url": "https://api.github.com/users/HebaGamalElDin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi, just unwrap the model:\r\n\r\nmodel.module.generate(inputs)\r\n\r\n(I didn't verify, but this should work)",
"> Hi, just unwrap the model:\r\n> \r\n> model.module.generate(inputs)\r\n> \r\n> (I didn't verify, but this should work)\r\n\r\nYes, this actually works, thank you @nitaytech .. I have just faced one more issue now.\r\n\r\nwhen I decode the batch after generation I get no prediction strings (means generated_text is always empty string), should I load the processor as well at the devices?\r\nhint: this behavior happened only with the DistributedDataParallel, It was working all together on a single GPU\r\n```\r\ndef test(processor: TrOCRProcessor, model: VisionEncoderDecoderModel, dataloader: DataLoader):\r\n output: dict[int, str] = []\r\n model.eval()\r\n with torch.no_grad():\r\n for i, batch in enumerate(dataloader):\r\n inputs: torch.Tensor = batch[\"input\"].cuda(non_blocking=True)\r\n\r\n generated_ids = model.module.generate(inputs)\r\n generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)\r\n\r\n ids = [t.item() for t in batch[\"idx\"]]\r\n output.extend(zip(ids, generated_text))\r\n return output\r\n```\r\nam I missing anything else?\r\n",
"The distributed model expects distribution of the function `forward` and not `generate`. In case you want to distribute inference, what you can do is create a class inheriting from `torch.nn.Module` with a `forward` function that interfaces TrOCR's `generate` function and then distribute an instance of that class.\r\n\r\n```python\r\nclass DitributionCompatibleTrOCR(torch.nn.Module):\r\n def __init__(self, trocr_model):\r\n self.trocr_model = trocr_model\r\n\r\n def forward(self, x):\r\n return self.trocr_model.generate(x)\r\n```\r\n\r\nYou will probably get a concatenation error since the distribution function will try to concatenate outputs from each GPU but I can't remember how I solved that 🥲.",
"I'd recommend using [HuggingFace Accelerate](https://github.com/huggingface/accelerate) for training TrOCR in a distributed set-up.\r\n\r\nYou can then use [unwrap_model](https://huggingface.co/docs/accelerate/v0.12.0/en/package_reference/accelerator#accelerate.Accelerator.unwrap_model) to turn the distributed module back into a regular nn.Module (on which you can call generate)",
"We do provide an example for that, see here: https://github.com/huggingface/transformers/blob/8edf1963103127247ae3ef96fc5ba6a96eb4a290/examples/pytorch/summarization/run_summarization_no_trainer.py#L675\r\n\r\nThis is taken from the example script for summarization, but it would be equivalent for TrOCR",
"> We do provide an example for that, see here:\r\n> \r\n> https://github.com/huggingface/transformers/blob/8edf1963103127247ae3ef96fc5ba6a96eb4a290/examples/pytorch/summarization/run_summarization_no_trainer.py#L675\r\n> \r\n> This is taken from the example script for summarization, but it would be equivalent for TrOCR\r\n\r\nYet, I already switched to huggingface Accelerate (as I am working on Sagemaker so I installed \"accelerate[sagemaker]\" version), however the same issue present. \r\n\r\nHere's the training script, that's already working properly on a single GPU.. I couldn't figure out the what is the root issue.\r\n\r\n> \r\n```\r\nimport os\r\nimport torch\r\nprint(f\"TORCH_VERSION: {torch.__version__}\")\r\nprint(f\"CUDA AVAILABILITY: {torch.cuda.is_available()} GPUs: {torch.cuda.get_device_name()}\")\r\nimport pandas as pd\r\nimport random\r\nimport math\r\nimport re\r\nimport numpy as np\r\nimport itertools\r\nfrom PIL import Image\r\nimport PIL.ImageOps\r\nimport cv2\r\nfrom smart_open import open as smart_open\r\nimport io\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import AdamW, TrOCRProcessor, VisionEncoderDecoderModel, get_scheduler\r\nfrom Data_pipeline import Context, HCRDataset, OCRDataLoad\r\nfrom Validation_Metrics import getWordLevelError, getCharacterLevelError\r\nfrom accelerate import Accelerator\r\nimport accelerate\r\naccelerator = Accelerator(kwargs_handlers=[accelerate.DistributedDataParallelKwargs(find_unused_parameters=True)])\r\naccelerator.print(f\"ACCELERATOR DEVICE:{accelerator.distributed_type}---- NUM OF PROCESSES: {accelerator.num_processes }\")\r\n\r\nfrom datasets import load_metric\r\ncer_metric = load_metric(\"cer\")\r\nwer_metric = load_metric(\"wer\")\r\n\r\n# LOAD MODEL\r\ndef load_model() -> VisionEncoderDecoderModel:\r\n model: VisionEncoderDecoderModel = VisionEncoderDecoderModel.from_pretrained('gagan3012/ArOCRv4')\r\n return model.to(accelerator.device)\r\n\r\n# SETUP MODEL CONFIGUATIONS\r\ndef 
init_model_for_training(model: VisionEncoderDecoderModel, processor: TrOCRProcessor):\r\n model.config.decoder_start_token_id = processor.tokenizer.cls_token_id\r\n model.config.pad_token_id = processor.tokenizer.pad_token_id\r\n model.config.vocab_size = model.config.decoder.vocab_size\r\n model.config.bos_token_id = processor.tokenizer.bos_token_id\r\n model.config.max_length = 162\r\n model.config.decoder.is_decoder = True\r\n model.config.decoder.add_cross_attention = True\r\n torch.cuda.manual_seed_all(42)\r\n model.config.num_beams = 4\r\n\r\ndef predict(processor: TrOCRProcessor, model: VisionEncoderDecoderModel, dataloader: DataLoader):\r\n output: dict[int, str] = []\r\n with torch.no_grad():\r\n for i, batch in enumerate(dataloader):\r\n inputs: torch.Tensor = batch[\"input\"].to(accelerator.device)\r\n\r\n generated_ids = model.generate(inputs)\r\n generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)\r\n\r\n ids = [t.item() for t in batch[\"idx\"]]\r\n output.extend(zip(ids, generated_text))\r\n\r\n return output\r\n\r\n\r\ndef validate(context: Context, print_wrong: bool = False) -> float:\r\n predictions = predict(context.processor, context.model, context.val_dataloader)\r\n assert len(predictions) > 0\r\n \r\n CER_avg = []\r\n WER_avg = []\r\n correct_count = 0\r\n wrong_count = 0\r\n for id, prediction in predictions:\r\n label = context.val_dataset.get_label(id)\r\n path = context.val_dataset.get_path(id)\r\n \r\n CER = getCharacterLevelError(label, prediction)\r\n WER = getWordLevelError(label, prediction)\r\n \r\n CER_avg.append(CER)\r\n WER_avg.append(WER)\r\n accelerator.print(f\"validation-batch--------------{id}-----------Label--------{label}---------Prediction-----------{prediction} -----CER----- {CER}----\")\r\n\r\n return round(sum(CER_avg)/len(CER_avg),2), round(sum(WER_avg)/len(WER_avg),2)\r\n\r\n# LOAD PRE_PROCESSOR\r\ndef load_processor() -> TrOCRProcessor:\r\n return 
TrOCRProcessor.from_pretrained('gagan3012/ArOCRv4')\r\n\r\ndef train(context, train_epochs, learning_rate):\r\n model = context.model\r\n optimizer = AdamW(model.parameters(), lr=learning_rate)\r\n \r\n num_training_steps = train_epochs * len(context.train_dataloader)\r\n lr_scheduler = get_scheduler(\"linear\", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)\r\n \r\n model, optimizer, context.training_dataloader,context.val_dataloader = accelerator.prepare(model, optimizer, context.train_dataloader, context.val_dataloader)\r\n \r\n overall_loss = 0.0\r\n overall_cer = 0.0\r\n overall_wer = 0.0\r\n for epoch in range(train_epochs):\r\n context.model.train() \r\n train_loss = 0.0\r\n min_cer = 1.0\r\n min_train_loss = 1.0\r\n for j, batch in enumerate(context.train_dataloader):\r\n inputs: torch.Tensor = batch[\"input\"].to(accelerator.device)\r\n labels: torch.Tensor = batch[\"label\"].to(accelerator.device)\r\n #print(inputs)\r\n #print(labels)\r\n \r\n outputs = model(pixel_values=inputs, labels=labels)\r\n loss = outputs.loss\r\n accelerator.backward(loss)\r\n #loss.backward()\r\n optimizer.step()\r\n \r\n lr_scheduler.step()\r\n optimizer.zero_grad()\r\n train_loss+=loss\r\n #accelerator.print(f\"Batch: {j}----Loss: {loss}\")\r\n overall_loss+=train_loss\r\n if (loss < min_train_loss) or (min_train_loss==1.0):\r\n min_train_loss = loss\r\n accelerator.print(f\"Epoch {epoch}-----Loss---{train_loss/len(context.train_dataloader)}--------- min-loss: {min_train_loss}\")\r\n # evaluate\r\n unwrapped_model = accelerator.unwrap_model(model)\r\n context.model = unwrapped_model\r\n cer, wer = validate(context)\r\n\r\n del loss, outputs, train_loss\r\n\r\n \r\n overall_cer+=cer\r\n overall_wer+=wer\r\n accelerator.print(f\"\\n---- overall loss: {overall_loss/train_epochs}\\n\\n\")\r\n accelerator.print(f\"\\n---- overall cer: {overall_cer/train_epochs}\\n\\n\")\r\n accelerator.print(f\"\\n---- overall wer: 
{overall_wer/train_epochs}\\n\\n\")\r\n\r\n\r\n\r\ndef main():\r\n batch_size = 8\r\n train_epochs = 10\r\n learning_rate = 0.001\r\n checkpoints_path = \"checkpoints\"\r\n \r\n processor = load_processor()\r\n (x_train,y_train),(x_valid,y_valid),(x_test,y_test) = OCRDataLoad()\r\n train_dataset = HCRDataset(x_train, y_train, processor)\r\n\r\n train_dataloader = DataLoader(train_dataset, batch_size, shuffle=True, num_workers=4)#, sampler=train_sampler)\r\n \r\n val_dataset = HCRDataset(x_valid, y_valid, processor)\r\n\r\n val_dataloader = DataLoader(val_dataset, batch_size, shuffle=False, num_workers=4)#, sampler=val_sampler)\r\n \r\n # SageMaker data parallel: Wrap the PyTorch model with the library's DDP\r\n model = load_model()\r\n init_model_for_training(model, processor)\r\n \r\n #model = DDP(model, broadcast_buffers=False)\r\n context = Context(model, processor, train_dataset, train_dataloader, val_dataset, val_dataloader)\r\n train(context, train_epochs, learning_rate)\r\n unwraped_model = accelerator.unwrap_model(context.model)\r\n # SageMaker data parallel: Save model on master node.\r\n unwraped_model.save_pretrained(checkpoints_path)\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### System Info
transformer version: 4.21.1
python: 3.8
pytorch: 1.12
### Who can help?
@NielsRogge
### Reproduction
Steps:
1. Load the TrOCR model from Hugging Face
2. Copy the model to all GPUs
3. Predict the validation set using the generate function
### Expected behavior
I'm training the TrOCR model on my customized Arabic dataset on a SageMaker instance. I'm running a distributed data training job and have wrapped the model across all GPUs using PyTorch as follows:
```
from torch.nn.parallel import DistributedDataParallel as DDP
model = DDP(model)
```
When I run the validation function, it raises this error
> "AttributeError: 'DistributedDataParallel' object has no attribute 'generate'"
when I call the generate function:
```
inputs: torch.Tensor = batch["input"].to('cuda')
generated_ids = model.generate(inputs)
```
What could be the problem, please?
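The root cause is that `DistributedDataParallel` only proxies `forward`; every other method of the wrapped model has to be reached through the `.module` attribute. A toy illustration of that wrapping pattern (plain-Python stand-ins I made up for this sketch, not the real DDP or TrOCR classes):

```python
class ToyDDP:
    """Minimal stand-in for a DDP-style wrapper: it exposes forward()
    but none of the wrapped model's other methods."""
    def __init__(self, module):
        self.module = module  # the wrapped model stays reachable here

    def forward(self, x):
        return self.module.forward(x)


class ToyModel:
    def forward(self, x):
        return x + 1

    def generate(self, x):
        # stand-in for transformers' generate()
        return [x, x + 1, x + 2]


wrapped = ToyDDP(ToyModel())
# wrapped.generate(0) would raise AttributeError, mirroring the issue;
# the unwrapped module is still reachable via .module:
ids = wrapped.module.generate(0)
```

With a real DDP instance the same idea applies, i.e. `model.module.generate(inputs)`, as suggested in the first comment.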
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18974/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18974/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18973/events
|
https://github.com/huggingface/transformers/pull/18973
| 1,368,666,828
|
PR_kwDOCUB6oc4-uIu2
| 18,973
|
Adding changes to add the Pegasus Onnx Config.
|
{
"login": "pramodith",
"id": 16939722,
"node_id": "MDQ6VXNlcjE2OTM5NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramodith",
"html_url": "https://github.com/pramodith",
"followers_url": "https://api.github.com/users/pramodith/followers",
"following_url": "https://api.github.com/users/pramodith/following{/other_user}",
"gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pramodith/subscriptions",
"organizations_url": "https://api.github.com/users/pramodith/orgs",
"repos_url": "https://api.github.com/users/pramodith/repos",
"events_url": "https://api.github.com/users/pramodith/events{/privacy}",
"received_events_url": "https://api.github.com/users/pramodith/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18973). All of your documentation changes will be reflected on that endpoint.",
"@lewtun I tested the export and validated the onnx model using the following code.\r\n\r\n```\r\nfrom transformers.models.pegasus.configuration_pegasus import PegasusOnnxConfig, PegasusConfig\r\nfrom transformers.models.pegasus import PegasusModel, PegasusTokenizer, PegasusForConditionalGeneration\r\nfrom transformers.onnx import export, validate_model_outputs\r\nfrom pathlib import Path\r\n\r\ndef check_onnx_model(task):\r\n if task == \"default\":\r\n config = PegasusConfig.from_pretrained(\"google/pegasus-x-base\")\r\n model = PegasusModel.from_pretrained(\"google/pegasus-x-base\")\r\n tokenizer = PegasusTokenizer.from_pretrained(\"google/pegasus-x-base\")\r\n elif task == \"seq2seq-lm\":\r\n config = PegasusConfig.from_pretrained(\"google/pegasus-xsum\")\r\n model = PegasusForConditionalGeneration.from_pretrained(\"google/pegasus-xsum\", add_cross_attention=True)\r\n tokenizer = PegasusTokenizer.from_pretrained(\"google/pegasus-xsum\")\r\n else:\r\n config = PegasusConfig.from_pretrained(\"google/pegasus-xsum\")\r\n model = PegasusForConditionalGeneration.from_pretrained(\"google/pegasus-xsum\")\r\n tokenizer = PegasusTokenizer.from_pretrained(\"google/pegasus-xsum\")\r\n\r\n onnx_config = PegasusOnnxConfig(config, task=task, use_past=True)\r\n onnx_path = Path(\"model.onnx\")\r\n onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_path)\r\n print(onnx_inputs)\r\n print(onnx_outputs)\r\n print(validate_model_outputs(onnx_config,tokenizer,model,onnx_path,onnx_outputs,onnx_config.atol_for_validation))\r\n\r\ncheck_onnx_model(\"seq2seq-lm\")\r\n```\r\n\r\nFor the case of \"seq2seq-lm\" I got the following error when I set `use_past = True`. `ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.00010585784912109375\r\n`\r\nIs the difference in magnitude of order 1e-4 acceptable?",
"\r\n> For the case of \"seq2seq-lm\" I got the following error when I set `use_past = True`. `ValueError: Outputs values don't match between the reference model and ONNX exported model: Got max absolute difference of: 0.00010585784912109375 ` Is the difference in magnitude of order 1e-4 acceptable?\r\n\r\nDepending on models, 1e-3 is acceptable. I wouldn't go further.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Thanks for adding ONNX support of this architecture @pramodith 🔥 !\r\n> \r\n> The PR is very clean and I've left a small suggestion to tweak the tolerance level. Once you've included this, could you confirm the slow tests pass with:\r\n> \r\n> ```\r\n> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k \"pegasus\"\r\n> ```\r\n\r\nHey @pramodith just checking if you were able to run the slow tests successfully?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,670
| 1,670
|
NONE
| null |
# What does this PR do?
This pull request makes the required changes to support running the Pegasus model in ONNX Runtime.
A related PR #18305 was closed because of some git issues I was running into and was replaced by this one.
Fixes https://github.com/huggingface/transformers/issues/16308
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Linked to https://github.com/huggingface/transformers/issues/16308
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@lewtun
@ChainYo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18973/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18973",
"html_url": "https://github.com/huggingface/transformers/pull/18973",
"diff_url": "https://github.com/huggingface/transformers/pull/18973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18973.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18972/events
|
https://github.com/huggingface/transformers/pull/18972
| 1,368,618,237
|
PR_kwDOCUB6oc4-t_mI
| 18,972
|
Revert "TF: unpin maximum TF version"
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18972). All of your documentation changes will be reflected on that endpoint."
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
Reverts huggingface/transformers#18917 to make the CI green.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18972/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18972",
"html_url": "https://github.com/huggingface/transformers/pull/18972",
"diff_url": "https://github.com/huggingface/transformers/pull/18972.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18972.patch",
"merged_at": 1662815506000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18971
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18971/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18971/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18971/events
|
https://github.com/huggingface/transformers/issues/18971
| 1,368,594,577
|
I_kwDOCUB6oc5RkxiR
| 18,971
|
generate() - documentation of `length_penalty` is misleading (and actually wrong)
|
{
"login": "nitaytech",
"id": 56558412,
"node_id": "MDQ6VXNlcjU2NTU4NDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/56558412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitaytech",
"html_url": "https://github.com/nitaytech",
"followers_url": "https://api.github.com/users/nitaytech/followers",
"following_url": "https://api.github.com/users/nitaytech/following{/other_user}",
"gists_url": "https://api.github.com/users/nitaytech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nitaytech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitaytech/subscriptions",
"organizations_url": "https://api.github.com/users/nitaytech/orgs",
"repos_url": "https://api.github.com/users/nitaytech/repos",
"events_url": "https://api.github.com/users/nitaytech/events{/privacy}",
"received_events_url": "https://api.github.com/users/nitaytech/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @nitaytech 👋 we are aware of this issue, it is also being tracked here -- https://github.com/huggingface/transformers/issues/18208"
] | 1,662
| 1,663
| 1,663
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.15.0-1014-azure-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten @Narsil @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
----
### Expected behavior
According to the documentation of the `generate()` function (transformers/generation_utils.py), the description of `length_penalty` is as follows:
> length_penalty (`float`, *optional*, defaults to 1.0):
> Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length.
> 0.0 means no penalty. Set to values < 0.0 in order to encourage the model to generate longer
> sequences, to a value > 0.0 in order to encourage the model to produce shorter sequences.
However, this documentation is not aligned with the implementation of `length_penalty` in the methods which use it (like `BeamHypotheses.add()`), or with the documentation of those methods:
Implementation:
`score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty)`
**Note: `sum_logprobs` is NEGATIVE(!!!), so dividing it by a larger number makes the score bigger (i.e. closer to zero).**
Documentation:
> length_penalty (`float`, *optional*, defaults to 1.0):
> Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the
> model to generate shorter sequences, to a value > 1.0 in order to encourage the model to produce longer
> sequences.
I think the documentation of `BeamHypotheses.add()` is more correct and less misleading. I do understand that `sum_logprobs` is the right log probability to represent the sequence; however, since the common practice for generation is to use the *mean* logprob, `sum_logprobs / hyp.shape[-1]`, writing "1.0 means that the beam score is penalized by the sequence length" is misleading. Moreover, "Set to values < 0.0 in order to encourage the model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter sequences." is just not correct, since `sum_logprobs` is NEGATIVE(!!!), so dividing it by a larger number makes the score bigger (bigger length_penalty --> bigger denominator --> bigger (negative) score, closer to zero). The documentation should be changed or explained more carefully (and be aligned with the implementations).
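To make the direction of the effect concrete, here is a small self-contained sketch (the beam numbers are made up purely for illustration, not taken from a real model) that scores two hypothetical beams with the same formula as the quoted implementation:

```python
# Two hypothetical beams with negative summed log-probabilities
# (made-up numbers, not from a real model).
short_sum_logprobs, short_len = -6.0, 10  # shorter hypothesis
long_sum_logprobs, long_len = -9.0, 20    # longer hypothesis

def beam_score(sum_logprobs, length, length_penalty):
    # Same formula as in BeamHypotheses.add():
    # score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty)
    return sum_logprobs / (length ** length_penalty)

for lp in (0.0, 1.0, 2.0):
    s = beam_score(short_sum_logprobs, short_len, lp)
    l = beam_score(long_sum_logprobs, long_len, lp)
    winner = "short" if s > l else "long"
    print(f"length_penalty={lp}: short={s:.4f}, long={l:.4f} -> {winner} wins")
```

With `length_penalty=0.0` the raw (negative) sums are compared and the shorter beam wins; with larger penalties the longer beam's score moves closer to zero and wins, which is the opposite of what the `generate()` docstring currently suggests.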
I would change the documentation to something like:
> length_penalty (`float`, *optional*, defaults to 1.0):
> Exponential penalty to the length. **0.0 means no penalty. 1.0 means the score of each sequence is the
> log probability divided by the sequence length (the mean log probability, which is the common practice).**
> Set to values < 1.0 in order to encourage the model to generate shorter sequences,
> to a value > 1.0 in order to encourage the model to produce longer sequences.
Thanks
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18971/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18970
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18970/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18970/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18970/events
|
https://github.com/huggingface/transformers/issues/18970
| 1,368,497,239
|
I_kwDOCUB6oc5RkZxX
| 18,970
|
add a python module called "loss.py" for custom losses published in papers
|
{
"login": "marzi9696",
"id": 68329143,
"node_id": "MDQ6VXNlcjY4MzI5MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/68329143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marzi9696",
"html_url": "https://github.com/marzi9696",
"followers_url": "https://api.github.com/users/marzi9696/followers",
"following_url": "https://api.github.com/users/marzi9696/following{/other_user}",
"gists_url": "https://api.github.com/users/marzi9696/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marzi9696/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marzi9696/subscriptions",
"organizations_url": "https://api.github.com/users/marzi9696/orgs",
"repos_url": "https://api.github.com/users/marzi9696/repos",
"events_url": "https://api.github.com/users/marzi9696/events{/privacy}",
"received_events_url": "https://api.github.com/users/marzi9696/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### Feature request
I'm currently working on a text generation project at work, which led me to a new loss called "unlikelihood loss".
Thankfully, an implementation already existed on the internet.
But I was thinking it would be neat to create a separate module just for loss functions, especially custom ones published in papers.
I think that would help people out a lot.
I would also like to be the first to contribute to this new feature if you think that would be helpful.
[link to the paper](https://arxiv.org/pdf/1908.04319.pdf)
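As one concrete candidate for such a module, here is a rough, unvetted sketch of a token-level unlikelihood loss; the function name and signature are hypothetical, and this is only my reading of the linked paper, not a reference implementation:

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, negative_candidates):
    """Hypothetical sketch: penalize probability mass on unwanted tokens.

    logits: (batch, vocab) unnormalized scores for one position.
    negative_candidates: (batch, k) token ids that should become unlikely.
    """
    probs = F.softmax(logits, dim=-1)
    # Gather p(c) for each negative candidate and push it toward 0
    # via -log(1 - p(c)); clamp for numerical stability.
    p_neg = probs.gather(1, negative_candidates)
    return -torch.log((1.0 - p_neg).clamp(min=1e-6)).mean()
```

A dedicated module could collect several such functions with a docstring pointing back to the paper each one comes from.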
### Motivation
I always run into problems when I can't understand the mathematics in a paper deeply enough to implement it myself.
And Hugging Face has already helped a lot.
I think it would be a neat feature to create a separate loss module or script, located in the utils or models directory.
That would help people like me who struggle with implementations.
### Your contribution
Yes, of course.
I would be glad to contribute to this new feature.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18970/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18969
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18969/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18969/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18969/events
|
https://github.com/huggingface/transformers/issues/18969
| 1,368,493,222
|
I_kwDOCUB6oc5RkYym
| 18,969
|
Choice of variable name in custom model affects model initialization
|
{
"login": "urmeya",
"id": 64949494,
"node_id": "MDQ6VXNlcjY0OTQ5NDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/64949494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urmeya",
"html_url": "https://github.com/urmeya",
"followers_url": "https://api.github.com/users/urmeya/followers",
"following_url": "https://api.github.com/users/urmeya/following{/other_user}",
"gists_url": "https://api.github.com/users/urmeya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urmeya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urmeya/subscriptions",
"organizations_url": "https://api.github.com/users/urmeya/orgs",
"repos_url": "https://api.github.com/users/urmeya/repos",
"events_url": "https://api.github.com/users/urmeya/events{/privacy}",
"received_events_url": "https://api.github.com/users/urmeya/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I am very confused as to what the bug you think you have here is. You are trying to load weights of a checkpoint in a model that does not match (`\"bert-base-ubcased\"` is a BERT model with no head, so it does not expect a `bert` attribute). Using the base model prefix is the magic that Transformers uses behind the scenes to load those checkpoints in model with heads. ",
"In the two code implementations I have pasted, the only difference is a variable name (`self.bert` in first case and `self.basemodel` in second case).\r\n\r\nMy query is: if the model keys get mapped correctly in first case, why are they not mapping in second case? \r\n\r\nPlease compare the warning messages I have pasted for the two cases if this description is not making sense. \r\n",
"> My query is: if the model keys get mapped correctly in first case, why are they not mapping in second case?\r\n\r\nYour description shows the exact opposite: there is a warning in the first case and not in the second case. Please clarify what it is you are asking as I don't understand your question.",
"I missed word 'not' there. Correcting it now.",
"@sgugger and @LysandreJik , please find more concise version of my query below. \r\n\r\nI am trying to write a custom model class that can be used to finetune transformers language model like BERT with custom head for some downstream task.\r\n\r\nThe code below is my attempt to write such class. It uses `bert-base-uncased` as base model and adds custom layer(s) (just a linear layer for this example case). The code below loads the pretrained `bert-base-uncased` weights for finetuning and also randomly initiates the custom head layer for training. \r\n\r\nPlease let me know if it is a correct way to write custom model class for finetuning.\r\nThe problem I am facing is that the code below stops working (i.e. doesn't load pretrained weights but randomly initializes all of them) if I just change the variable name `self.bert` to any other name. I want to know if it is an expected behavior. If yes, for any specific model on huggingface hub, how can I find which variable name (similar to `self.bert`) one is supposed to use?\r\n\r\n```python\r\nfrom transformers import AutoModel, AutoConfig, PreTrainedModel\r\nfrom transformers.modeling_outputs import SequenceClassifierOutput\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nclass CustomModel(PreTrainedModel):\r\n def __init__(self, config, num_labels=2, dropout_prob=0.3):\r\n super(CustomModel, self).__init__(config)\r\n self.num_labels = num_labels\r\n self.bert = AutoModel.from_config(config)\r\n self.dropout = nn.Dropout(dropout_prob)\r\n self.classifier = nn.Linear(config.hidden_size, num_labels)\r\n \r\n def _init_weights(self, module):\r\n if isinstance(module, (nn.Linear, nn.Embedding)):\r\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\r\n if isinstance(module, nn.Linear) and module.bias is not None:\r\n module.bias.data.zero_()\r\n \r\n def forward(self, input_ids, attention_mask, labels=None):\r\n \r\n outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)\r\n # 
sequence_output = self.dropout(outputs[0])\r\n pooled_output = outputs[1]\r\n \r\n logits = self.classifier(pooled_output)\r\n \r\n loss = None\r\n if labels is not None:\r\n loss_fct = nn.CrossEntropyLoss()\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions)\r\n \r\ncheckpoint = \"bert-base-uncased\"\r\nconfig = AutoConfig.from_pretrained(checkpoint)\r\nmodel = CustomModel.from_pretrained(pretrained_model_name_or_path=checkpoint, config=config, num_labels=2, dropout_prob=0.3)\r\n```",
"First note that such a generic class goes against Transformers design principles, which is one class per model. So, it's logical you would have to change the name of the attribute for each model with head you want to write.\r\n\r\nOtherwise you can try setting the model using `self.base_model_prefix` (which will be `\"bert\"` for BERT, \"roberta\" for RoBERTa etc.) inside your custom model, if you want your class to be generic.",
"I had tried setting `self.base_model_prefix` but it does not work as it adds prefix on top of variable name (e.g. `bert.basemodel.encoder.layer.9.output.LayerNorm.weight` if the variable name is `self.basemodel`)\r\n\r\nI do get your bigger point that using generic class like `PreTrainedModel` is against transformers design principles. I will write the model classes inheriting from specific model classes like `BertPreTrainedModel` / `RobertaPreTrainedModel` and strictly using corresponding variable name like `self.bert`/`self.roberta`.\r\n",
"Hi! I'm facing the same problem. Did you find any solution, @urmeya ?"
] | 1,662
| 1,675
| 1,663
|
NONE
| null |
### System Info
transformers version 4.17.0
python version 3.7.11
platform Ubuntu
### Who can help?
@LysandreJik @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to build a custom model by using one of the transformer language models as the base model and then defining a custom head. Below is an example code that works.
```python
from transformers import AutoModel, AutoConfig, PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput
import torch
import torch.nn as nn
class CustomModel(PreTrainedModel):
def __init__(self, config, num_labels=2, dropout_prob=0.3):
super(CustomModel, self).__init__(config)
self.num_labels = num_labels
self.bert = AutoModel.from_config(config)
self.dropout = nn.Dropout(dropout_prob)
self.classifier = nn.Linear(config.hidden_size, num_labels)
def _init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
def forward(self, input_ids, attention_mask, labels=None):
outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
# sequence_output = self.dropout(outputs[0])
pooled_output = outputs[1]
logits = self.classifier(pooled_output)
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions)
checkpoint = "bert-base-uncased"
config = AutoConfig.from_pretrained(checkpoint)
model = CustomModel.from_pretrained(pretrained_model_name_or_path=checkpoint, config=config, num_labels=2, dropout_prob=0.3)
```
Here I have used `bert` as the base model and the variable name is also `self.bert`. I get the following warning, which I think is okay.
```
Some weights of the model checkpoint at bert-base-uncased were not used when initializing CustomModel: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing CustomModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CustomModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CustomModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'bert.embeddings.position_ids', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
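To illustrate why the attribute name matters here, below is a hypothetical, heavily simplified sketch of the prefix matching that checkpoint loading performs (the real logic inside `from_pretrained` is more involved; `match_checkpoint_key` and its signature are invented for illustration):

```python
def match_checkpoint_key(ckpt_key, model_keys, base_model_prefix="bert"):
    """Simplified sketch: find the model parameter a checkpoint key loads into.

    Tries the key as-is, then with the base model prefix stripped or added.
    Returns the matching model key, or None if the weight cannot be placed.
    """
    candidates = [ckpt_key]
    if ckpt_key.startswith(base_model_prefix + "."):
        candidates.append(ckpt_key[len(base_model_prefix) + 1:])  # strip prefix
    else:
        candidates.append(f"{base_model_prefix}.{ckpt_key}")      # add prefix
    for cand in candidates:
        if cand in model_keys:
            return cand
    return None

# Attribute named `self.bert`: checkpoint key matches the model key directly.
print(match_checkpoint_key("bert.pooler.dense.weight",
                           {"bert.pooler.dense.weight"}))
# Attribute named `self.basemodel`: neither candidate matches, so the
# weight is left out and the parameter stays randomly initialized.
print(match_checkpoint_key("bert.pooler.dense.weight",
                           {"basemodel.pooler.dense.weight"}))
```

Under this sketch, renaming the attribute changes every model key's prefix, so none of the checkpoint keys find a home, which matches the second warning below.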
The problem arises when I just change the variable name `self.bert` to any other name, like `self.basemodel`, as in the following code
```python
from transformers import AutoModel, AutoConfig, PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput
import torch
import torch.nn as nn
class CustomModel(PreTrainedModel):
def __init__(self, config, num_labels=2, dropout_prob=0.3):
super(CustomModel, self).__init__(config)
self.num_labels = num_labels
self.basemodel = AutoModel.from_config(config)
self.dropout = nn.Dropout(dropout_prob)
self.classifier = nn.Linear(config.hidden_size, num_labels)
def _init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
def forward(self, input_ids, attention_mask, labels=None):
outputs = self.basemodel(input_ids=input_ids, attention_mask=attention_mask)
# sequence_output = self.dropout(outputs[0])
pooled_output = outputs[1]
logits = self.classifier(pooled_output)
loss = None
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
return SequenceClassifierOutput(loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions)
checkpoint = "bert-base-uncased"
config = AutoConfig.from_pretrained(checkpoint)
model = CustomModel.from_pretrained(pretrained_model_name_or_path=checkpoint, config=config, num_labels=2, dropout_prob=0.3)
```
Here I get the following warning:
```
Some weights of the model checkpoint at bert-base-uncased were not used when initializing CustomModel: ['bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.9.attention.self.value.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.query.bias', 'cls.predictions.decoder.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.dense.weight', 
'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.embeddings.position_embeddings.weight', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.3.attention.self.query.weight', 
'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.embeddings.token_type_embeddings.weight', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'cls.predictions.transform.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.4.output.dense.weight', 'bert.embeddings.word_embeddings.weight', 
'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.3.output.dense.bias', 'cls.seq_relationship.weight', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.11.intermediate.dense.weight', 'cls.predictions.transform.LayerNorm.bias', 'bert.encoder.layer.11.output.dense.bias', 'cls.predictions.bias', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.pooler.dense.bias', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.10.output.dense.weight', 
'bert.encoder.layer.9.attention.output.dense.bias', 'bert.embeddings.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.pooler.dense.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.self.value.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.bias', 
'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.self.query.weight']
- This IS expected if you are initializing CustomModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CustomModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CustomModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['basemodel.encoder.layer.5.output.dense.bias', 'basemodel.encoder.layer.5.attention.output.dense.weight', 'basemodel.encoder.layer.5.attention.self.query.bias', 'basemodel.pooler.dense.weight', 'basemodel.encoder.layer.9.attention.self.query.bias', 'basemodel.encoder.layer.0.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.10.attention.self.query.weight', 'basemodel.encoder.layer.10.intermediate.dense.weight', 'basemodel.encoder.layer.10.attention.self.key.weight', 'basemodel.encoder.layer.10.attention.output.dense.bias', 'basemodel.encoder.layer.0.attention.output.dense.bias', 'basemodel.encoder.layer.7.intermediate.dense.weight', 'basemodel.encoder.layer.0.output.dense.weight', 'basemodel.encoder.layer.0.attention.self.key.weight', 'basemodel.encoder.layer.3.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.11.attention.self.value.bias', 'basemodel.encoder.layer.0.attention.self.key.bias', 'basemodel.encoder.layer.6.output.dense.weight', 'basemodel.encoder.layer.1.attention.self.query.bias', 'basemodel.encoder.layer.6.attention.output.dense.bias', 'basemodel.encoder.layer.9.attention.self.query.weight', 'basemodel.encoder.layer.1.attention.output.dense.weight', 'basemodel.encoder.layer.8.attention.self.value.weight', 'basemodel.encoder.layer.0.output.dense.bias', 'basemodel.encoder.layer.4.attention.self.value.bias', 'basemodel.encoder.layer.1.attention.self.key.bias', 'basemodel.encoder.layer.5.attention.self.key.bias', 'basemodel.encoder.layer.9.intermediate.dense.bias', 'basemodel.encoder.layer.5.intermediate.dense.bias', 'basemodel.encoder.layer.7.attention.self.value.bias', 'basemodel.encoder.layer.4.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.6.attention.output.dense.weight', 'basemodel.encoder.layer.7.output.dense.bias', 'basemodel.encoder.layer.3.attention.self.query.bias', 
'basemodel.encoder.layer.4.attention.output.dense.bias', 'basemodel.encoder.layer.8.attention.self.value.bias', 'basemodel.encoder.layer.0.attention.self.value.bias', 'basemodel.encoder.layer.8.attention.self.query.weight', 'basemodel.encoder.layer.6.intermediate.dense.bias', 'basemodel.encoder.layer.10.output.dense.weight', 'basemodel.encoder.layer.2.attention.self.key.bias', 'basemodel.encoder.layer.5.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.9.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.1.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.10.attention.self.query.bias', 'basemodel.encoder.layer.6.output.LayerNorm.weight', 'basemodel.encoder.layer.11.attention.self.query.weight', 'basemodel.encoder.layer.3.attention.self.value.weight', 'basemodel.encoder.layer.4.output.dense.weight', 'basemodel.encoder.layer.11.attention.self.query.bias', 'basemodel.encoder.layer.6.attention.self.value.weight', 'basemodel.encoder.layer.4.intermediate.dense.weight', 'basemodel.encoder.layer.3.output.dense.weight', 'basemodel.encoder.layer.2.attention.output.dense.weight', 'basemodel.encoder.layer.2.attention.self.query.weight', 'basemodel.encoder.layer.9.attention.output.dense.weight', 'basemodel.encoder.layer.11.intermediate.dense.weight', 'basemodel.encoder.layer.9.attention.self.key.weight', 'basemodel.encoder.layer.10.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.2.output.LayerNorm.bias', 'basemodel.encoder.layer.9.attention.self.value.bias', 'basemodel.encoder.layer.5.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.8.output.dense.weight', 'basemodel.encoder.layer.3.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.5.output.LayerNorm.bias', 'basemodel.encoder.layer.9.attention.self.key.bias', 'basemodel.encoder.layer.7.attention.output.dense.weight', 'basemodel.encoder.layer.4.attention.output.dense.weight', 'basemodel.encoder.layer.2.output.dense.bias', 
'basemodel.encoder.layer.9.output.LayerNorm.weight', 'basemodel.encoder.layer.3.attention.output.dense.bias', 'basemodel.encoder.layer.11.attention.self.value.weight', 'basemodel.encoder.layer.0.intermediate.dense.weight', 'basemodel.encoder.layer.3.intermediate.dense.weight', 'basemodel.encoder.layer.6.attention.self.value.bias', 'basemodel.encoder.layer.3.attention.output.dense.weight', 'basemodel.encoder.layer.6.attention.self.query.bias', 'basemodel.encoder.layer.2.attention.self.key.weight', 'basemodel.encoder.layer.5.attention.self.key.weight', 'basemodel.encoder.layer.7.output.dense.weight', 'basemodel.encoder.layer.10.output.dense.bias', 'basemodel.encoder.layer.1.attention.self.key.weight', 'basemodel.embeddings.position_ids', 'basemodel.encoder.layer.2.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.10.output.LayerNorm.weight', 'basemodel.encoder.layer.1.attention.self.value.bias', 'basemodel.encoder.layer.7.attention.self.key.weight', 'basemodel.encoder.layer.6.attention.self.key.weight', 'basemodel.encoder.layer.9.intermediate.dense.weight', 'basemodel.embeddings.LayerNorm.weight', 'basemodel.encoder.layer.2.intermediate.dense.weight', 'basemodel.encoder.layer.8.intermediate.dense.bias', 'basemodel.encoder.layer.4.attention.self.key.bias', 'classifier.bias', 'basemodel.encoder.layer.11.attention.self.key.weight', 'basemodel.encoder.layer.0.attention.self.query.bias', 'basemodel.pooler.dense.bias', 'basemodel.encoder.layer.5.attention.output.dense.bias', 'basemodel.encoder.layer.11.attention.output.dense.bias', 'basemodel.encoder.layer.7.attention.self.value.weight', 'basemodel.encoder.layer.1.attention.self.value.weight', 'basemodel.encoder.layer.0.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.4.attention.self.key.weight', 'basemodel.encoder.layer.6.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.2.intermediate.dense.bias', 'basemodel.encoder.layer.10.attention.output.dense.weight', 
'basemodel.encoder.layer.11.intermediate.dense.bias', 'basemodel.encoder.layer.6.attention.self.query.weight', 'basemodel.encoder.layer.8.output.LayerNorm.weight', 'basemodel.encoder.layer.7.attention.self.key.bias', 'basemodel.encoder.layer.0.output.LayerNorm.bias', 'basemodel.encoder.layer.11.attention.self.key.bias', 'basemodel.encoder.layer.5.attention.self.value.weight', 'basemodel.encoder.layer.4.attention.self.query.weight', 'basemodel.encoder.layer.7.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.3.attention.self.key.weight', 'basemodel.encoder.layer.1.output.LayerNorm.weight', 'basemodel.encoder.layer.3.attention.self.key.bias', 'basemodel.encoder.layer.0.output.LayerNorm.weight', 'basemodel.encoder.layer.1.attention.output.dense.bias', 'basemodel.encoder.layer.1.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.6.output.LayerNorm.bias', 'basemodel.encoder.layer.8.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.11.attention.output.LayerNorm.weight', 'classifier.weight', 'basemodel.embeddings.token_type_embeddings.weight', 'basemodel.encoder.layer.9.output.LayerNorm.bias', 'basemodel.encoder.layer.0.intermediate.dense.bias', 'basemodel.encoder.layer.4.output.LayerNorm.weight', 'basemodel.encoder.layer.9.attention.output.dense.bias', 'basemodel.encoder.layer.2.attention.self.query.bias', 'basemodel.encoder.layer.8.output.LayerNorm.bias', 'basemodel.encoder.layer.11.attention.output.LayerNorm.bias', 'basemodel.embeddings.LayerNorm.bias', 'basemodel.encoder.layer.8.intermediate.dense.weight', 'basemodel.encoder.layer.2.attention.self.value.weight', 'basemodel.encoder.layer.6.attention.output.LayerNorm.bias', 'basemodel.encoder.layer.11.output.dense.weight', 'basemodel.encoder.layer.3.output.dense.bias', 'basemodel.encoder.layer.4.attention.self.query.bias', 'basemodel.encoder.layer.3.output.LayerNorm.bias', 'basemodel.encoder.layer.4.output.LayerNorm.bias', 'basemodel.encoder.layer.5.attention.self.query.weight', 
'basemodel.encoder.layer.5.output.LayerNorm.weight', 'basemodel.encoder.layer.6.output.dense.bias', 'basemodel.encoder.layer.2.attention.self.value.bias', 'basemodel.encoder.layer.8.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.10.attention.self.value.weight', 'basemodel.encoder.layer.9.attention.self.value.weight', 'basemodel.encoder.layer.3.output.LayerNorm.weight', 'basemodel.encoder.layer.7.output.LayerNorm.bias', 'basemodel.encoder.layer.9.output.dense.bias', 'basemodel.encoder.layer.0.attention.output.dense.weight', 'basemodel.encoder.layer.1.intermediate.dense.bias', 'basemodel.encoder.layer.0.attention.self.query.weight', 'basemodel.encoder.layer.1.output.dense.weight', 'basemodel.encoder.layer.8.attention.self.key.weight', 'basemodel.encoder.layer.9.output.dense.weight', 'basemodel.encoder.layer.11.attention.output.dense.weight', 'basemodel.encoder.layer.4.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.7.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.4.intermediate.dense.bias', 'basemodel.encoder.layer.1.attention.self.query.weight', 'basemodel.encoder.layer.6.intermediate.dense.weight', 'basemodel.encoder.layer.7.intermediate.dense.bias', 'basemodel.encoder.layer.10.output.LayerNorm.bias', 'basemodel.encoder.layer.10.attention.self.key.bias', 'basemodel.encoder.layer.5.intermediate.dense.weight', 'basemodel.encoder.layer.4.output.dense.bias', 'basemodel.encoder.layer.8.attention.output.dense.bias', 'basemodel.encoder.layer.8.attention.output.dense.weight', 'basemodel.encoder.layer.10.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.11.output.dense.bias', 'basemodel.encoder.layer.1.output.dense.bias', 'basemodel.encoder.layer.9.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.1.intermediate.dense.weight', 'basemodel.encoder.layer.7.attention.self.query.weight', 'basemodel.encoder.layer.10.attention.self.value.bias', 'basemodel.embeddings.position_embeddings.weight', 
'basemodel.encoder.layer.7.output.LayerNorm.weight', 'basemodel.encoder.layer.2.attention.output.LayerNorm.weight', 'basemodel.encoder.layer.3.attention.self.value.bias', 'basemodel.encoder.layer.0.attention.self.value.weight', 'basemodel.encoder.layer.5.attention.self.value.bias', 'basemodel.encoder.layer.6.attention.self.key.bias', 'basemodel.encoder.layer.2.attention.output.dense.bias', 'basemodel.encoder.layer.5.output.dense.weight', 'basemodel.encoder.layer.4.attention.self.value.weight', 'basemodel.encoder.layer.8.output.dense.bias', 'basemodel.encoder.layer.7.attention.self.query.bias', 'basemodel.encoder.layer.3.attention.self.query.weight', 'basemodel.encoder.layer.7.attention.output.dense.bias', 'basemodel.encoder.layer.11.output.LayerNorm.bias', 'basemodel.encoder.layer.2.output.dense.weight', 'basemodel.encoder.layer.2.output.LayerNorm.weight', 'basemodel.encoder.layer.3.intermediate.dense.bias', 'basemodel.embeddings.word_embeddings.weight', 'basemodel.encoder.layer.11.output.LayerNorm.weight', 'basemodel.encoder.layer.8.attention.self.key.bias', 'basemodel.encoder.layer.1.output.LayerNorm.bias', 'basemodel.encoder.layer.10.intermediate.dense.bias', 'basemodel.encoder.layer.8.attention.self.query.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Because the keys are prefixed with `basemodel` instead of `bert`, the weights are not getting mapped. Is there a way to tell `AutoModel` not to add the `bert` prefix to the keys?
### Expected behavior
The main reason I want to use a generic variable name like `self.basemodel` is that I would like to explore different base models, which may not necessarily be BERT models (and therefore even the variable name `self.bert` might fail in those cases).
I was hoping that if I just change the checkpoint name, I should be able to try out different base models.
While exploring solutions, I found the following code snippet that adds/removes a prefix like `bert` if required. But in my case I would need to first remove the `basemodel` prefix and then add the `bert` prefix, which is not possible in the current implementation and would be too messy anyway.
https://github.com/huggingface/transformers/blob/855dcae8bb743c3f8f0781742d7fa2fa3aaa3e22/src/transformers/modeling_utils.py#L2321-L2340
Please help me in figuring out a correct way to achieve what I am trying to do.
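One workaround (a sketch with hypothetical helper names, not an official `transformers` API) is to load the checkpoint's state dict yourself, remap the `bert.` prefix to whatever attribute name the wrapper uses, and then load the remapped dict into the custom model:

```python
def remap_prefix(state_dict, old_prefix, new_prefix):
    """Rename keys that start with old_prefix so they start with new_prefix."""
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(old_prefix):
            remapped[new_prefix + key[len(old_prefix):]] = value
        else:
            remapped[key] = value
    return remapped

# Toy state dict standing in for a real checkpoint's tensors.
original = {
    "bert.encoder.layer.0.output.dense.weight": [0.0],
    "cls.predictions.bias": [0.0],
}
renamed = remap_prefix(original, "bert.", "basemodel.")
print(sorted(renamed))
# ['basemodel.encoder.layer.0.output.dense.weight', 'cls.predictions.bias']
```

After remapping, something like `model.load_state_dict(renamed, strict=False)` should pick up the base-model weights while leaving any newly added head randomly initialized.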
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18969/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18968
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18968/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18968/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18968/events
|
https://github.com/huggingface/transformers/issues/18968
| 1,368,447,494
|
I_kwDOCUB6oc5RkNoG
| 18,968
|
Allow custom head size for self attention in BERT
|
{
"login": "xinyangz",
"id": 13930183,
"node_id": "MDQ6VXNlcjEzOTMwMTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/13930183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinyangz",
"html_url": "https://github.com/xinyangz",
"followers_url": "https://api.github.com/users/xinyangz/followers",
"following_url": "https://api.github.com/users/xinyangz/following{/other_user}",
"gists_url": "https://api.github.com/users/xinyangz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinyangz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinyangz/subscriptions",
"organizations_url": "https://api.github.com/users/xinyangz/orgs",
"repos_url": "https://api.github.com/users/xinyangz/repos",
"events_url": "https://api.github.com/users/xinyangz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinyangz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### Feature request
Right now the `attention_head_size` of BERT self-attention is set to `hidden_size / num_attention_heads`:
https://github.com/huggingface/transformers/blob/855dcae8bb743c3f8f0781742d7fa2fa3aaa3e22/src/transformers/models/bert/modeling_bert.py#L260-L262
However, the `all_head_size` of the self-attention layer doesn't have to match the `hidden_size` of the model. For example, we may train a deeper model with narrower layers. In fact, [this paper](https://arxiv.org/pdf/2106.09650.pdf) found that doing so can increase model performance with a minimal hit to training and inference speed.
My proposal is to add an option `attention_head_size` to `BertConfig` to allow more flexible model architectures.
### Motivation
Currently, there is no easy way to change the `attention_head_size` without changing the `hidden_size` of the whole model. As a result, it's hard to set up a narrower and deeper or a wider and shallower model.
### Your contribution
I can submit a PR to add an option to `BertConfig` and update the code where needed.
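The decoupling proposed here can be sketched outside of `transformers` (NumPy, illustrative names only): the per-head size becomes a free hyperparameter, and an output projection maps `num_heads * head_size` back to `hidden_size` so the residual stream's width is unchanged:

```python
import numpy as np

def self_attention(x, num_heads, head_size, rng):
    # Sketch: all_head_size = num_heads * head_size need NOT equal hidden_size.
    b, t, hidden = x.shape
    all_head = num_heads * head_size
    Wq, Wk, Wv = (rng.standard_normal((hidden, all_head)) for _ in range(3))
    Wo = rng.standard_normal((all_head, hidden))  # project back to hidden_size

    def heads(W):
        # (b, t, all_head) -> (b, num_heads, t, head_size)
        return (x @ W).reshape(b, t, num_heads, head_size).transpose(0, 2, 1, 3)

    q, k, v = heads(Wq), heads(Wk), heads(Wv)
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(head_size)
    probs = np.exp(scores - scores.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    ctx = (probs @ v).transpose(0, 2, 1, 3).reshape(b, t, all_head)
    return ctx @ Wo  # output width matches the model's hidden_size again

rng = np.random.default_rng(0)
# head_size=32 with 12 heads gives all_head_size=384, decoupled from 768.
out = self_attention(rng.standard_normal((2, 4, 768)), num_heads=12, head_size=32, rng=rng)
print(out.shape)  # (2, 4, 768)
```

Because only the three input projections and the output projection see `all_head_size`, adding an `attention_head_size` field to `BertConfig` would mostly mean threading it through these four linear layers.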
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18968/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18967
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18967/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18967/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18967/events
|
https://github.com/huggingface/transformers/issues/18967
| 1,368,405,141
|
I_kwDOCUB6oc5RkDSV
| 18,967
|
Pre-processing re-runs for each process
|
{
"login": "rahular",
"id": 1104544,
"node_id": "MDQ6VXNlcjExMDQ1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahular",
"html_url": "https://github.com/rahular",
"followers_url": "https://api.github.com/users/rahular/followers",
"following_url": "https://api.github.com/users/rahular/following{/other_user}",
"gists_url": "https://api.github.com/users/rahular/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahular/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahular/subscriptions",
"organizations_url": "https://api.github.com/users/rahular/orgs",
"repos_url": "https://api.github.com/users/rahular/repos",
"events_url": "https://api.github.com/users/rahular/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahular/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Do you have a reproducer of the behavior you are seeing? On my side data is processed once.",
"Wow, ok. So looks like the example scripts always set `--overwrite_cache` to `True`.\r\n\r\nIt is currently `parser.add_argument(\"--overwrite_cache\", type=bool, default=None)` instead of `parser.add_argument(\"--overwrite_cache\", action=\"store_true\")`.\r\n\r\nWill make a PR soon."
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-4.15.0-180-generic-x86_64-with-glibc2.27
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@muellerzr @sgugger @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `examples/summarization_no_trainer.py`, we use the following code to pre-process the data:
```
with accelerator.main_process_first():
processed_datasets = raw_datasets.map(
preprocess_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on dataset",
)
```
In a multi-GPU (process) setup, the main process pre-processes the data first and saves it in the cache. Ideally, the other processes should just pick it up from there. But that's not happening. Every process is re-pre-processing the data. This is a problem when the data is large.
I have tried to check if something changes during the run with `Hasher.hash(preprocess_func)`. But the hash remains the same.
### Expected behavior
Processes other than the main process should read the processed data from the cache.
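The `argparse` pitfall the thread eventually identified (`--overwrite_cache` declared with `type=bool` instead of `action="store_true"`) can be demonstrated in isolation:

```python
import argparse

# bool("False") is True: any non-empty string value flips a type=bool flag on.
broken = argparse.ArgumentParser()
broken.add_argument("--overwrite_cache", type=bool, default=None)
print(broken.parse_args(["--overwrite_cache", "False"]).overwrite_cache)  # True!

# The usual fix: a store_true action, which defaults to False.
fixed = argparse.ArgumentParser()
fixed.add_argument("--overwrite_cache", action="store_true")
print(fixed.parse_args([]).overwrite_cache)                     # False
print(fixed.parse_args(["--overwrite_cache"]).overwrite_cache)  # True
```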
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18967/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18966
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18966/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18966/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18966/events
|
https://github.com/huggingface/transformers/pull/18966
| 1,368,268,824
|
PR_kwDOCUB6oc4-s54x
| 18,966
|
Align try_to_load_from_cache with huggingface_hub
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There is one on the HF hub side ;-) I just removed it here since it does not concern Transformers and we only use the default value.",
"Perfect :)"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
This PR completely aligns `try_to_load_from_cache` with its `huggingface_hub` counterpart (it's a copy-paste that just removes the `repo_type` argument) and adapts its use in `cached_file`. This is done before the next release of Transformers so that there is no breaking change if users start to adopt it (since the arguments are in a different order), and to make the transition (which will happen after the next release of `huggingface_hub`) easier.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18966/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18966",
"html_url": "https://github.com/huggingface/transformers/pull/18966",
"diff_url": "https://github.com/huggingface/transformers/pull/18966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18966.patch",
"merged_at": 1662998977000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18965
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18965/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18965/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18965/events
|
https://github.com/huggingface/transformers/issues/18965
| 1,368,201,539
|
I_kwDOCUB6oc5RjRlD
| 18,965
|
The configuration is not a valid json file
|
{
"login": "nicolaspi",
"id": 3748978,
"node_id": "MDQ6VXNlcjM3NDg5Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3748978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicolaspi",
"html_url": "https://github.com/nicolaspi",
"followers_url": "https://api.github.com/users/nicolaspi/followers",
"following_url": "https://api.github.com/users/nicolaspi/following{/other_user}",
"gists_url": "https://api.github.com/users/nicolaspi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicolaspi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicolaspi/subscriptions",
"organizations_url": "https://api.github.com/users/nicolaspi/orgs",
"repos_url": "https://api.github.com/users/nicolaspi/repos",
"events_url": "https://api.github.com/users/nicolaspi/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicolaspi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"👆🏼Same error just started occuring about 30 mins ago across all machines",
"<img width=\"502\" alt=\"Screen Shot 2022-09-09 at 3 00 30 PM\" src=\"https://user-images.githubusercontent.com/26133/189424655-2d594077-d358-41aa-a265-86571dabb522.png\">\r\n",
"Related to #18962 ",
"Just in case, pinging @patil-suraj given a few commits on that model repo today =) https://huggingface.co/openai/clip-vit-large-patch14/commits/main",
"is there a way to disable it grabbing the newest? I disabled my ethernet ran the app, and then turned it back on after it got past that point in the script and I'm running again if someone needs a fast temporary bandaid",
"Thank you for reporting, this was fixed in [openai/clip-vit-large-patch14#4](https://huggingface.co/openai/clip-vit-large-patch14/discussions/4)\r\nRe-running the instantiation/`from_pretrained` should redownload the correct JSON file.",
"If you have a set of downloaded cache of the unbroken files, you can get around the problem by forcing an offline only mode and using your cache.\r\n\r\nIf you don't want to edit your script, add TRANSFORMERS_OFFLINE=1, as seen by document here https://huggingface.co/docs/transformers/installation#offline-mode\r\nIf you prefer to edit your python scripts, you can pass local_files_only when calling from_pretrained https://huggingface.co/docs/transformers/main_classes/model\r\n",
"Fixed"
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
### System Info
The config at this url [https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json](https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json) is not a valid JSON and produces error:
`json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 88 column 3 (char 2317)`
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import CLIPTextModel
CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
```
### Expected behavior
The config should be a valid json.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18965/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18965/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18964
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18964/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18964/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18964/events
|
https://github.com/huggingface/transformers/pull/18964
| 1,368,184,481
|
PR_kwDOCUB6oc4-soOm
| 18,964
|
Explain why loading config JSON fails
|
{
"login": "noprompt",
"id": 541996,
"node_id": "MDQ6VXNlcjU0MTk5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/541996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noprompt",
"html_url": "https://github.com/noprompt",
"followers_url": "https://api.github.com/users/noprompt/followers",
"following_url": "https://api.github.com/users/noprompt/following{/other_user}",
"gists_url": "https://api.github.com/users/noprompt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noprompt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noprompt/subscriptions",
"organizations_url": "https://api.github.com/users/noprompt/orgs",
"repos_url": "https://api.github.com/users/noprompt/repos",
"events_url": "https://api.github.com/users/noprompt/events{/privacy}",
"received_events_url": "https://api.github.com/users/noprompt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18964). All of your documentation changes will be reflected on that endpoint.",
"Could you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:\r\n```\r\npip install -e \".[quality]\"\r\n```\r\nAnd then run them with:\r\n```\r\nmake fixup\r\n```",
"@LysandreJik Yes, will do. Thank you!",
"@LysandreJik Sorry for the delay! I've applied the code quality changes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,668
| 1,668
|
NONE
| null |
This PR improves the exception message thrown when reading a configuration file fails, so that it includes the information provided by the exception itself (line/column numbers, etc.). This can spare the reader of the message some trouble when debugging. See [this thread here](https://github.com/CompVis/stable-diffusion/issues/247) for background.
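A minimal sketch of the idea (illustrative wording, not the exact message added by the PR): catch `json.JSONDecodeError` and surface its `msg`, `lineno`, and `colno` attributes instead of a generic failure message:

```python
import json

# A config with an unquoted key on line 3, mimicking the broken file upstream.
broken_config = '{\n  "hidden_size": 768,\n  pad_token_id: 0\n}'

try:
    json.loads(broken_config)
except json.JSONDecodeError as err:
    # Surface the decoder's own diagnostics in the user-facing message.
    message = (f"It looks like the config file is not valid JSON: "
               f"{err.msg} (line {err.lineno}, column {err.colno})")

print(message)
```

With this, the user immediately sees *where* the file is malformed rather than only that loading failed.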
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18964/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18964",
"html_url": "https://github.com/huggingface/transformers/pull/18964",
"diff_url": "https://github.com/huggingface/transformers/pull/18964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18964.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18963
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18963/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18963/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18963/events
|
https://github.com/huggingface/transformers/pull/18963
| 1,368,161,937
|
PR_kwDOCUB6oc4-sjYN
| 18,963
|
Make AutoProcessor a magic loading class for all modalities
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,663
| 1,663
|
COLLABORATOR
| null |
# What does this PR do?
This PR re-enables a feature initially part of #14465: `AutoProcessor` becomes a class that loads the right processing class for any model (processor, tokenizer or feature extractor). You can thus do:
```
processor = AutoProcessor.from_pretrained("bert-base-cased")
# Returns a fast BERT tokenizer
```
or
```
processor = AutoProcessor.from_pretrained("facebook/convnext-tiny-224")
# Returns a ConvNext feature extractor
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18963/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18963",
"html_url": "https://github.com/huggingface/transformers/pull/18963",
"diff_url": "https://github.com/huggingface/transformers/pull/18963.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18963.patch",
"merged_at": 1663155373000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18962
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18962/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18962/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18962/events
|
https://github.com/huggingface/transformers/pull/18962
| 1,368,044,039
|
PR_kwDOCUB6oc4-sLGV
| 18,962
|
[CLIP] allow loading projection layer in vision and text model
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"Just to understand better - what is a checkpoint that uses such a projection layer? Is that really part of CLIP or rather of the model built on top of CLIP? Also if one uses the `text_embeds` or `image_embeds` output -> what is the purpose of also having the `pooled_output`?\r\n\r\n Wondering if we should instead create a new head here instead of forcing it into the same class? Or is this an architecture that the official CLIP is using often?",
"\r\n\r\n> what is a checkpoint that uses such a projection layer? Is that really part of CLIP or rather of the model built on top of CLIP?\r\n\r\nThe projection layers are already part of the CLIP model. Those are used to convert the final hidden states of vision and text model into clip embedding space.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a26114777ee1c2802e91bd9cb26a3b39974d52ba/src/transformers/models/clip/modeling_clip.py#L880-L881\r\n\r\nThe reason we added `CLIPTextModel` and `CLIPVisionModel`, is that users could load text and vision models separately as these individual models can be used in downstream task. But the current design is not optimal, as it does not return final clip embeddings. Those final embeddings are very useful for downstream tasks such retrieval, classification. And now these are also being used in text2image or image2image models.\r\n\r\nSo if a user needs either the text embeds or vision embeds they need to load the whole clip model, or write a custom wrapper module to include the projection layer (which is what we did for safety checker in diffusers https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_clip.py#L880)\r\n\r\n> Wondering if we should instead create a new head here instead of forcing it into the same class?\r\n\r\nGood point! Then maybe we could add `CLIPTextModelWithProjection` and `CLIPVisionModelWithProjection`. ",
"> Good point! Then maybe we could add CLIPTextModelWithProjection and CLIPVisionModelWithProjection.\r\n\r\nI'd prefer that solution too!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"Reviving the PR, as there are some model in `diffusers` that will need this soon. As discussed above added `CLIPTextModelWithProjection` and `CLIPVisionModelWithProjection`. @patrickvonplaten , @sgugger would be awesome if you could take a look again :) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18962). All of your documentation changes will be reflected on that endpoint."
] | 1,662
| 1,668
| 1,668
|
MEMBER
| null |
The current vision and text models in CLIP don't return the projected image or text embeddings, so the user needs to load the whole `CLIPModel` just to get the vision or text embeddings.
This PR adds `CLIPTextModelWithProjection` and `CLIPVisionModelWithProjection`, similar to `CLIPTextModel` and `CLIPVisionModel` but with a projection head. This allows using only the relevant modality model instead of loading the full model or having to write wrappers.
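Conceptually, the projection head these classes expose is just a bias-free linear map from the model's pooled hidden state into the shared CLIP embedding space, usually followed by L2-normalization for similarity computations. A minimal numpy sketch with illustrative dimensions (not the real checkpoint sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, projection_dim = 8, 4  # illustrative only; real CLIP checkpoints use e.g. 512/768

# text_projection / visual_projection weight: a Linear layer with no bias
W = rng.normal(size=(hidden_size, projection_dim))

pooled_output = rng.normal(size=(1, hidden_size))  # pooled hidden state from the encoder
embeds = pooled_output @ W                         # projected into CLIP embedding space
embeds = embeds / np.linalg.norm(embeds, axis=-1, keepdims=True)  # unit norm for cosine sim

print(embeds.shape)  # (1, 4)
```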
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18962/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18962/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18962",
"html_url": "https://github.com/huggingface/transformers/pull/18962",
"diff_url": "https://github.com/huggingface/transformers/pull/18962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18962.patch",
"merged_at": 1668531008000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18961
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18961/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18961/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18961/events
|
https://github.com/huggingface/transformers/pull/18961
| 1,367,975,339
|
PR_kwDOCUB6oc4-r86w
| 18,961
|
Add AnyPrecisionAdamW optimizer
|
{
"login": "atturaioe",
"id": 76523524,
"node_id": "MDQ6VXNlcjc2NTIzNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/76523524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atturaioe",
"html_url": "https://github.com/atturaioe",
"followers_url": "https://api.github.com/users/atturaioe/followers",
"following_url": "https://api.github.com/users/atturaioe/following{/other_user}",
"gists_url": "https://api.github.com/users/atturaioe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atturaioe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atturaioe/subscriptions",
"organizations_url": "https://api.github.com/users/atturaioe/orgs",
"repos_url": "https://api.github.com/users/atturaioe/repos",
"events_url": "https://api.github.com/users/atturaioe/events{/privacy}",
"received_events_url": "https://api.github.com/users/atturaioe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, @stas00. I want to ask you whether should I add `anyprecision_adamw` specific arguments to the `trainings_args.py` or use the default ones in `trainer.py`. I'll be working on tests.",
"_The documentation is not available anymore as the PR was closed or merged._",
"I'd say let's add a generic `--optim-args` optional arg which can then supply options to any future optimizer - i.e. it'd pair with `--optim`.\r\n\r\nI'm trying to remember if we already have the plumbing for parsing in place - I think the `--debug` flag has it. edit: no, not that one. I remember writing it, but can't remember which one uses it. it's there somewhere - so many options.\r\n\r\nBut something like `--optim-args \"key1:val1; key2:val2; ...\"`\r\n\r\nso here it'd be `--optim anyprecision_adamw --optim-args \"use_kahan_summation=true; momentum_dtype=bf1oat16; ...\"`\r\n\r\nand we would convert any dtypes into actual `torch.foo` dtype using `getattr(torch, momentum_dtype)`",
"@atturaioe, this is just another variation - perhaps `--optim-args` can support just the exact syntax as python function sig?\r\n\r\n```\r\n--optim anyprecision_adamw --optim-args \"use_kahan_summation=True, momentum_dtype=torch.bf1oat16; ...\"\r\n```\r\n\r\nso `,` separator and perhaps writing out the dtypes exactly as they are in python and converting them on the fly to an actual class name. Same for booleans.\r\n\r\nPerhaps it'd be easier to mimick the signature. Not sure. Let's see what you think is better.\r\n",
"Yeah! But should I parse the `--optim-args` into `dict` or something like that right in the `trainer.get_optimizer_cls_and_kwargs`?",
"Yes, that's exactly right:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d842f2d5b9bd4e361644c332bf9dc7f9b064f581/src/transformers/trainer.py#L1094\r\n",
"Is it any good? \r\nI didn't quite understand about converting dtypes on the fly(using `eval`?).",
"eval would be unsafe. here is a quick proof of concept:\r\n\r\n```\r\npython -c \"import torch; x = 'torch.float16'; print(getattr(torch, x.split('.')[1]))\"\r\n```",
"Just pasting @lessw2020's comment from https://github.com/huggingface/transformers/pull/18961#discussion_r970160808 so that it doesn't get hidden by github once resolved and we will want to revisit this down the road and support other configs:\r\n\r\n> 1 - For mixed precision - you could either\r\n> a - run with the current defaults (M=fp32, Var = BF16, Kahan = False) and that would provide the memory and speed improvements from the Variance in BF16. That works nicely, and you can make that all work 'automatically' per above control options.\r\n> b - you could also go all BF16 (M=BF16, Var = BF16, Kahan = False) because you will still get high precision weight updates with the master weights being in fp32. This is not as well tested though, but is something we are going to enable in FSDP soon by moving the working weight gradients to BF16, meaning you only have FP32 weights, nothing else.\r\n> \r\n> To your question - having the weights in BF16 (via model.to) will only work if Kahan summation is active. If you don't run it with Kahan, then you are exactly right, you will hit weight stagnation and it will not be performant.\r\n> The addition of Kahan is what makes it all work nicely.\r\n> \r\n> Re: mark as experimental and tune as users run with it - that sounds like a great idea. I would just go ahead and use the current defaults then (M=FP32, Var = BF16, Kahan = False) as it's plug and play into FP32 or BF16 mixed precision.\r\n> I'm working on a video tutorial now actually for this optimizer. Maybe we can add to the video once this PR is in, and show people how to run it with the manual change of model.to() and setting the defaults directly to get people comfortable with running in pure BF16.\r\n",
"https://github.com/pytorch/torchdistx/issues/68",
"@atturaioe, I'm back from vacation - what support do you need to finish this PR?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18961). All of your documentation changes will be reflected on that endpoint.",
"Hi @stas00, hope you had great time!\r\nSo the problem here that the `momentum_dtype` and `variance_dtype` set to different `dtypes` (`float32/bfloat16`) don't get cast dynamically, in the optimizer's `step()`, unless they're both of the same `dtype`. But of course I can set them both to same `dtype`, so the tests will pass.\r\nPlease correct me if I misunderstood something here.",
"Let's perhaps start with using the same dtype only and deal with that unusual case down the road should someone actually want to use it?",
"This commit changes the default params to the `float32` since there are 2 options for them to be the same dtype:\r\n1 - all of them `float32`\r\n2 - all of them `bfloat16` - won't pass tests since we have to move the `model.to(torch.bfloat16) ` while running tests",
"That's probably good enough as the initial integration. We can iterate to test the other variations once it becomes part of pytorch-core. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18961). All of your documentation changes will be reflected on that endpoint.",
"ok, so as it has been awhile since this was created please rebase to main and flip the Draft mode to ready and we can then ask Sylvain to have a last look and merge. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18961). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18961). All of your documentation changes will be reflected on that endpoint.",
"Thank you guys for helping/guiding me through this PR!"
] | 1,662
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
Add `AnyPrecisionAdamW` optimizer from `torchdistx`
Fixes #18827
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00
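The `--optim-args` parsing discussed in the review thread could look roughly like the sketch below. The helper name and the exact `key=val, key=val` syntax are assumptions for illustration, not the merged implementation:

```python
def parse_optim_args(optim_args: str) -> dict:
    """Parse an '--optim-args'-style string like 'k1=v1, k2=v2' into a kwargs dict."""
    kwargs = {}
    if not optim_args:
        return kwargs
    for mapping in optim_args.replace(" ", "").split(","):
        key, value = mapping.split("=")
        if value.lower() in ("true", "false"):
            kwargs[key] = value.lower() == "true"  # convert booleans on the fly
        else:
            # dtype strings such as 'torch.bfloat16' are kept as-is here; they would
            # later be resolved safely with getattr(torch, value.split(".")[1])
            kwargs[key] = value
    return kwargs

print(parse_optim_args("use_kahan_summation=True, momentum_dtype=torch.bfloat16"))
# {'use_kahan_summation': True, 'momentum_dtype': 'torch.bfloat16'}
```

Deferring the dtype lookup to `getattr` avoids `eval`, which, as noted in the thread, would be unsafe.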
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18961/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18961/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18961",
"html_url": "https://github.com/huggingface/transformers/pull/18961",
"diff_url": "https://github.com/huggingface/transformers/pull/18961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18961.patch",
"merged_at": 1668781629000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18960
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18960/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18960/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18960/events
|
https://github.com/huggingface/transformers/pull/18960
| 1,367,941,320
|
PR_kwDOCUB6oc4-r1l-
| 18,960
|
[Wav2Vec2] Fix `None` loss in docstring for Wav2Vec2ForPreTraining
|
{
"login": "abdouaziz",
"id": 39220574,
"node_id": "MDQ6VXNlcjM5MjIwNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/39220574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdouaziz",
"html_url": "https://github.com/abdouaziz",
"followers_url": "https://api.github.com/users/abdouaziz/followers",
"following_url": "https://api.github.com/users/abdouaziz/following{/other_user}",
"gists_url": "https://api.github.com/users/abdouaziz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdouaziz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdouaziz/subscriptions",
"organizations_url": "https://api.github.com/users/abdouaziz/orgs",
"repos_url": "https://api.github.com/users/abdouaziz/repos",
"events_url": "https://api.github.com/users/abdouaziz/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdouaziz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Great! Now that we've established the problem:\r\n\r\n> loss is `None` as `sampled_negative_indices` is omitted from the args of the model\r\n\r\nand verified the fix:\r\n\r\n> forward `sampled_negative_indices` to the model\r\n\r\nwe can touch-up this PR and get it merged!\r\n\r\nWe'll need to do two things:\r\n1. Remove the erroneous file [src/transformers/models/test.py](https://github.com/huggingface/transformers/pull/18960/files/e94a4d4e7bb108aeba8daafc3de7b58fc5048838#diff-dc9af87a30bbbc682dfaf00796e992367511d96095a2f0a46a8da06a948a7050)\r\n2. Code quality: you can do this simply by running `make style` from the root of the Transformers repo 🤗\r\n\r\nLet me know if you have any questions! Cheers!",
"Hello @sanchit-gandhi Thanks for the help , I am still having some issue even when I make sure that I had installed all the necessary packages I am still having this error . \r\nEven `pip install black[jupyter]` is installed and when I run `make style`i have the same error here : \r\n```py\r\n\r\nSkipping .ipynb files as Jupyter dependencies are not installed.\r\nYou can fix this by running ``pip install black[jupyter]``\r\nwould reformat examples/research_projects/lxmert/modeling_frcnn.p\r\n...\r\n```",
"Hey @abdouaziz. Thanks for removing the erroneous file :) Don't worry about the `.ipynb` warning as we've not changed any Python notebooks! Looks like something else is up with `make style` though - we have 567 files changed in this PR! There should only be 2 files changed (wav2vec2 and wav2vec2-conformer).\r\n\r\nCould you try the following in turn and check whether the number of files changed drops back down to 2 after each step:\r\n1. Rebasing onto main:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/main\r\n```\r\nAnd then running `make style`.\r\n2. Updating HF doc builder https://pypi.org/project/hf-doc-builder/\r\n```\r\npip install --upgrade hf-doc-builder\r\n```\r\nAnd then running `make style`.\r\n\r\nIf that doesn't fix it we can try some other hacks!\r\n\r\n",
"Hey @abdouaziz - sorry this has been so arduous! Are you still interested in completing this PR? Feel free to open a new one if you wish and we can go from there!",
 \r\nHello @sanchit">
"Hello @sanchit-gandhi, yes, I am interested in completing this PR, but I am still having the same issue. Here is the new PR:\r\nhttps://github.com/huggingface/transformers/pull/19061#issue-1375177421\r\nI am open to suggestions."
] | 1,662
| 1,663
| 1,663
|
NONE
| null |
# What does this PR do?
- [ ] It fixes the `None` value returned as the contrastive loss of [Wav2Vec2ForPreTraining](https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining).
- [ ] Passing `sampled_negative_indices` as a target allows Wav2Vec2ForPreTraining to compute the loss.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18960/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18960",
"html_url": "https://github.com/huggingface/transformers/pull/18960",
"diff_url": "https://github.com/huggingface/transformers/pull/18960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18960.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18959
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18959/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18959/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18959/events
|
https://github.com/huggingface/transformers/pull/18959
| 1,367,940,076
|
PR_kwDOCUB6oc4-r1Up
| 18,959
|
[CookieCutter] Clarify questions
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
I've seen quite a few mistakes where contributors answered the CookieCutter questions incorrectly. This is because the questions are sometimes a bit vague, i.e. it's not clear whether one should provide Roberta, RoBERTa or roberta. This wasn't actually clear to me either.
This PR aims to clarify the questions, making sure contributors understand better what to answer.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18959/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18959",
"html_url": "https://github.com/huggingface/transformers/pull/18959",
"diff_url": "https://github.com/huggingface/transformers/pull/18959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18959.patch",
"merged_at": 1663156375000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18958
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18958/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18958/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18958/events
|
https://github.com/huggingface/transformers/issues/18958
| 1,367,843,977
|
I_kwDOCUB6oc5Rh6SJ
| 18,958
|
Encoder-decoder model is not working correctly for the latest versions
|
{
"login": "miguelwon",
"id": 7373193,
"node_id": "MDQ6VXNlcjczNzMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7373193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miguelwon",
"html_url": "https://github.com/miguelwon",
"followers_url": "https://api.github.com/users/miguelwon/followers",
"following_url": "https://api.github.com/users/miguelwon/following{/other_user}",
"gists_url": "https://api.github.com/users/miguelwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miguelwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miguelwon/subscriptions",
"organizations_url": "https://api.github.com/users/miguelwon/orgs",
"repos_url": "https://api.github.com/users/miguelwon/repos",
"events_url": "https://api.github.com/users/miguelwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/miguelwon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
},
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thank you for reporting, @miguelwon . I will take a look.",
"This might also be interesting for @ArthurZucker if you're very busy at the moment @ydshieh ",
"I am having a look RN, will tell you when I know more 👍🏻 ",
"Hey @miguelwon it seems that you are right about the training not converging at all using current version. \r\nHowever, since loading a trained model in the new versions does not give bad results, I suspect that the issue comes from either the computation of the loss, or the trainer. \r\n\r\nI will have a look in more details as I believe this is a pretty important bug 😄 \r\n ",
"Hi @ArthurZucker, just to know if there any news about this issue. Thanks!",
"Hey! Sorry not yet, it's pretty tricky, but I hope I'll resolve it soon! 🤗 ",
"Hello @miguelwon I discussed with @ArthurZucker internally and decided to take a look on this issue.\r\n\r\nIn the notebook you provided, inside the function `process_data_to_model_inputs`, you prepared:\r\n - decoder_input_ids\r\n - decoder_attention_mask\r\n - labels\r\n\r\nand you left a remark\r\n> because BERT automatically shifts the labels, the labels correspond exactly to `decoder_input_ids`.\r\n\r\nIn fact, for the encoder-decoder architecture, the loss computation is done in `EncoderDecoderModel.forward` rather than in `decoder.forward` (in your case, the decoder is `BertLMHeadModel`), see [here](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L631). Also, see [this warning message](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L41).\r\n\r\nCombine [the way `decoder_input_ids` is prepared here](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L611), we don't need to prepare `decoder_input_ids` and `decoder_attention_mask` in your method `process_data_to_model_inputs`.\r\n\r\nHowever, you can still provide them, but in this case, the `decoder_input_ids` should be a shift of `labels`, instead of being the same value (which is the case in your notebook).\r\n\r\nFor old versions like `4.2.1`, it was using the decoder's code of loss computation, so the notebook works with it. But since v4.12, this is not the recommended way to run encoder-decoder models.\r\n\r\nI have updated your notebook (in a copy), you can check [here](https://colab.research.google.com/drive/1WbPtf7OKar7DbTJ-76n67UrLkUnRc_If?usp=sharing), which shows it doesn't generate non-sense results anymore with the above suggested changes.\r\n\r\nI hope this answers your question :-)\r\n\r\n",
"Yes it does! :) \r\nThanks a lot for you clarification and the updated notebook! \r\n\r\n",
"Thank you @ydshieh ."
] | 1,662
| 1,672
| null |
NONE
| null |
### System Info
transformers==4.2.1
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
I'm working with a seq2seq problem, in particular with `EncoderDecoderModel` model. The problem is that I can't have good results with the latest version (**4.21.3**). I also tried with **4.18.0** because of [this](https://github.com/huggingface/blog/issues/292#issuecomment-1122666099) but didn't work either. It is however working when using version **4.2.1**
I have made a [public notebook](https://colab.research.google.com/drive/1obQcmRdX89eWJfk_qUMcxa4pBSPIc53X?usp=sharing) you can run to see the issue. It is an example of training a model to generate written digits, given a number.
### Expected behavior
It works nicely with version **4.2.1**, but very badly with the most recent versions.
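The resolution discussed in the issue comments is that, since v4.12, the `decoder_input_ids` passed to `EncoderDecoderModel` should be a right-shift of `labels` rather than a copy of them. Below is a minimal, framework-free sketch of that shifting (plain Python lists stand in for tensors; the library performs the equivalent operation internally, so this is an illustration, not the exact implementation):

```python
def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    """Create decoder_input_ids by shifting labels one position right.

    Mirrors the usual seq2seq pattern: the decoder sees the start token
    followed by the labels minus their last token, and any -100
    loss-masking values are replaced by the pad token.
    """
    shifted = []
    for row in labels:
        new_row = [decoder_start_token_id] + row[:-1]
        # -100 marks positions ignored by the loss; the decoder input
        # must contain a real token id there instead.
        new_row = [pad_token_id if t == -100 else t for t in new_row]
        shifted.append(new_row)
    return shifted


labels = [[2023, 2003, 1037, 3231, 102]]
decoder_input_ids = shift_tokens_right(labels, pad_token_id=0,
                                       decoder_start_token_id=101)
print(decoder_input_ids)  # [[101, 2023, 2003, 1037, 3231]]
```

With this shift in place, `labels` and `decoder_input_ids` are no longer identical, which is exactly the change that made the updated notebook converge.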
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18958/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18957
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18957/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18957/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18957/events
|
https://github.com/huggingface/transformers/pull/18957
| 1,367,796,503
|
PR_kwDOCUB6oc4-rWDP
| 18,957
|
Wav2Vec2ForPreTraining loss Nan fixed
|
{
"login": "abdouaziz",
"id": 39220574,
"node_id": "MDQ6VXNlcjM5MjIwNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/39220574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdouaziz",
"html_url": "https://github.com/abdouaziz",
"followers_url": "https://api.github.com/users/abdouaziz/followers",
"following_url": "https://api.github.com/users/abdouaziz/following{/other_user}",
"gists_url": "https://api.github.com/users/abdouaziz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdouaziz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdouaziz/subscriptions",
"organizations_url": "https://api.github.com/users/abdouaziz/orgs",
"repos_url": "https://api.github.com/users/abdouaziz/repos",
"events_url": "https://api.github.com/users/abdouaziz/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdouaziz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,662
| 1,662
| 1,662
|
NONE
| null |
# What does this PR do?
- [ ] This PR fixes the loss of [Wav2Vec2ForPreTraining](https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining), which returns NaN values.
- [ ] We fix the error by adding sampled_negative_indices as a target to calculate the loss.
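For context on what `sampled_negative_indices` means here: the contrastive pre-training loss compares the true quantized vector at each masked time step against distractor vectors drawn from other time steps. A simplified, self-contained sketch of the sampling idea (not the library's implementation, which operates on tensors and respects `mask_time_indices`):

```python
import random


def sample_negative_indices(sequence_length, num_negatives, seed=0):
    """For each time step, sample `num_negatives` distractor positions
    drawn from the *other* time steps of the same sequence."""
    rng = random.Random(seed)
    negatives = []
    for t in range(sequence_length):
        # A time step must never be its own negative.
        candidates = [i for i in range(sequence_length) if i != t]
        negatives.append(rng.sample(candidates, num_negatives))
    return negatives


neg = sample_negative_indices(sequence_length=5, num_negatives=2)
# Each row holds 2 indices, none equal to its own time step.
```

Without such indices being passed to the model, no negatives exist for the contrastive comparison, which is consistent with the loss degenerating to NaN.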
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18957/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18957",
"html_url": "https://github.com/huggingface/transformers/pull/18957",
"diff_url": "https://github.com/huggingface/transformers/pull/18957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18957.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18956
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18956/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18956/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18956/events
|
https://github.com/huggingface/transformers/issues/18956
| 1,367,734,613
|
I_kwDOCUB6oc5RhflV
| 18,956
|
Latest Wav2Vec2 pretraining script runs on first GPU only
|
{
"login": "RK-BAKU",
"id": 71269675,
"node_id": "MDQ6VXNlcjcxMjY5Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/71269675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RK-BAKU",
"html_url": "https://github.com/RK-BAKU",
"followers_url": "https://api.github.com/users/RK-BAKU/followers",
"following_url": "https://api.github.com/users/RK-BAKU/following{/other_user}",
"gists_url": "https://api.github.com/users/RK-BAKU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RK-BAKU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RK-BAKU/subscriptions",
"organizations_url": "https://api.github.com/users/RK-BAKU/orgs",
"repos_url": "https://api.github.com/users/RK-BAKU/repos",
"events_url": "https://api.github.com/users/RK-BAKU/events{/privacy}",
"received_events_url": "https://api.github.com/users/RK-BAKU/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Got to run accelerate config in order to setup devices."
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
### System Info
Dear @patrickvonplaten
I've tried different multi-GPU setups (RTX 3090 and A5000) but training always runs only on device 0.
**Tried these commands:**
`accelerate launch --num_processes=2 pretrain.py`
`accelerate launch pretrain.py`
Seems there is some bug in the script because old fairseq training runs on all available devices.
Kindly assist to understand where to dig further.
**Script used for training:**
[https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py)
### Who can help?
@patr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just followed the instruction on the script page
### Expected behavior
Run on multiple GPUs
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18956/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18955
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18955/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18955/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18955/events
|
https://github.com/huggingface/transformers/pull/18955
| 1,367,730,848
|
PR_kwDOCUB6oc4-rH0Y
| 18,955
|
update black target version
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like it wants to change a looooot of files. This usually creates a hell of merge conflicts in PRs so I would really like to avoid doing it too much. We will switch to black 2023 in January, so how about we change the target at that point too? We can change your PR to leave a comment for now so we don't forget.",
"> Looks like it wants to change a looooot of files. This usually creates a hell of merge conflicts in PRs so I would really like to avoid doing it too much. We will switch to black 2023 in January, so how about we change the target at that point too? We can change your PR to leave a comment for now so we don't forget.\r\n\r\nSure! \r\n\r\nI didn't quite understand what you mean with the last sentence though. Do you mean closing this PR and opening an issue instead? That works, whatever is best.",
"Just putting a comment next to the black version pinned in the setup, so that we know to update this at the next version change.",
"> Just putting a comment next to the black version pinned in the setup, so that we know to update this at the next version change.\r\n\r\nAlright. Done."
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
Considering that setup.py requires Python 3.7 or higher:
https://github.com/huggingface/transformers/blob/22f72185601d5167a747104b4aca102d0e92524c/setup.py#L417
it might make sense to also have black target only Python 3.7. Just a suggestion though.
Unsurprisingly, this PR may trigger quality check errors.
Not sure who to tag, so assuming for general repo things: @sgugger and @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18955/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18955",
"html_url": "https://github.com/huggingface/transformers/pull/18955",
"diff_url": "https://github.com/huggingface/transformers/pull/18955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18955.patch",
"merged_at": 1662759006000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18954
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18954/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18954/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18954/events
|
https://github.com/huggingface/transformers/issues/18954
| 1,367,684,874
|
I_kwDOCUB6oc5RhTcK
| 18,954
|
Update decision transformers to gym 0.26
|
{
"login": "RedTachyon",
"id": 19414946,
"node_id": "MDQ6VXNlcjE5NDE0OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/19414946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RedTachyon",
"html_url": "https://github.com/RedTachyon",
"followers_url": "https://api.github.com/users/RedTachyon/followers",
"following_url": "https://api.github.com/users/RedTachyon/following{/other_user}",
"gists_url": "https://api.github.com/users/RedTachyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RedTachyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RedTachyon/subscriptions",
"organizations_url": "https://api.github.com/users/RedTachyon/orgs",
"repos_url": "https://api.github.com/users/RedTachyon/repos",
"events_url": "https://api.github.com/users/RedTachyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/RedTachyon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @edbeeching ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### Feature request
We recently published a [new release of gym](https://github.com/openai/gym/releases/tag/0.26.0), which carries with it a bunch of breaking changes.
However, this is the last of the API changes, and it will be stable going forward. So it would be great to update the decision transformers to be compatible with that.
### Motivation
I'd say there are two main reasons to switch to the new API that goes with 0.26:
- The new API "makes sense" - we're still preparing a proper writeup of the rationale behind each decision, but they were deliberately made to support good research, flexibility and reproducibility.
- It will be supported in the future - we have many exciting features on the horizon (e.g. hardware-accelerated environments), which will be predicated on using the new API
### Your contribution
I'll be happy to help with the whole process, including contributing the PR.
From what I can see, most of the code in transformers is rather self-contained, so it would mainly be the `run_decision_transformer.py` example that needs updating, and then potentially other resources about decision transformers (like the blog), which would be separate PRs naturally
The biggest question would be how you want to handle versioning. My intuition is that it'd be best to update to gym 0.26 together with some "significant" version of transformers, like `4.22 -> 4.23` (or later, depending on how long it takes).
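For reference, the breaking changes in question reshape `reset` and `step`: `reset` now returns `(observation, info)` and `step` returns a 5-tuple splitting `done` into `terminated` and `truncated`. The sketch below illustrates the 0.26 calling convention with a toy stand-in environment rather than gym itself:

```python
class ToyEnv:
    """Toy environment following the gym 0.26 calling convention."""

    def __init__(self, horizon=3):
        self.horizon = horizon
        self.t = 0

    def reset(self, seed=None):
        self.t = 0
        return 0.0, {}  # (observation, info) - 0.26 style

    def step(self, action):
        self.t += 1
        obs, reward = float(self.t), 1.0
        terminated = False                   # task-defined end state
        truncated = self.t >= self.horizon   # time-limit cutoff
        return obs, reward, terminated, truncated, {}


env = ToyEnv()
obs, info = env.reset(seed=42)
done = False
while not done:
    obs, reward, terminated, truncated, info = env.step(action=0)
    done = terminated or truncated
```

Any rollout loop written against the old `(obs, reward, done, info)` 4-tuple would need to be rewritten along these lines.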
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18954/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18953
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18953/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18953/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18953/events
|
https://github.com/huggingface/transformers/pull/18953
| 1,367,609,809
|
PR_kwDOCUB6oc4-quLh
| 18,953
|
create Past CI results as tables for GitHub issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"One table\r\n\r\n| no. | error |\r\n|-:|:-|\r\n| 63 | AttributeError: module 'torch.jit._state' has no attribute '_clear_class_state' |\r\n| 38 | RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAILED at \"/pytorch/aten/src/ATen/native/cu |\r\n| 3 | OSError: gs555750 is not a valid git identifier (branch name, tag name or commit id) that exists for |\r\n| 3 | AssertionError: Couldn't trace module. |\r\n| 3 | RuntimeError: \"normal_kernel_cpu\" not implemented for 'BFloat16' |\r\n| 1 | RuntimeError: Caught RuntimeError in replica 0 on device 0. |",
"Another one\r\n\r\n| model | no. of errors | major error | count |\r\n|-:|-:|-:|-:|\r\n| bloom | 48 | RuntimeError: iter.device(arg).is_cuda() INTERNAL ASSERT FAI | 38 |\r\n| data2vec | 9 | AttributeError: module 'torch.jit._state' has no attribute ' | 9 |\r\n| clip | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| blenderbot | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| bart | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| blenderbot_small | 6 | AttributeError: module 'torch.jit._state' has no attribute ' | 6 |\r\n| canine | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| bigbird_pegasus | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| convnext | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| beit | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| albert | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| codegen | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| ctrl | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| convbert | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| bert_generation | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |\r\n| cpm | 3 | AttributeError: module 'torch.jit._state' has no attribute ' | 3 |",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
Update the Past CI error statistics report script to produce 2 GitHub issue tables :-)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18953/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18953",
"html_url": "https://github.com/huggingface/transformers/pull/18953",
"diff_url": "https://github.com/huggingface/transformers/pull/18953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18953.patch",
"merged_at": 1662988831000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18952
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18952/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18952/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18952/events
|
https://github.com/huggingface/transformers/issues/18952
| 1,367,512,315
|
I_kwDOCUB6oc5RgpT7
| 18,952
|
Resize position embeddings in PreTrainedModel
|
{
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have met the same issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am also facing the same issue",
"Is this issue resolved? Why was it closed? I need to use a different size, but I can't do it.",
"It was closed automatically because no one answered after one month :man_shrugging: \r\n",
"There is a PR for exactly the same issue, but only for some models.\r\n#13559\r\nI think you can modify resize_position_embeddings in your own model based on the example in this PR."
] | 1,662
| 1,698
| 1,666
|
CONTRIBUTOR
| null |
### Feature request
Add a method to resize position embeddings in PreTrainedModel, in the same way as there is `resize_token_embeddings` for word embeddings.
There are several ways to do that:
- retrain everything from scratch
- keep the pretrained embeddings but add new ones, trained from scratch, for the new positions (as done in `PreTrainedModel._get_resized_embeddings` if I understand correctly)
- same but initialize new positions by interpolating pretrained ones instead of random init
### Motivation
It would be nice to be able to resize position embeddings when the PreTrainedModel has too small `max_position_embeddings`.
I found several related issues:
- https://stackoverflow.com/questions/69820065/how-to-extend-a-pretrained-transformer-model-configured-with-small-max-position
- https://github.com/huggingface/transformers/issues/1978
### Your contribution
Willing to help :)
From what I can tell, most of the job is already done in `PreTrainedModel._get_resized_embeddings`
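A minimal sketch of the interpolation option above (plain Python lists stand in for the embedding matrix; this is an illustration of the idea, not an existing `transformers` API):

```python
def interpolate_position_embeddings(old_emb, new_len):
    """Resize a position-embedding table [old_len x dim] to new_len rows
    by linearly interpolating between neighbouring pretrained rows."""
    old_len, dim = len(old_emb), len(old_emb[0])
    new_emb = []
    for i in range(new_len):
        # Map new index i onto the old [0, old_len - 1] range.
        pos = i * (old_len - 1) / (new_len - 1) if new_len > 1 else 0.0
        lo = int(pos)
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        new_emb.append([(1 - frac) * old_emb[lo][d] + frac * old_emb[hi][d]
                        for d in range(dim)])
    return new_emb


old = [[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]]   # 3 positions, dim 2
new = interpolate_position_embeddings(old, 5)
# new[0] == old[0], new[-1] == old[-1], middle rows interpolated
```

Unlike random initialization of the extra positions, this keeps every new row on the manifold spanned by the pretrained embeddings, which is why interpolation is often preferred when extending `max_position_embeddings`.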
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18952/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18951
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18951/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18951/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18951/events
|
https://github.com/huggingface/transformers/issues/18951
| 1,367,449,506
|
I_kwDOCUB6oc5RgZ-i
| 18,951
|
Pipeline GPT-NeoX only returns "BB" from any prompt then nothing for subsequent calls of the pipeline
|
{
"login": "jaimu97",
"id": 14964859,
"node_id": "MDQ6VXNlcjE0OTY0ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/14964859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaimu97",
"html_url": "https://github.com/jaimu97",
"followers_url": "https://api.github.com/users/jaimu97/followers",
"following_url": "https://api.github.com/users/jaimu97/following{/other_user}",
"gists_url": "https://api.github.com/users/jaimu97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaimu97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaimu97/subscriptions",
"organizations_url": "https://api.github.com/users/jaimu97/orgs",
"repos_url": "https://api.github.com/users/jaimu97/repos",
"events_url": "https://api.github.com/users/jaimu97/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaimu97/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I noticed also changing the `eos_token_id` to `187` (`\\n`) increases time for a response to about 20 seconds (previously ~5s) and there is an increased load on the cards but the response is the same \"BB\"",
"cc @Narsil ",
"I unfortunately don't have a machine at hand big enough to run that code.\r\n\r\nDoes this happen with any other (smaller) model that we can try on ?\r\n\r\nIf not, the ideal thing would be to check if the problem is the `eos_token_id` being generated or not.\r\nUsing `pipeline(..., eos_token_id=None)` should deactivate it, and your generation should now actually generate 250 tokens.\r\n\r\nThe other options is that it DOES generate tokens, but they are somehow removed by the decoding process. In order to check for that I would add a `print` statement directly into the `postprocess` method of `TextGenerationPipeline` and see what's going on here.\r\n\r\nWould that help ?",
"Hi, I actually found that this was a hardware issue and forgot to close this issue. Any time I ran something that required both cards to work together I would get an `IO_PAGE_FAULT` error and to fix it I needed to disable IOMMU in my motherboard settings now it works. 😅"
] | 1,662
| 1,664
| 1,664
|
NONE
| null |
### System Info
CPU: AMD 4750G (Onboard video disabled)
OS: Ubuntu Server 20.04.5 LTS x86_64
RAM: 64GB
GPUs: 2x Tesla M40 24GB
Driver Version: 510.47.03
CUDA Version: 11.6
Pytorch stable 1.12.1 (Installed via Anaconda)
Both accelerate and transformers installed through pip
accelerate @ git+https://github.com/huggingface/accelerate@98823de572246d68cb31db94b60f7328ae9d551e
transformers @ git+https://github.com/huggingface/transformers@cfd623a859890c6d106610d3c688064eadc7bd61
### Who can help?
@patil-suraj @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi everyone!
I'm trying to run GPT-NeoX 20B with accelerate and `device-map="auto"` however, I can't seem to get the model to return anything other than "BB" for the first response and then nothing but an empty string after.
Steps to reproduce the behaviour:
1. Use latest git of `accelerate` and `transformers`
2. Setup a pipeline of `EleutherAI/gpt-neox-20b`
3. Call pipeline with any prompt
<details>
<summary>Code I am using</summary>
<pre>
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch, accelerate
print("Loading generator!")
generator = pipeline('text-generation',
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", device_map="auto",
torch_dtype=torch.float16),
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b"),
temperature=0.7, return_full_text=False, max_new_tokens=250)
print("Loaded generator!")
while True:
prompt = input("Enter prompt: ")
response = generator(prompt)
print(response)
</pre>
</details>
<details>
<summary>Example runs</summary>
<pre>
(hf) jai@tesla-server:~$ python gpt-neox-test2.py
Loading generator!
Loaded generator!
Enter prompt: Test!
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': 'BB'}]
Enter prompt: Testing again!
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': ''}]
Enter prompt: One more time!
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': ''}]
(hf) jai@tesla-server:~$ python gpt-neox-test2.py
Loading generator!
Loaded generator!
Enter prompt: This is a different prompt.
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': 'BB'}]
Enter prompt: yet every time the responses are still the same. :(
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
[{'generated_text': ''}]
</pre>
</details>
I'm not sure where this issue belongs as I'm still new to huggingface, but if I make a small change and use a model that can fit into a single card such as `EleutherAI/gpt-j-6B` and remove `device_map="auto"` there is no issue with generation (still with pipelines).
I have also tried using the `GPTNeoXTokenizerFast` class with the same results.
There are no errors (as in Python crashing); it just doesn't generate anything meaningful.
### Expected behavior
Response should be at the length of `max_new_tokens` (250), or at least more than one token and relevant to the prompt provided.
`[{'generated_text': 'BB'}]`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18951/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18950
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18950/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18950/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18950/events
|
https://github.com/huggingface/transformers/issues/18950
| 1,367,305,528
|
I_kwDOCUB6oc5Rf204
| 18,950
|
BertTokenizer slowly on latest versions
|
{
"login": "Piecer-plc",
"id": 109329809,
"node_id": "U_kgDOBoQ9kQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109329809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Piecer-plc",
"html_url": "https://github.com/Piecer-plc",
"followers_url": "https://api.github.com/users/Piecer-plc/followers",
"following_url": "https://api.github.com/users/Piecer-plc/following{/other_user}",
"gists_url": "https://api.github.com/users/Piecer-plc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Piecer-plc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Piecer-plc/subscriptions",
"organizations_url": "https://api.github.com/users/Piecer-plc/orgs",
"repos_url": "https://api.github.com/users/Piecer-plc/repos",
"events_url": "https://api.github.com/users/Piecer-plc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Piecer-plc/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @SaulLu ",
"Hi @PerformanceDetect ,\r\n\r\nThanks for sharing this benchmark with us! Indeed it would be interesting to find out what addition caused this slowdown, do you think you would have some time to investigate further?\r\n\r\nOtherwise, if your usage is speed sensitive, I recommend you to use the fast version of the tokenizer :hugs: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### System Info
Hi, I tested the execution time of my program with different `transformers` versions.
In my program, when the `transformers` version was **4.21.3**, the execution time was **7.12s**, but when I downgraded to **4.3.0**, the execution time was **2.79s**.
I recorded the system info for each version.
[4.21.3.txt](https://github.com/huggingface/transformers/files/9532526/4.21.3.txt)
[4.10.0.txt](https://github.com/huggingface/transformers/files/9532528/4.10.0.txt)
[4.3.0.txt](https://github.com/huggingface/transformers/files/9532527/4.3.0.txt)
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1: Code
```python
from transformers import BertTokenizer
import pandas as pd
import time
start = time.time()
train_df = pd.read_csv('train.csv')
train_df.head()
tweets = train_df['text'].values
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
def encode_sentence(s):
tokens = list(tokenizer.tokenize(s))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
max_len = 50
x_train = []
for tweet in tweets:
vec = encode_sentence(tweet)
x_train.append(vec[:max_len] + [0] * (max_len - len(vec)))
end = time.time()
print("Time:", end-start)
```
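For what it's worth, the padding/truncation step in the loop above can be isolated into a small pure-Python helper (hypothetical name `pad_or_truncate`); it is trivially fast, so any version-dependent slowdown most likely comes from `tokenizer.tokenize` itself rather than from this part of the loop:

```python
def pad_or_truncate(ids, max_len, pad_id=0):
    """Truncate a token-id list to max_len, then right-pad with pad_id."""
    ids = ids[:max_len]
    return ids + [pad_id] * (max_len - len(ids))

# Mirrors `vec[:max_len] + [0] * (max_len - len(vec))` from the loop above;
# note the padding term is an empty list whenever len(ids) >= max_len.
```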
2: Dataset
[train.csv](https://github.com/huggingface/transformers/files/9532537/train.csv)
### Expected behavior
The execution times should be the same across versions, or the latest version should be faster than older versions.
|Version|Execution time|
|--|--|
|4.21.3|7.12s|
|4.10.0|8.85s|
|4.3.0|2.79s|
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18950/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18949
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18949/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18949/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18949/events
|
https://github.com/huggingface/transformers/pull/18949
| 1,367,303,478
|
PR_kwDOCUB6oc4-ptcR
| 18,949
|
Fix M-CTC-T chunking
|
{
"login": "samwaterbury",
"id": 30158870,
"node_id": "MDQ6VXNlcjMwMTU4ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/30158870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samwaterbury",
"html_url": "https://github.com/samwaterbury",
"followers_url": "https://api.github.com/users/samwaterbury/followers",
"following_url": "https://api.github.com/users/samwaterbury/following{/other_user}",
"gists_url": "https://api.github.com/users/samwaterbury/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samwaterbury/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samwaterbury/subscriptions",
"organizations_url": "https://api.github.com/users/samwaterbury/orgs",
"repos_url": "https://api.github.com/users/samwaterbury/repos",
"events_url": "https://api.github.com/users/samwaterbury/events{/privacy}",
"received_events_url": "https://api.github.com/users/samwaterbury/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18949). All of your documentation changes will be reflected on that endpoint.",
"In addition to running tests, as a sanity check I ran the code snippet from [this blog post](https://huggingface.co/blog/asr-chunking):\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(model=\"facebook/wav2vec2-base-960h\")\r\npipe(\"very_long_file.mp3\", chunk_length_s=10)\r\n```\r\n\r\nand it works as expected. I also modified it for M-CTC-T and it still works:\r\n\r\n```python\r\nfrom transformers import AutomaticSpeechRecognitionPipeline, MCTCTForCTC, MCTCTProcessor\r\n\r\nmodel = MCTCTForCTC.from_pretrained(\"speechbrain/m-ctc-t-large\")\r\nprocessor = MCTCTProcessor.from_pretrained(\"speechbrain/m-ctc-t-large\")\r\npipe = AutomaticSpeechRecognitionPipeline(\r\n feature_extractor=processor.feature_extractor,\r\n tokenizer=processor.tokenizer,\r\n model=model,\r\n framework=\"pt\",\r\n)\r\n\r\nprint(pipe(\"very_long_file.mp3\", chunk_length_s=10))\r\n```",
"Thanks for the PR - this all looks good to me! cc @sanchit-gandhi and @Narsil for a quick second review :-) ",
"Thanks for the PR - that's a great addition! Generally this all looks more or less all good to me! Just a bit unsure about the changes in the pipeline, but ok for me since all tests pass. @Narsil what do you think? \r\n\r\nAlso cc @sanchit-gandhi for info",
"Thanks for the PR @samwaterbury! LGTM - happy with the changes to computing the inputs : logits ratio :-) ",
"@Narsil if you have 10 minutes it'd be super nice to get your review here :-) ",
"Thanks all! 🙂 @Narsil any chance for your 👀",
"> FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_large_model_pt_with_lm - AssertionError: 'ctc' != 'ctc_with_lm'\r\n\r\nThis is because you don't have `kenlm` installed. Since the pipeline makes this a soft error instead of a strong one, you're seeing a different error. \r\n\r\nI think we could update the tests to skip the ones requiring kenlm if you don't have it installed (but `kenlm` is a tricky dependency iirc)",
"Hi @samwaterbury ,\r\n\r\nSorry for the long delay before review, I'm pretty far behind on some stuff.\r\n\r\nThe first and biggest issue I have, is that I am not sure that the approach is **sound**.\r\nFor the pipeline to work correctly with chunking/striding we **need** a very big property, which is that every data point in audio space corresponds to a single logits.\r\n\r\nThis enables to do this: https://huggingface.co/blog/asr-chunking\r\n\r\nHowever, I fear that M-CTC models uses mel spectrogram, which means the spectrograms themselves are distributing single data points into multiple feature points in the sequence length. It probably depends on the parameters of the feature extractor, but it seems like something should be done.\r\n\r\nSince there is overlap in the feature space, it become very hard to attribute logits (hence letters) to their origin, and to *stitch* back together the original string when running inference on 2 different blocks with striding.\r\n\r\nThe current PR is actually I fear quite wrong, and is only correct by accident, because the `inputs_to_logits_ratio` is set to 1 and the audio is rather small, all the input audio ends up being in the first chunk, and since striding is broken, all the first chunk is used, and afterwards none of the chunks are used.\r\n\r\nIn order to see what I'm talking about, run the example with `batch_size=2` and print the `stride` within `postprocess`. You will see the numbers don't add up.\r\n\r\nI created another PR https://github.com/huggingface/transformers/pull/19338 to recreate what you have done here in hopefully a more correct version. Unfortunately, it seems the output is wrong, because the stiching cannot be done properly.\r\n\r\nI might have made a mistake in my calculations though so take it with a grain of salt.\r\nBut calculating the ratio like I'm doing in the PR is too incorrect, and that's why we rely on `config.inputs_to_logits_ratio` instead for wav2vec2. 
(The differences should be very minor since it's only about the padding options of convolutions and such, but it can lead to subtle bugs on some splitting).",
"Hi @Narsil sorry for the delay and thanks for the long and detailed review and response. What you're saying makes sense and I appreciate the time you took to write it all out!\r\n\r\nI'm going to close this PR since it looks like your PR is a better starting point for continued work. (I've also personally moved from M-CTC-T to Whisper since opening this PR 😄)"
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
## Summary
Chunking doesn't currently work for the M-CTC-T architecture, which this PR attempts to fix. The change is fairly minor but I am not an expert on how M-CTC-T works, so definitely open to feedback. In my usage, it works as intended.
Paging @patrickvonplaten since I think you originally implemented the chunking mechanism. :slightly_smiling_face:
I will add some PR comments explaining the changes.
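For context, the chunk/stride bookkeeping that the ASR pipeline's chunking relies on (and that the inputs-to-logits ratio feeds into) can be sketched roughly as follows. This is an illustrative pure-Python version, not the actual pipeline code:

```python
def chunk_positions(n_samples, chunk_len, stride):
    """Yield (start, end, left_stride, right_stride) windows covering n_samples.

    Neighbouring windows overlap by 2 * stride samples; when stitching the
    decoded text back together, each window drops its strided margins so that
    every input sample is attributed to exactly one window.
    """
    step = chunk_len - 2 * stride
    for start in range(0, n_samples, step):
        end = min(start + chunk_len, n_samples)
        left = 0 if start == 0 else stride
        right = 0 if end == n_samples else stride
        yield start, end, left, right
        if end == n_samples:
            break
```

Mapping each kept audio span back to a logits span is what requires a fixed inputs-to-logits ratio; the question raised in review is whether a mel-spectrogram front end preserves that per-sample correspondence.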
## Testing
I added one test to cover the ASR pipeline with M-CTC-T. I also ran these tests:
```shell
RUN_SLOW=True RUN_PIPELINE_TESTS=True pytest \
tests/models/mctct \
tests/pipelines/test_pipelines_automatic_speech_recognition.py \
tests/pipelines/test_pipelines_common.py
```
Some of these tests fail, but I was able to confirm they're also failing on the main branch. Here are the failures:
```
FAILED tests/pipelines/test_pipelines_common.py::CommonPipelineTest::test_iterator_data_tf - tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calling layer...
FAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf - tensorflow.python.framework.errors_impl.InternalError: Exception encountered when calli...
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_chunking_fast_with_lm - AssertionError: 'e<s>eh' != '<s> <s'
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_large_model_pt_with_lm - AssertionError: 'ctc' != 'ctc_with_lm'
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_with_lm_fast - AssertionError: 'ctc' != 'ctc_with_lm'
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_with_local_lm_fast - AssertionError: 'ctc' != 'ctc_with_lm'
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB tot...
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_normal_batched - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76...
FAILED tests/models/mctct/test_modeling_mctct.py::MCTCTModelIntegrationTest::test_inference_ctc_robust_batched - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76...
```
Some of these look like issues specific to my machine (CUDA out-of-memory errors).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18949/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18949",
"html_url": "https://github.com/huggingface/transformers/pull/18949",
"diff_url": "https://github.com/huggingface/transformers/pull/18949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18949.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18948
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18948/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18948/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18948/events
|
https://github.com/huggingface/transformers/pull/18948
| 1,367,288,374
|
PR_kwDOCUB6oc4-pqPs
| 18,948
|
Add support for conditional detr
|
{
"login": "DeppMeng",
"id": 26196079,
"node_id": "MDQ6VXNlcjI2MTk2MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/26196079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeppMeng",
"html_url": "https://github.com/DeppMeng",
"followers_url": "https://api.github.com/users/DeppMeng/followers",
"following_url": "https://api.github.com/users/DeppMeng/following{/other_user}",
"gists_url": "https://api.github.com/users/DeppMeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeppMeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeppMeng/subscriptions",
"organizations_url": "https://api.github.com/users/DeppMeng/orgs",
"repos_url": "https://api.github.com/users/DeppMeng/repos",
"events_url": "https://api.github.com/users/DeppMeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeppMeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI issue is caused by the fact that you have the following lines in src/transformers/models/auto/feature_extraction_auto.py:\r\n\r\n```\r\n(\"detr\", \"DetrFeatureExtractor\"),\r\n(\"detr\", \"DetrFeatureExtractor\"),\r\n```\r\n\r\n=> this should be updated to:\r\n\r\n```\r\n(\"detr\", \"DetrFeatureExtractor\"),\r\n(\"conditional_detr\", \"ConditionalDetrFeatureExtractor\"),\r\n```",
"Thanks a lot for all your work 🤗 merging!"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added code and documentation for the Conditional DETR model. The Conditional DETR files were created using the "add-new-model-like" feature of CookieCutter, based on the DETR code. All tests pass. One question: I have converted the pretrained weights; how should I hand them over to you?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/Atten4Vis/ConditionalDETR/issues/21
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18948/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18948",
"html_url": "https://github.com/huggingface/transformers/pull/18948",
"diff_url": "https://github.com/huggingface/transformers/pull/18948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18948.patch",
"merged_at": 1663832704000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18947
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18947/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18947/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18947/events
|
https://github.com/huggingface/transformers/issues/18947
| 1,367,211,839
|
I_kwDOCUB6oc5Rff8_
| 18,947
|
About the evaluation_loop function of trainer
|
{
"login": "macheng6",
"id": 37951216,
"node_id": "MDQ6VXNlcjM3OTUxMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/37951216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/macheng6",
"html_url": "https://github.com/macheng6",
"followers_url": "https://api.github.com/users/macheng6/followers",
"following_url": "https://api.github.com/users/macheng6/following{/other_user}",
"gists_url": "https://api.github.com/users/macheng6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/macheng6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macheng6/subscriptions",
"organizations_url": "https://api.github.com/users/macheng6/orgs",
"repos_url": "https://api.github.com/users/macheng6/repos",
"events_url": "https://api.github.com/users/macheng6/events{/privacy}",
"received_events_url": "https://api.github.com/users/macheng6/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"You can set `eval_accumulation_steps=100`(or even a smaller number) in TraningArgs to avoid GPU memory exceeding.\r\n\r\nHowever, I also think it is a bad design that the default `eval_accumulation_steps` is infinity actually, also.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,670
| 1,670
|
NONE
| null |
### Feature request
It is recommended to feed the logits of each batch into the `compute_metrics` function, then aggregate the per-batch results.
### Motivation
When I use the trainer's `evaluate` function, the `evaluation_loop` function concatenates all the logits and labels from the validation set and sends them to the `compute_metrics` function for evaluation. `preds_host` and `labels_host` are `torch.Tensor`s, so it is easy to exceed GPU memory.
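A batch-wise aggregation along the lines of this request could look like the following sketch (hypothetical names, not `Trainer` API): only scalar counters survive between batches, so memory stays constant regardless of dataset size.

```python
class StreamingAccuracy:
    """Accumulate a metric batch by batch instead of concatenating all logits."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, batch_preds, batch_labels):
        # Called once per eval batch with already-argmaxed predictions.
        self.correct += sum(int(p == y) for p, y in zip(batch_preds, batch_labels))
        self.total += len(batch_labels)

    def compute(self):
        # Called once at the end of the evaluation loop.
        return self.correct / self.total
```

As a workaround today, setting `eval_accumulation_steps` in `TrainingArguments` periodically moves the accumulated tensors off the GPU, which avoids the out-of-memory issue at the cost of extra transfers.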
### Your contribution
pass
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18947/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18946/events
|
https://github.com/huggingface/transformers/issues/18946
| 1,367,120,940
|
I_kwDOCUB6oc5RfJws
| 18,946
|
adamw_bnb_8bit is actually Adam8bit and doesn't respect TrainingArguments weight_decay
|
{
"login": "n9Mtq4",
"id": 5840141,
"node_id": "MDQ6VXNlcjU4NDAxNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5840141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n9Mtq4",
"html_url": "https://github.com/n9Mtq4",
"followers_url": "https://api.github.com/users/n9Mtq4/followers",
"following_url": "https://api.github.com/users/n9Mtq4/following{/other_user}",
"gists_url": "https://api.github.com/users/n9Mtq4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n9Mtq4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n9Mtq4/subscriptions",
"organizations_url": "https://api.github.com/users/n9Mtq4/orgs",
"repos_url": "https://api.github.com/users/n9Mtq4/repos",
"events_url": "https://api.github.com/users/n9Mtq4/events{/privacy}",
"received_events_url": "https://api.github.com/users/n9Mtq4/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Would you like to make a PR to fix this?",
"Actually upon further review, the weight decay is being set correctly. I didn't fully understand the purpose of these lines that set the weight decay for only some parameters. This bypasses the non-negative check in the constructor, but does use the correct value when updating the parameters.\r\n\r\nhttps://github.com/huggingface/transformers/blob/e6f221c8d4829c9a3bca699c18a32043ab21f7a0/src/transformers/trainer.py#L1057-L1066"
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.60-1-MANJARO-x86_64-with-glibc2.36
- Python version: 3.10.5
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.13.0.dev20220903+cu116 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (gpu)
- Jax version: 0.3.10
- JaxLib version: 0.3.10
- Using GPU in script?: Yes (Ampere)
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/bb6f6d53386bf2340eead6a8f9320ce61add3e96/src/transformers/trainer.py#L1137-L1142
That snippet sets the optimizer when using adamw_bnb_8bit, but it uses Adam8bit instead of AdamW8bit. This isn't a problem in and of itself as both implementations in bitsandbytes are the same except for the weight_decay parameter (see [AdamW8bit](https://github.com/TimDettmers/bitsandbytes/blob/2e630b55f51d454f3bd723dffda68a07ef93190c/bitsandbytes/optim/adamw.py#L38-L64), [Adam8bit](https://github.com/TimDettmers/bitsandbytes/blob/2e630b55f51d454f3bd723dffda68a07ef93190c/bitsandbytes/optim/adam.py#L46-L72)).
However, the trainer doesn't pass the `weight_decay` parameter to the `Adam8bit` constructor, leaving it at its default of 0, so the behavior is that of Adam rather than AdamW.
```python
from transformers import TrainingArguments, Trainer, AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling
from datasets import Dataset
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
toy_dataset = Dataset.from_dict({'text': ['a', 'b', 'c']})
toy_dataset = toy_dataset.map(lambda examples: tokenizer(examples['text']))
args = TrainingArguments(output_dir='/tmp/outdir', optim='adamw_bnb_8bit', weight_decay=-0.1)
trainer = Trainer(args=args, model=model, train_dataset=toy_dataset,
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer))
trainer.train()
```
The code runs even with a negative `weight_decay`, indicating that `weight_decay` isn't being passed through. I've confirmed this with a print statement inside the optimizer.
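For context, the trainer builds per-parameter-group weight decay rather than relying on the optimizer constructor's argument; a rough illustrative sketch of that pattern (the `no_decay` names are examples, and parameter *names* are collected here just to keep the sketch self-contained):

```python
def build_param_groups(named_params, weight_decay, no_decay=("bias", "LayerNorm.weight")):
    """Split parameters into a decayed group and an undecayed group.

    The optimizer can then be constructed with its default weight_decay,
    while the real value lives only inside the group dicts. A validity check
    performed in the constructor therefore never sees the group-level value.
    """
    decay, no_dec = [], []
    for name, _param in named_params:
        (no_dec if any(nd in name for nd in no_decay) else decay).append(name)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_dec, "weight_decay": 0.0},
    ]
```

This would explain why a negative `weight_decay` raises no error at construction time even if the value is ultimately applied per group.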
### Expected behavior
I expect the weight_decay parameter to be passed to the constructor of `Adam8bit`.
In the case of the reproduction code, an exception `ValueError: Invalid weight_decay value: -0.1` should be raised from the check [here](https://github.com/TimDettmers/bitsandbytes/blob/2e630b55f51d454f3bd723dffda68a07ef93190c/bitsandbytes/optim/optimizer.py#L325). But `weight_decay` isn't set correctly and remains set at 0.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18946/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18945/events
|
https://github.com/huggingface/transformers/pull/18945
| 1,366,955,977
|
PR_kwDOCUB6oc4-oikr
| 18,945
|
Removed issue in wav2vec link
|
{
"login": "chrisemezue",
"id": 36100251,
"node_id": "MDQ6VXNlcjM2MTAwMjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/36100251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisemezue",
"html_url": "https://github.com/chrisemezue",
"followers_url": "https://api.github.com/users/chrisemezue/followers",
"following_url": "https://api.github.com/users/chrisemezue/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisemezue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisemezue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisemezue/subscriptions",
"organizations_url": "https://api.github.com/users/chrisemezue/orgs",
"repos_url": "https://api.github.com/users/chrisemezue/repos",
"events_url": "https://api.github.com/users/chrisemezue/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisemezue/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Fix connected to [this issue](https://github.com/huggingface/transformers/issues/18944)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18945/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18945",
"html_url": "https://github.com/huggingface/transformers/pull/18945",
"diff_url": "https://github.com/huggingface/transformers/pull/18945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18945.patch",
"merged_at": 1663012760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18944/events
|
https://github.com/huggingface/transformers/issues/18944
| 1,366,946,102
|
I_kwDOCUB6oc5RefE2
| 18,944
|
wrong wav2vec link in audio classification blog
|
{
"login": "chrisemezue",
"id": 36100251,
"node_id": "MDQ6VXNlcjM2MTAwMjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/36100251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisemezue",
"html_url": "https://github.com/chrisemezue",
"followers_url": "https://api.github.com/users/chrisemezue/followers",
"following_url": "https://api.github.com/users/chrisemezue/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisemezue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisemezue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisemezue/subscriptions",
"organizations_url": "https://api.github.com/users/chrisemezue/orgs",
"repos_url": "https://api.github.com/users/chrisemezue/repos",
"events_url": "https://api.github.com/users/chrisemezue/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisemezue/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Closed by #18945"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
[This wonderful blog](https://huggingface.co/docs/transformers/main/en/tasks/audio_classification#preprocess) has an issue with the wav2vec model card link. See below:
> 2. Check the sampling rate of the audio file matches the sampling rate of the audio data a model was pretrained with. You can find this information on the Wav2Vec2 `[model card]((https://huggingface.co/facebook/wav2vec2-base))`.
The right link should be https://huggingface.co/facebook/wav2vec2-base
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18944/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18943/events
|
https://github.com/huggingface/transformers/pull/18943
| 1,366,896,288
|
PR_kwDOCUB6oc4-oVov
| 18,943
|
[WIP] Implement LayoutLMv2ForRelationExtraction (continues #15173)
|
{
"login": "quasimik",
"id": 3389894,
"node_id": "MDQ6VXNlcjMzODk4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3389894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quasimik",
"html_url": "https://github.com/quasimik",
"followers_url": "https://api.github.com/users/quasimik/followers",
"following_url": "https://api.github.com/users/quasimik/following{/other_user}",
"gists_url": "https://api.github.com/users/quasimik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quasimik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quasimik/subscriptions",
"organizations_url": "https://api.github.com/users/quasimik/orgs",
"repos_url": "https://api.github.com/users/quasimik/repos",
"events_url": "https://api.github.com/users/quasimik/events{/privacy}",
"received_events_url": "https://api.github.com/users/quasimik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18943). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nThanks for your work. However, we don't want to add LayoutLMv2ForRelationExtraction with the design that the Microsoft authors created (as the model returns lists, rather than fixed size tensors). The latter is necessary for the model to work on a distributed environment, and for things like ONNX. See comments at #19120",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,668
| 1,668
|
NONE
| null |
# What does this PR do?
Continues the good work in #15173 to add LayoutLMv2ForRelationExtraction, as implemented in [Microsoft's UniLM repo](https://github.com/microsoft/unilm/blob/152193af4b295ae39cf0c2a492da3ee5cc5abe29/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py#L895-L937)
Tests are not written yet, and I might need help with that.
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18943/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18943",
"html_url": "https://github.com/huggingface/transformers/pull/18943",
"diff_url": "https://github.com/huggingface/transformers/pull/18943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18943.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18942/events
|
https://github.com/huggingface/transformers/issues/18942
| 1,366,814,411
|
I_kwDOCUB6oc5Rd-7L
| 18,942
|
Output scores in TranslationPipeline
|
{
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https://api.github.com/users/d-miketa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-miketa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-miketa/subscriptions",
"organizations_url": "https://api.github.com/users/d-miketa/orgs",
"repos_url": "https://api.github.com/users/d-miketa/repos",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-miketa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"This seems a nice addition ! \r\n\r\nSame here, I have limited bandwidth at the moment.\r\n\r\nNotes for anyone wanting to implement this.\r\n\r\nThe goal is NOT to support every single feature `generate` supports in terms of return, only the one that make sense for users not knowing about ML, and not being power users (anyone that knows enough, should be able to drop down from pipelines and using lower level objects to get full control, or override the pipeline by subclassing). \r\n1 score per proposed translation fits that model.\r\n\r\nA counter-example would be: `score` per token was asked by users on `bloom` (it's `text-generation` not `translation`, but since it works similarly under the hood I'm leaving breadcrumbs. This for instance is outside of the scope of pipelines, since tokens are a ML construct, and users without any ML background will have trouble understanding what they are. In addition, returning such things change the return type which is never ideal when function/classes return types depend on arguments. (and subclassing should be easy enough to add support for anyone that so desires).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
### Feature request
Please consider adding scores to the output dict of TranslationPipeline.
https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/pipelines#transformers.TranslationPipeline.__call__
### Motivation
It'd be nice to see the scores/probabilities of translated sentences rather than just an ordered list of the top k beam search outputs. In many cases the scores can be interpreted as a measure of confidence in the prediction and this is valuable, especially in production.
Pipelines are natively supported by Seldon Deploy so this would also improve that integration.
### Your contribution
I don't currently have capacity to submit a PR.
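One common way to turn beam-search sequence scores (log-probabilities) into the kind of per-candidate confidence values described above is a softmax over the top-k candidates. A minimal sketch, independent of the actual pipeline API (`scores_to_probs` is a hypothetical helper, not part of `transformers`):

```python
import math

def scores_to_probs(log_scores):
    """Normalize per-candidate log-scores into probabilities via softmax.

    Subtracting the max first keeps the exponentials numerically stable.
    """
    m = max(log_scores)
    exps = [math.exp(s - m) for s in log_scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical log-scores for 3 beam-search translation candidates:
probs = scores_to_probs([-0.5, -1.2, -2.0])
print(probs)
```

The resulting probabilities sum to 1 and preserve the candidate ordering, so they can be attached directly to each entry of the top-k output list.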
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18942/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18942/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18941/events
|
https://github.com/huggingface/transformers/pull/18941
| 1,366,811,967
|
PR_kwDOCUB6oc4-oDOa
| 18,941
|
Update translation requests contact
|
{
"login": "NimaBoscarino",
"id": 6765188,
"node_id": "MDQ6VXNlcjY3NjUxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6765188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NimaBoscarino",
"html_url": "https://github.com/NimaBoscarino",
"followers_url": "https://api.github.com/users/NimaBoscarino/followers",
"following_url": "https://api.github.com/users/NimaBoscarino/following{/other_user}",
"gists_url": "https://api.github.com/users/NimaBoscarino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NimaBoscarino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NimaBoscarino/subscriptions",
"organizations_url": "https://api.github.com/users/NimaBoscarino/orgs",
"repos_url": "https://api.github.com/users/NimaBoscarino/repos",
"events_url": "https://api.github.com/users/NimaBoscarino/events{/privacy}",
"received_events_url": "https://api.github.com/users/NimaBoscarino/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Doesn't look like I have merge privileges on this repo, could you merge it @sgugger?",
"Just waiting for the last green tick and will do so!"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
Updates the contact for translation requests to GuggerSylvain (@sgugger - if you're alright with that!)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Documentation: @sgugger, @osanseviero
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18941/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18941",
"html_url": "https://github.com/huggingface/transformers/pull/18941",
"diff_url": "https://github.com/huggingface/transformers/pull/18941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18941.patch",
"merged_at": 1662707724000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18940/events
|
https://github.com/huggingface/transformers/issues/18940
| 1,366,741,431
|
I_kwDOCUB6oc5RdtG3
| 18,940
|
Getting the heat map out of VILT (Figure 4 in the paper)
|
{
"login": "Ngheissari",
"id": 83084391,
"node_id": "MDQ6VXNlcjgzMDg0Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/83084391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ngheissari",
"html_url": "https://github.com/Ngheissari",
"followers_url": "https://api.github.com/users/Ngheissari/followers",
"following_url": "https://api.github.com/users/Ngheissari/following{/other_user}",
"gists_url": "https://api.github.com/users/Ngheissari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ngheissari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ngheissari/subscriptions",
"organizations_url": "https://api.github.com/users/Ngheissari/orgs",
"repos_url": "https://api.github.com/users/Ngheissari/repos",
"events_url": "https://api.github.com/users/Ngheissari/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ngheissari/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nYes it's totally possible to re-create that. Basically it comes down to translating [this script](https://github.com/dandelin/ViLT/blob/master/demo.py) to a Gradio demo.\r\n\r\nI'm marking this as \"good first issue\" as it seems fairly straightforward.",
"Hi, I would like to work on it",
"@NielsRogge Is this issue still open ? I would to like to work on this. Can you assign this issue to me ?\r\n",
"@NielsRogge , is this issue still open I would like to contribute to this ",
"Yeah, I guess. I think you guys can go ahead and open PR.",
"> @NielsRogge , is this issue still open I would like to contribute to this\r\n\r\nHey Rajath ! Would love to work on this issue with you. Do you mind working with me on this? Kinda new to open source contribution. \r\n",
"Is anyone still working on this? I can help",
"@NielsRogge is this supposed to be implemented as a method on all vilts or a function that takes a vilt model as input and launches the gradio demo? ",
"You can just implement a Gradio demo and host it on https://huggingface.co/spaces.",
"> You can just implement a Gradio demo and host it on https://huggingface.co/spaces.\r\n\r\nI've made one https://huggingface.co/spaces/MikailDuzenli/vilt_demo , I just implemented the demo of the model itself for the moment but I'm trying to add the heatmap (help is welcome).",
"> > You can just implement a Gradio demo and host it on https://huggingface.co/spaces.\r\n> \r\n> I've made one https://huggingface.co/spaces/MikailDuzenli/vilt_demo , I just implemented the demo of the model itself for the moment but I'm trying to add the heatmap (help is welcome).\r\n\r\nI could help",
"Very cool demo @MikailINTech! Awesome work. Final step is indeed visualizing the heatmap",
"> Very cool demo @MikailINTech! Awesome work. Final step is indeed visualizing the heatmap\r\n\r\nThank you ! Just finished adding the heatmap. Is there a way I can have this issue mark as resolved ?",
"Really cool! Although I'd also include the entire image in the result (not just the heat map), to compare.\r\n\r\nThen I'll close this issue!",
"Thanks @NielsRogge for the suggestion, now one can see the image and the heatmap. I hope that it's what @Ngheissari was looking for ",
"Awesome! Closing this issue.\r\n\r\nI tweeted about it here :) https://twitter.com/NielsRogge/status/1580246704011370496"
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### Feature request
I would like to get the heatmap from ViLT (Visualizations of transportation plan of word patch alignment).
### Motivation
It is useful for debug and also might be useful for other applications. It is shown in Figure 4 in the paper: https://arxiv.org/pdf/2102.03334.pdf
### Your contribution
NA
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18940/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18939/events
|
https://github.com/huggingface/transformers/pull/18939
| 1,366,425,145
|
PR_kwDOCUB6oc4-muAs
| 18,939
|
RFC: Replace custom TF embeddings by Keras embeddings
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"My thoughts:\r\n\r\n- I think the behaviour of `tf.name_scope` is intended and stable, even if it's not documented (TF documentation isn't always great). I think we can rely on that safely, and it's a lot better than using compatibility methods from `v1`.\r\n- I agree that how we're doing this right now isn't great, and this code is a big improvement.\r\n- I think how we use `name_scope` is still a little problematic. However, I don't want to make any big breaking changes there right now because the PT codebase will probably also change soon to use whatever new pickle-free state dict save format the PT devs come up with!\r\n\r\nSo overall, I think this is a good addition that cleans up a longstanding source of issues in the code, and shouldn't take too long to implement across the codebase."
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
This is an RFC with a code example in the PR -- my primary goal is not to get the PR approved, but rather to discuss an improvement to our TF codebase, with an example that passes all tests.
## Context
In our TF implementation of models with embedding layers, we rely on two custom-made classes:
1. [`TFSharedEmbeddings`](https://github.com/huggingface/transformers/blob/bb6f6d53386bf2340eead6a8f9320ce61add3e96/src/transformers/modeling_tf_utils.py#L2611) -- a custom embedding layer whose added benefit is the ability to also use it as a dense layer;
2. [`TFWrappedEmbeddings`](https://github.com/huggingface/transformers/blob/bb6f6d53386bf2340eead6a8f9320ce61add3e96/src/transformers/modeling_tf_utils.py#L2838) -- used to manipulate the scope of the weights, which would normally depend on the layer where the weights are first used in an operation. Used with tied weight embeddings.
Problems with this setup include:
1. Users can't use the expected Keras tools to handle embeddings;
2. Relies on TF1 compatibility to set the right name to the weights (`tf.compat.v1.variable_scope`);
3. Resizing the embeddings, a major source of bugs atm, uses complex logic that consists in manipulating `tf.Variable`.
## Proposed change
The proposal is straightforward: replace `TFSharedEmbeddings` by `tf.keras.layer.Embedding`, remove `TFWrappedEmbeddings`, and make the necessary adaptations. A few details to keep in mind (and that you can browse in the code):
1. There is a whole new code path for resizing the embeddings. Instead of `if/else` in the original functions, changed functions were rewritten with `_v2` prepended to their name (which should also facilitate the transition). You can see that the new functions are simpler than the originals;
2. Giving the right name to the embeddings (so we can load existing weights) was the hardest part. TF had limited maneuverability here. To pull it off, I relied on UNDOCUMENTED behavior of `tf.name_scope`. Normally, `tf.name_scope` appends to the existing scope -- if the scope for the current layer is `foo`, weights are in the form of `foo/weights:0`; if we add a context manager `tf.name_scope("bar")`, weights will be in the form of `foo/bar/weights:0`. However, [if the argument of `tf.name_scope` ends with `/`](https://github.com/tensorflow/tensorflow/blob/359c3cdfc5fabac82b3c70b3b6de2b0a8c16874f/tensorflow/python/framework/ops.py#L6984), then it will be a stand-alone name scope. Taking the previous example, with `tf.name_scope("bar/")`, weights will be in the form of `bar/weights:0`. This behavior has been in the TF codebase since its first commit (>7 yrs), and replacing `TFWrappedEmbeddings` relies on this behavior;
3. The existing TF Bart assumes the input/output embeddings are tied, which PT Bart does not assume. I've not changed this part, so the example you can see in this PR is for models with tied weights;
4. If you open PT Bart and compare side by side, you'll see that the implementations on the two frameworks are now more similar :)
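The trailing-slash behavior described in point 2 can be illustrated with a small pure-Python sketch. `resolve_name_scope` is a hypothetical simplification of the naming rule, not TensorFlow's actual implementation:

```python
def resolve_name_scope(current_scope: str, name: str) -> str:
    """Simplified sketch of the tf.name_scope naming rule discussed above.

    A name ending in "/" is treated as a stand-alone (absolute) scope;
    any other name is appended to the current scope.
    """
    if name.endswith("/"):
        return name
    return f"{current_scope}{name}/"

# Weights created under scope "foo/":
print(resolve_name_scope("foo/", "bar"))   # appended  -> names like foo/bar/weights:0
print(resolve_name_scope("foo/", "bar/"))  # absolute  -> names like bar/weights:0
```

This is what lets a Keras embedding layer receive the exact weight name an old `TFWrappedEmbeddings` checkpoint expects, regardless of which layer the embedding is nested in.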
I estimate about 1-2 weeks' worth of work to propagate the change, which includes:
1. Replace all `TFSharedEmbeddings` and `TFWrappedEmbeddings`;
2. Handle edge cases -- `resize_token_embeddings` is not implemented/is broken (and untested) in several recent TF models;
3. Remove/deprecate old code after 1 and 2 are done.
## Pros/cons
(+) Simpler and smaller codebase, especially for the models;
(+) TF model code closer to PT's;
(+) Keras-native embeddings ( = users and contributors can be more productive);
(+) `resize_token_embeddings` usable in all models;
(-) Time spent refactoring is time not spent building new things;
(-) The solution still relies on named scopes for cross-framework weight matching, which is hacky.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18939/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18939",
"html_url": "https://github.com/huggingface/transformers/pull/18939",
"diff_url": "https://github.com/huggingface/transformers/pull/18939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18939.patch",
"merged_at": 1662806090000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18938/events
|
https://github.com/huggingface/transformers/pull/18938
| 1,366,411,927
|
PR_kwDOCUB6oc4-mrDT
| 18,938
|
Update default revision for document-question-answering
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Gentle nudge @Narsil @NielsRogge"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
# What does this PR do?
Prior to this change, users needed to instantiate a tokenizer themselves while using `impira/layoutlm-document-qa` to set the `add_prefix_space=True` parameter. I made this the default in the tokenizer's config [here](https://huggingface.co/impira/layoutlm-document-qa/commit/52e01b37ccf248953eb527c1d96e9ec1750f3c3c), and this change simply updates the pinned revision to reference it.
After this change, the following commands work:
```
In [1]: from transformers import AutoTokenizer, pipeline
In [2]: nlp = pipeline('document-question-answering')
In [3]: nlp(
...: "https://templates.invoicehome.com/invoice-template-us-neat-750px.png",
...: "What is the invoice number?"
...: )
Out[3]: {'score': 0.9998127222061157, 'answer': 'us-001', 'start': 15, 'end': 15}
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18938/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18938",
"html_url": "https://github.com/huggingface/transformers/pull/18938",
"diff_url": "https://github.com/huggingface/transformers/pull/18938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18938.patch",
"merged_at": 1663077843000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18937/events
|
https://github.com/huggingface/transformers/pull/18937
| 1,366,329,305
|
PR_kwDOCUB6oc4-mYhG
| 18,937
|
Exit early in load if no weights are in the sharded state dict
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
As suggested by @stas00 in #18911, this PR checks whether there are any parameters in the state dict to load in the current module and exits early if there are none. This might be useful when loading a huge model with a lot of shards.
@stas00 could you try and see if there is a gain with this or not?
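The idea can be sketched in a few lines. This is a minimal illustration only; `module_has_params_in_shard` is a hypothetical helper, not the actual `transformers` code:

```python
def module_has_params_in_shard(state_dict: dict, prefix: str) -> bool:
    """Return True if the (sharded) state dict contains any parameter
    belonging to the submodule identified by `prefix`.

    When this returns False, the loader can skip recursing into that
    submodule entirely instead of visiting every one of its children.
    """
    return any(key.startswith(prefix) for key in state_dict)

# With a shard that only holds encoder weights, decoder submodules
# can be skipped up front:
shard = {"encoder.layer.0.weight": 0, "encoder.layer.0.bias": 0}
print(module_has_params_in_shard(shard, "encoder."))  # True
print(module_has_params_in_shard(shard, "decoder."))  # False -> exit early
```

For a model with many shards, each shard typically touches only a small fraction of the module tree, so skipping the empty subtrees avoids a large amount of per-module bookkeeping.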
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18937/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18937",
"html_url": "https://github.com/huggingface/transformers/pull/18937",
"diff_url": "https://github.com/huggingface/transformers/pull/18937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18937.patch",
"merged_at": 1662750429000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18936/events
|
https://github.com/huggingface/transformers/issues/18936
| 1,366,020,902
|
I_kwDOCUB6oc5Ra9Mm
| 18,936
|
I render only black screen
|
{
"login": "nedzone",
"id": 14924529,
"node_id": "MDQ6VXNlcjE0OTI0NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14924529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nedzone",
"html_url": "https://github.com/nedzone",
"followers_url": "https://api.github.com/users/nedzone/followers",
"following_url": "https://api.github.com/users/nedzone/following{/other_user}",
"gists_url": "https://api.github.com/users/nedzone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nedzone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nedzone/subscriptions",
"organizations_url": "https://api.github.com/users/nedzone/orgs",
"repos_url": "https://api.github.com/users/nedzone/repos",
"events_url": "https://api.github.com/users/nedzone/events{/privacy}",
"received_events_url": "https://api.github.com/users/nedzone/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hmmm weird error! If you don't have a lot of items in your cache, I would recommend removing it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I noticed that this issue might be stale? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,668
| 1,668
|
NONE
| null |
### System Info
I get the following message
`torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E41C550>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E4328B0>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 6 files to the new cache system
0%| | 0/6 [00:01<?, ?it/s]
There was a problem when trying to move your cache:
File "transformers\utils\hub.py", line 1077, in <module>
File "transformers\utils\hub.py", line 1040, in move_cache
File "transformers\utils\hub.py", line 997, in move_to_new_cache
File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink`
My GPU NVIDIA GeForce GTX 1660 Ti
Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
16 GB DDR4 RAM
Can you please help me solve this?
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have opened the program and checked it
### Expected behavior
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E41C550>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x0000024B2E4328B0>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 6 files to the new cache system
0%| | 0/6 [00:01<?, ?it/s]
There was a problem when trying to move your cache:
File "transformers\utils\hub.py", line 1077, in <module>
File "transformers\utils\hub.py", line 1040, in move_cache
File "transformers\utils\hub.py", line 997, in move_to_new_cache
File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18936/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18935/events
|
https://github.com/huggingface/transformers/issues/18935
| 1,365,929,481
|
I_kwDOCUB6oc5Ram4J
| 18,935
|
[ViT] Add note about interpolation of position encodings
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nI'm finding it hard to set interpolate_pos_encoding = True by redefining the model's forward method.\r\nCould you make a brief step-by-step example of how to do so in order to train the model, and not just make a forward pass?\r\n\r\nI thought it was just about modifying the model.config, which feeds the module parameters, as one can set output_attentions = True for example, but I see it is not the same for interpolate_pos_encoding.\r\n\r\nThank you!\r\nP.S.: I have posted this same question on the Hugging Face forum",
"Could you clarify? The only thing you need to change is pass `interpolate_pos_encoding=True` to the forward when training the model (no need to redefine the forward method).\r\n\r\nThis issue was fixed in #19103, therefore I'm closing this issue.",
"> \r\nI fine-tune the model using the Trainer class from transformers, not by calling the forward method directly, so I can't find a way to set interpolate_pos_encoding to True in that case.\r\n",
"Hi @NielsRogge \r\nIs there any method to set the `interpolate_pos_encoding` to `True` while using the Trainer API?\r\nNot able to find a method to pass it as a parameter to the `forward` method without redefining the `forward` method.\r\n\r\n\r\nAlso - if a section to explain this step is added in the example notebooks - would be really helpful for the community.",
"Pinging @sgugger here - the question is whether one can set a boolean argument in the forward of a model to `True` when using the Trainer API.",
"No, that is not possible.",
"Sad to hear. Many code examples use the Trainer API. \r\n\r\nWould be great to bring this feature to the trainer API.",
"> Could you clarify? The only thing you need to change is pass `interpolate_pos_encoding=True` to the forward when training the model (no need to redefine the forward method).\r\n> \r\n> This issue was fixed in #19103, therefore I'm closing this issue.\r\n\r\nSorry to bother. Currently there are many models based on ViT that do not support this argument (e.g. CLIP, MAE, SAM).\r\nAnd `interpolate_pos_encoding` is applied at a very early step, which makes it hard to hijack. What I am doing is modifying the source code (pasting the interpolate function and running it on the pos_embeddings). Is there any suggestion for doing this more elegantly, since they are all ViTs in principle?\r\nI also tried loading the models into `ViTModel`; some are OK (MAE), but for SAM and CLIP it cannot match the weights."
] | 1,662
| 1,699
| 1,664
|
CONTRIBUTOR
| null |
### Feature request
We should add a note to the docs on the fact that, in order to fine-tune ViT on higher resolution (e.g. 512x512), one can set `interpolate_pos_encoding=True` in the forward of the model.
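As background for the note, the flag exists because ViT's learned position embeddings are tied to the patch-grid size seen at pre-training; the arithmetic can be sketched with typical ViT numbers (illustrative, not taken from a specific checkpoint):

```python
def num_patches(image_size: int, patch_size: int) -> int:
    # ViT splits the image into non-overlapping patches; one learned
    # position embedding exists per patch (plus the [CLS] token).
    return (image_size // patch_size) ** 2

assert num_patches(224, 16) == 196    # typical pre-training resolution
assert num_patches(512, 16) == 1024   # fine-tuning at 512x512 yields 1024
# positions, so the 196 pre-trained embeddings must be interpolated to the
# new grid -- which is what interpolate_pos_encoding=True enables.
```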
### Motivation
This thread on the forum: https://discuss.huggingface.co/t/fine-tuning-image-transformer-on-higher-resolution/22623/4
### Your contribution
I'll take this!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18935/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18935/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18934/events
|
https://github.com/huggingface/transformers/pull/18934
| 1,365,876,886
|
PR_kwDOCUB6oc4-kyMe
| 18,934
|
Neptune.ai integration improvements
|
{
"login": "Raalsky",
"id": 917619,
"node_id": "MDQ6VXNlcjkxNzYxOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/917619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raalsky",
"html_url": "https://github.com/Raalsky",
"followers_url": "https://api.github.com/users/Raalsky/followers",
"following_url": "https://api.github.com/users/Raalsky/following{/other_user}",
"gists_url": "https://api.github.com/users/Raalsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raalsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raalsky/subscriptions",
"organizations_url": "https://api.github.com/users/Raalsky/orgs",
"repos_url": "https://api.github.com/users/Raalsky/repos",
"events_url": "https://api.github.com/users/Raalsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raalsky/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- Neptune Run creation for every training, mostly affects HPO
- Logging model checkpoints
- Support for HPO and DDP
- Better accessibility of `neptune` across the codebase (`report_to all` etc.)
- Docs improved - added an entry about `NeptuneCallback`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Neptune: @shnela
Not sure who to call in the context of integrations:
- trainer: @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18934/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18934",
"html_url": "https://github.com/huggingface/transformers/pull/18934",
"diff_url": "https://github.com/huggingface/transformers/pull/18934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18934.patch",
"merged_at": 1662737855000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18933/events
|
https://github.com/huggingface/transformers/pull/18933
| 1,365,862,256
|
PR_kwDOCUB6oc4-ku56
| 18,933
|
Simplify `is_pad_token_not_equal_to_eos_token_id`
|
{
"login": "ekagra-ranjan",
"id": 3116519,
"node_id": "MDQ6VXNlcjMxMTY1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekagra-ranjan",
"html_url": "https://github.com/ekagra-ranjan",
"followers_url": "https://api.github.com/users/ekagra-ranjan/followers",
"following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}",
"gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions",
"organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs",
"repos_url": "https://api.github.com/users/ekagra-ranjan/repos",
"events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes a redundant boolean check, which simplifies the expression for `is_pad_token_not_equal_to_eos_token_id`.
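The shape of such a simplification can be illustrated with an exhaustive truth-table check (illustrative only — the variable names here are generic, not the exact library expression): the pattern `A or (not A and B)` always reduces to `A or B`.

```python
def verbose(a: bool, b: bool) -> bool:
    # Pattern before simplification: the "not a" guard is redundant,
    # because when a is True the whole expression is already True.
    return a or (not a and b)

def simplified(a: bool, b: bool) -> bool:
    return a or b

# Exhaustive check over all boolean inputs confirms equivalence.
for a in (False, True):
    for b in (False, True):
        assert verbose(a, b) == simplified(a, b)
```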
## Who can review?
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18933/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18933",
"html_url": "https://github.com/huggingface/transformers/pull/18933",
"diff_url": "https://github.com/huggingface/transformers/pull/18933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18933.patch",
"merged_at": 1662738296000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18932/events
|
https://github.com/huggingface/transformers/pull/18932
| 1,365,607,581
|
PR_kwDOCUB6oc4-j24P
| 18,932
|
Fix LayoutXLM wrong link in README
|
{
"login": "Devlee247",
"id": 64190071,
"node_id": "MDQ6VXNlcjY0MTkwMDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/64190071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Devlee247",
"html_url": "https://github.com/Devlee247",
"followers_url": "https://api.github.com/users/Devlee247/followers",
"following_url": "https://api.github.com/users/Devlee247/following{/other_user}",
"gists_url": "https://api.github.com/users/Devlee247/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Devlee247/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Devlee247/subscriptions",
"organizations_url": "https://api.github.com/users/Devlee247/orgs",
"repos_url": "https://api.github.com/users/Devlee247/repos",
"events_url": "https://api.github.com/users/Devlee247/events{/privacy}",
"received_events_url": "https://api.github.com/users/Devlee247/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger could you review this PR please?"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a wrong LayoutXLM link in the README.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18932/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18932",
"html_url": "https://github.com/huggingface/transformers/pull/18932",
"diff_url": "https://github.com/huggingface/transformers/pull/18932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18932.patch",
"merged_at": 1662636761000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18931/events
|
https://github.com/huggingface/transformers/pull/18931
| 1,365,478,687
|
PR_kwDOCUB6oc4-jazq
| 18,931
|
add DDP HPO support for sigopt. only main_process will have HPO, and …
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger @yao-matrix. please help to review",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger thanks for the review. torch.distributed.broadcast_object_list only supports list input (each element must be picklable); it does not support the dict class"
] | 1,662
| 1,666
| 1,662
|
CONTRIBUTOR
| null |
…pass argument to other process
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
Fixes # (issue)
HPO does not support DDP yet; this PR adds support in the SigOpt backend.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
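As noted in the review discussion, `torch.distributed.broadcast_object_list` takes a list of picklable objects, so a dict of trial hyperparameters has to be wrapped before broadcasting. A minimal sketch of the payload shape (the parameter names are made up, and the real collective requires an initialized process group — only the pickling step is exercised here):

```python
import pickle

# Hypothetical trial parameters chosen by the HPO backend on the main process.
trial_params = {"learning_rate": 3e-5, "per_device_train_batch_size": 16}

# broadcast_object_list expects a list whose elements are picklable, so the
# dict is wrapped in a single-element list; other ranks would pass [None]
# and read payload[0] after the collective completes.
payload = [trial_params]

# The collective pickles each element under the hood; verify it round-trips.
restored = pickle.loads(pickle.dumps(payload))
assert restored[0] == trial_params
```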
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18931/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18931",
"html_url": "https://github.com/huggingface/transformers/pull/18931",
"diff_url": "https://github.com/huggingface/transformers/pull/18931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18931.patch",
"merged_at": 1662982645000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18930/events
|
https://github.com/huggingface/transformers/pull/18930
| 1,365,247,759
|
PR_kwDOCUB6oc4-ip46
| 18,930
|
[WIP] Add ZeroShotObjectDetectionPipeline (#18445)
|
{
"login": "sahamrit",
"id": 88420255,
"node_id": "MDQ6VXNlcjg4NDIwMjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/88420255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahamrit",
"html_url": "https://github.com/sahamrit",
"followers_url": "https://api.github.com/users/sahamrit/followers",
"following_url": "https://api.github.com/users/sahamrit/following{/other_user}",
"gists_url": "https://api.github.com/users/sahamrit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahamrit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahamrit/subscriptions",
"organizations_url": "https://api.github.com/users/sahamrit/orgs",
"repos_url": "https://api.github.com/users/sahamrit/repos",
"events_url": "https://api.github.com/users/sahamrit/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahamrit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, just seeing the merge messed up the commit history. There are 377 changes, which is impossible for the review and merge the PR into `main`.\r\n\r\nI suggest to reset to the last clean commit locally. Then use `git rebase main` to keep update with `main` (after pulling the latest changes from remote `main` into local `main`). Or any way works (as I am not sure what causes the current git status)",
"> Hi, just seeing the merge messed up the commit history. There are 377 changes, which is impossible for the review and merge the PR into `main`.\r\n> \r\n> I suggest to reset to the last clean commit locally. Then use `git rebase main` to keep update with `main` (after pulling the latest changes from remote `main` into local `main`). Or any way works (as I am not sure what causes the current git status)\r\n\r\nHi @ydshieh sorry for that. Was in a hurry to wrap the PR since I was going for vacation. Messed up in rebasing. Have reverted to stable commit. Will add the correct changes once I am back!",
"No problem, @sahamrit! I am super happy that you are able to get back to the stable commit 💯 . Have a nice vacation!",
"Hi @alaradirik , can you review the changes?",
"> Thank you for this PR.\r\n> \r\n> * I suggest to modify the output of the pipeline to be more \"natural\". (see relevant comment).\r\n> * `text_queries` should be renamed `candidate_labels` to be in line with `zero-shot-classification`.\r\n\r\nHey @Narsil! I suggested using `text_queries` instead because it is a multi-modal model where users query images with free-form text. The queried object is either found or not and the found object's label is not chosen from a selection of candidate labels, so I think it'd make more sense to keep as it is.",
"> Hey @Narsil! I suggested using text_queries instead because it is a multi-modal model where users query images with free-form text. The queried object is either found or not and the found object's label is not chosen from a selection of candidate labels, so I think it'd make more sense to keep as it is.\r\n\r\nAre you sure ? I just tried your code, and it seems all the labels stem from the text being sent. Meaning I think there is a 1-1 correspondance between `label` and `text_queries` (meaning `candidate_labels` would be a fine name).\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\nobject_detector = pipeline(\r\n \"zero-shot-object-detection\", model=\"hf-internal-testing/tiny-random-owlvit-object-detection\"\r\n)\r\n\r\noutputs = object_detector(\r\n \"./tests/fixtures/tests_samples/COCO/000000039769.png\",\r\n text_queries=[\"aaa cat\", \"xx\"],\r\n threshold=0.64,\r\n)\r\nprint(outputs)\r\n```",
"Hi @Narsil, Sure the output labels are taken **exactly** from the input text_queries. The reason of naming it \"text_queries\" instead of \"candidate_labels\" as in case of zero-shot-image-classification is that, in zero-shot-image-classification pipeline, the [candidate labels are wrapped by the hypothesis template](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/zero_shot_image_classification.py#:~:text=candidate_labels%20(%60List%5Bstr,logits_per_image ), whereas here the text_queries are free text queries!\r\n\r\nHope it clarifies",
"> Are you sure ? I just tried your code, and it seems all the labels stem from the text being sent. Meaning I think there is a 1-1 correspondance between `label` and `text_queries` (meaning `candidate_labels` would be a fine name).\r\n> \r\n\r\nYes, there is a 1-1 correspondence but I meant only the query text / a single label is evaluated for each object, whereas the label is selected from among multiple candidate labels for `zero-shot-classification`.",
"> Yes, there is a 1-1 correspondence but I meant only the query text / a single label is evaluated for each object, whereas the label is selected from among multiple candidate labels for zero-shot-classification.\r\n\r\nI still think that `zero-shot` -> `candidate_labels` logic works. If we reuse names, it means that it's easier on users to discover and use pipelines. The fact that they are slightly different doesn't justify in my eyes the use of a different name.\r\nI would even argue that they are exactly the same and the difference in how they are used are cause by `classification` vs `object-detection` not by what `candidate_labels` are.\r\n\r\nI personally think using `candidate_labels` would be misleading and confusing given architecture and use case of this model. There have been other zero-shot object detection papers published very recently and it'd be better to get the naming right in order to avoid future breaking changes.",
"HI @Narsil @alaradirik, kindly review the changes"
] | 1,662
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the `ZeroShotObjectDetectionPipeline`. It is tested on the `OwlViTForObjectDetection` model and should enable inference via the following API:
```python
from transformers import pipeline
pipe = pipeline("zero-shot-object-detection")
pipe("cats.png", ["cat", "remote"])
```
This pipeline could default to the [https://huggingface.co/google/owlvit-base-patch32](https://huggingface.co/google/owlvit-base-patch32) checkpoint
Fixes # (`18445`)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Link to the [Issue](https://github.com/huggingface/transformers/issues/18445)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@alaradirik @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18930/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18930",
"html_url": "https://github.com/huggingface/transformers/pull/18930",
"diff_url": "https://github.com/huggingface/transformers/pull/18930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18930.patch",
"merged_at": 1665151220000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18929/events
|
https://github.com/huggingface/transformers/pull/18929
| 1,365,059,820
|
PR_kwDOCUB6oc4-iDV3
| 18,929
|
Starts on a list of external deps required for dev
|
{
"login": "colindean",
"id": 197224,
"node_id": "MDQ6VXNlcjE5NzIyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/197224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/colindean",
"html_url": "https://github.com/colindean",
"followers_url": "https://api.github.com/users/colindean/followers",
"following_url": "https://api.github.com/users/colindean/following{/other_user}",
"gists_url": "https://api.github.com/users/colindean/gists{/gist_id}",
"starred_url": "https://api.github.com/users/colindean/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/colindean/subscriptions",
"organizations_url": "https://api.github.com/users/colindean/orgs",
"repos_url": "https://api.github.com/users/colindean/repos",
"events_url": "https://api.github.com/users/colindean/events{/privacy}",
"received_events_url": "https://api.github.com/users/colindean/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,674
| 1,662
|
CONTRIBUTOR
| null |
I've found that I need to install MeCab manually on my Apple Silicon Mac while working on #18702.
# What does this PR do?
Adds nudge to install MeCab from Homebrew to dev contributing instructions.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18929/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18929",
"html_url": "https://github.com/huggingface/transformers/pull/18929",
"diff_url": "https://github.com/huggingface/transformers/pull/18929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18929.patch",
"merged_at": 1662582783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18928/events
|
https://github.com/huggingface/transformers/pull/18928
| 1,365,014,299
|
PR_kwDOCUB6oc4-h5WI
| 18,928
|
Disable model checkpoint sharding of large models for SageMaker Model Parallel
|
{
"login": "viclzhu",
"id": 20961977,
"node_id": "MDQ6VXNlcjIwOTYxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/20961977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viclzhu",
"html_url": "https://github.com/viclzhu",
"followers_url": "https://api.github.com/users/viclzhu/followers",
"following_url": "https://api.github.com/users/viclzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/viclzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viclzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viclzhu/subscriptions",
"organizations_url": "https://api.github.com/users/viclzhu/orgs",
"repos_url": "https://api.github.com/users/viclzhu/repos",
"events_url": "https://api.github.com/users/viclzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/viclzhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Oh ok, I see. That makes sense, thanks!"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
Disable model checkpoint sharding of large models for SageMaker Model Parallel
* Uses a very large max shard size, since sharding can't be disabled completely from the `save_pretrained()` call
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18928/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18928",
"html_url": "https://github.com/huggingface/transformers/pull/18928",
"diff_url": "https://github.com/huggingface/transformers/pull/18928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18928.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18927/events
|
https://github.com/huggingface/transformers/pull/18927
| 1,364,992,447
|
PR_kwDOCUB6oc4-h0s9
| 18,927
|
Skip some doctests in quicktour
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM, but it might be better to use an already processed dataset we could host somewhere so the whole thing can run (particularly since this also becomes a notebook).",
"I'll merge this for now so the daily CI is happy and then update it later with a processed dataset!"
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
The quicktour includes code snippets that instantiate a generic `dataset["train"]` and `dataset["test"]` (in the `Trainer` sections) that's only meant to be an example a user can copy/paste and replace with their own dataset. This causes the tests to fail since no dataset is actually being loaded. This PR adds the `# doctest: +SKIP` directive to skip the affected code snippets (the alternative option is to include a real dataset in the examples that can be loaded).
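The `# doctest: +SKIP` directive mentioned above can be sketched with the standard-library `doctest` module. This is a minimal, self-contained illustration (the `train` function and `trainer` name are hypothetical stand-ins, not from the quicktour itself):

```python
import doctest

def train():
    """
    This example would raise a NameError if executed, because
    `trainer` is undefined -- but the SKIP directive tells doctest
    not to run it at all, so it is neither attempted nor failed.

    >>> trainer.train()  # doctest: +SKIP
    """

# Collect and run all doctests in this module; the skipped example
# contributes no failures.
results = doctest.testmod(verbose=False)
print(results.failed)
```

Running this prints `0`, since the skipped snippet never executes, which is exactly why the affected quicktour snippets stop failing the doc test suite.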
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18927/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18927",
"html_url": "https://github.com/huggingface/transformers/pull/18927",
"diff_url": "https://github.com/huggingface/transformers/pull/18927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18927.patch",
"merged_at": 1662587122000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18926/events
|
https://github.com/huggingface/transformers/issues/18926
| 1,364,946,168
|
I_kwDOCUB6oc5RW2z4
| 18,926
|
Follow ups to DocumentQuestionAnswering Pipeline
|
{
"login": "ankrgyl",
"id": 565363,
"node_id": "MDQ6VXNlcjU2NTM2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankrgyl",
"html_url": "https://github.com/ankrgyl",
"followers_url": "https://api.github.com/users/ankrgyl/followers",
"following_url": "https://api.github.com/users/ankrgyl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions",
"organizations_url": "https://api.github.com/users/ankrgyl/orgs",
"repos_url": "https://api.github.com/users/ankrgyl/repos",
"events_url": "https://api.github.com/users/ankrgyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankrgyl/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"cc'ing @Narsil for enabling the model on the inference API, cc'ing @stevhliu for adding tutorial documentation to the task summary",
"@NielsRogge because we removed `donut-swin` from `AutoModelForDocumentQuestionAnswering`, you can no longer create a pipeline with donut, i.e.\r\n\r\n```\r\nIn [2]: p = pipeline('document-question-answering', model='naver-clova-ix/donut-base-finetuned-docvqa')\r\n/Users/ankur/projects/transformers/venv/lib/python3.10/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2895.)\r\n return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\r\nThe model 'VisionEncoderDecoderModel' is not supported for document-question-answering. Supported models are ['LayoutLMForQuestionAnswering', 'LayoutLMv2ForQuestionAnswering', 'LayoutLMv3ForQuestionAnswering'].\r\n```\r\n\r\nShould we add it back to that list? Or what is the best way to support that?",
"Could we re-open this (I don't think I have permissions to)? There are still a few changes necessary to complete all of the checkboxes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ankrgyl Can I ask you if I can work on this?\r\nIf I want to work on adding support for multi-page documents (e.g. for Donut, we need to present one image per page), may I ask you where I can start to proceed making contributions?",
"Absolutely!\r\n\r\nFeel free to start looking here: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/document_question_answering.py",
"> * Add support for multi-page documents (e.g. for Donut, we need to present one image per page)\r\n\r\nThank you! I carefully read it! In order to add support for multi-page documents in `document_question_answering.py`, should I modify some methods in that file such as `preprocess()`? Can I create a pull request of the file you provided after modifying those methods?",
"@ankrgyl Hello. I would love to contribute to this task : Add tutorial documentation to Task Summary. Is it open and may I get pointers on how to begin working on it?\r\nThank you.",
"@elabongaatuo It seems like the Add tutorial documentation to Task Summary is still open. are you working on it? It seems you need to change starting from [here](https://github.com/huggingface/transformers/blob/3335724376319a0c453049d0cd883504f530ff52/src/transformers/pipelines/document_question_answering.py#L103)",
"Hello @y3sar , no, I am not working on it at the moment. ",
"@elabongaatuo then I would like to take it up if there is no problem with you\r\n\r\n> Hello @y3sar , no, I am not working on it at the moment.\r\n\r\n",
"> @elabongaatuo then I would like to take it up if there is no problem with you\r\n> \r\n> > Hello @y3sar , no, I am not working on it at the moment.\r\n\r\n@y3sar , sure thing. 😊 no problem.",
"@ankrgyl I would Like to work on this Add tutorial documentation to [Task Summary](https://huggingface.co/docs/transformers/v4.21.3/en/task_summary#question-answering) and also in Add support for multi-page documents (e.g. for Donut, we need to present one image per page)",
"@ankrgyl Can i work on Refactor Donut usage ???",
"Hey @ankrgyl ! I would be happy to contribute to this issue by adding support for multi-page documents.\r\nCould you assign this to me ?",
"Hey! For anyone wanting to contribute, the best way is to just open a PR and link it here! We don't usually assign issues as they can be taken over in case of inactivity for example! 🤗 "
] | 1,662
| 1,698
| null |
CONTRIBUTOR
| null |
### Feature request
PR https://github.com/huggingface/transformers/pull/18414 has a number of TODOs left over, which we'd like to track as follow-up tasks.
## Pipeline
- [x] Add support for documents which have more than the tokenizer span (e.g. 512) words
- [ ] Add support for multi-page documents (e.g. for Donut, we need to present one image per page)
- [x] Rework use of tokenizer to avoid the need for `add_prefix_space=True`
- [x] Re-add support for Donut
- [ ] Refactor Donut usage in the pipeline or move logic into the tokenizer, so that pipeline does not have as much Donut-specific code
## Testing
- [ ] Enable `test_small_model_pt_donut` once `hf-internal-testing/tiny-random-donut` is implemented
## Documentation / Website
- [x] Add DocumentQuestionAnswering demo to [Hosted Inference API](https://huggingface.co/impira/layoutlm-document-qa) so that model demos work
- [ ] Add tutorial documentation to [Task Summary](https://huggingface.co/docs/transformers/v4.21.3/en/task_summary#question-answering)
### Motivation
These are follow ups that we cut from the initial scope of PR #18414.
### Your contribution
Happy to contribute many or all of these.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18926/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18925/events
|
https://github.com/huggingface/transformers/pull/18925
| 1,364,807,230
|
PR_kwDOCUB6oc4-hNHW
| 18,925
|
pin TF 2.9.1 for self-hosted CIs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Related PR -- #18917 "
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
Same as #18818, but for docker image build and self-hosted CIs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18925/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18925",
"html_url": "https://github.com/huggingface/transformers/pull/18925",
"diff_url": "https://github.com/huggingface/transformers/pull/18925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18925.patch",
"merged_at": 1662572774000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18924/events
|
https://github.com/huggingface/transformers/pull/18924
| 1,364,743,025
|
PR_kwDOCUB6oc4-g_J8
| 18,924
|
Use tiny models for ONNX tests
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
| null |
[] |
[
"Thanks, @lewtun \r\n\r\nLet's run the scheduled CI manually (for ONNX tests) before merge :-)",
"> Thanks, @lewtun\r\n> \r\n> Let's run the scheduled CI (for ONNX tests) before merge :-)\r\n\r\nYes, this is still WIP because I discovered some slow tests fail with the new tiny models. Will debug and fix on the model side where necessary :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18924). All of your documentation changes will be reflected on that endpoint.",
"I left a few comments on Hub PRs. In general, we also like to have small vocab size. But if the current issues persist, no problem for me to use the tokenizers config or files from the original model checkpoint.",
"> I left a few comments on Hub PRs. In general, we also like to have small vocab size. But if the current issues persist, no problem for me to use the tokenizers config or files from the original model checkpoint.\r\n\r\nThanks! \r\n\r\nAs discussed offline, using a small vocab size `v` in the model config requires that `len(tokenizer) == v`. Otherwise the model cannot run inference because the tokenizer will generate out of vocab input IDs and throw an `index out of range` error.\r\n\r\nAFAIK the only way to handle this is to train a tokenizer from scratch on a tiny \"corpus\" and use the resulting vocab size in the model config. This is simple for fast tokenizers, but somewhat painful for slow ones that don't have the `train_from_iterator()` method. \r\n\r\nIn the end it may not be entirely necessary to optimise the model size this way if the resulting \"tiny\" models are fast enough for our test suite",
"As discussed internally, we'll revert the changes to the model repos and create dedicated `tiny-random-onnx-x` repos for the ONNX tests",
"Stable bot begone!",
"@gante told me we can use `wip` label to avoid this bot 😃 ",
"Closing in favour of https://github.com/huggingface/transformers/pull/20333"
] | 1,662
| 1,668
| 1,668
|
MEMBER
| null |
# What does this PR do?
Uses tiny random models for the ONNX tests to speed up the test suite. Closes #18819
## TODO
- [x] Add tiny model for `deepmind/language-perceiver`
- [x] Add tiny model for `deepmind/vision-perceiver-conv`
- [x] Add tiny model for `hustvl/yolos-tiny`
- [x] Add tiny model for `nvidia/segformer-b0-finetuned-ade-512-512`
- [x] Add tiny model for `google/long-t5-local-base`
- [ ] Ensure slow tests pass
### Hub PRs that need merging to ensure slow test pass
- [x] https://huggingface.co/hf-internal-testing/tiny-random-beit/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-deit/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-deit/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-clip/discussions/1
- [ ] https://huggingface.co/hf-internal-testing/tiny-random-clip/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-convbert/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-xlm-roberta/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-xlm-roberta/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-ibert/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-ibert/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-blenderbot-small/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-blenderbot-small/discussions/2
- [x] https://huggingface.co/hf-internal-testing/tiny-random-mt5/discussions/1
- [x] https://huggingface.co/hf-internal-testing/tiny-random-mt5/discussions/2
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18924/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18924/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18924",
"html_url": "https://github.com/huggingface/transformers/pull/18924",
"diff_url": "https://github.com/huggingface/transformers/pull/18924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18924.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18923/events
|
https://github.com/huggingface/transformers/pull/18923
| 1,364,737,742
|
PR_kwDOCUB6oc4-g-At
| 18,923
|
Attention_mask generation error in generation_utils.py
|
{
"login": "yushengsu-thu",
"id": 11704492,
"node_id": "MDQ6VXNlcjExNzA0NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/11704492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yushengsu-thu",
"html_url": "https://github.com/yushengsu-thu",
"followers_url": "https://api.github.com/users/yushengsu-thu/followers",
"following_url": "https://api.github.com/users/yushengsu-thu/following{/other_user}",
"gists_url": "https://api.github.com/users/yushengsu-thu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yushengsu-thu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yushengsu-thu/subscriptions",
"organizations_url": "https://api.github.com/users/yushengsu-thu/orgs",
"repos_url": "https://api.github.com/users/yushengsu-thu/repos",
"events_url": "https://api.github.com/users/yushengsu-thu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yushengsu-thu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
The original logic for generating the attention_mask (of GPT-series models) is wrong. I revised the logic that generates the attention_mask.
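For context, here is a minimal sketch of the default behaviour this kind of patch touches: when no attention_mask is passed, one is typically derived from `input_ids` by marking pad positions as 0. The helper name below is my own illustration, not the function changed in this PR.

```python
# Hypothetical helper (my own sketch, not the code in this PR): build a
# default attention mask from input_ids by zeroing out pad positions.
def make_attention_mask(input_ids, pad_token_id):
    # 1 = real token the model should attend to, 0 = padding to ignore
    return [[0 if tok == pad_token_id else 1 for tok in seq]
            for seq in input_ids]

# A left-padded GPT-style batch, assuming pad_token_id = 50256
batch = [[50256, 50256, 15496, 995],
         [15496, 995, 11, 314]]
print(make_attention_mask(batch, 50256))  # [[0, 0, 1, 1], [1, 1, 1, 1]]
```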
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18923/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18923",
"html_url": "https://github.com/huggingface/transformers/pull/18923",
"diff_url": "https://github.com/huggingface/transformers/pull/18923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18923.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18922/events
|
https://github.com/huggingface/transformers/pull/18922
| 1,364,726,481
|
PR_kwDOCUB6oc4-g7lD
| 18,922
|
[WIP] add SpeechT5 model
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hello @hollance. I'll be helping with the Transformer Encoder-Decoder.",
"Hi @hollance I will be helping with TextDecoderPrenet / TextDecoderPostnet",
"@anuragshas:\r\n\r\n> I will be helping with TextDecoderPrenet / TextDecoderPostnet\r\n\r\nGreat! Are you also interested in looking at the tokenizer, since I believe the text pre- and post-net need to use that. The original model uses a BPE tokenizer (there is a download link in the README). I'm not sure what the current tokenizer is in the code, it was copied from Wav2Vec2 but I didn't look at it in detail yet.\r\n",
"The `SpeechEncoderPrenet` is complete now. It gives the same results as the original model. However, there still are some TODOs in this part of the code to look at later.",
"The encoder is complete and verified to work (although there are some parts that possibly could be rewritten, marked with `TODO`). I've started adding the decoder but this doesn't work yet (didn't have time yet to fix it up).",
"Thanks for the in-depth review, @sanchit-gandhi! \r\n\r\n> Wondering if it would have been better to copy everything from Speech2Text, including for the encoder? I know I pointed you in the direction of W2V2! But it has a whole bunch of functionality that is pretty specific to W2V2 pre-training that isn't used in SpeechT5 (e.g. SpecAug). It might be possible to condense the code by copying the Speech2Text encoder model, rather than that from W2V2.\r\n\r\nAside from pre-training stuff, I did bring the code in line with Speech2Text. It's not _exactly_ the same but a bit of a hybrid between Wav2Vec2 and Speech2Text. ;-)\r\n\r\n> We can change the function `_get_feature_vector_attention_mask` to match the original implementation if there's a difference. Better to have correctness here rather than duplicated code from UniSpeech.\r\n\r\nI don't think either approach is more \"correct\" than the other. The question is: at the point where the attention mask goes from 1 to 0, this may happen halfway inside a block of 320 samples (or whatever the frame size is). Does that partially-padded block get included in the predictions or is the entire block considered to be padding and gets excluded? Basically: do we round up or down? SpeechT5 simply makes a different choice here than `_get_feature_vector_attention_mask` but either one works fine.\r\n\r\n> Ideally SpeechT5Model should load all the weights for the Transformer backbone, and SpeechT5ForConditionalGeneration all the weights for the Transformer backbone *and* pre-/post-nets. We could try and match the attribute names more closely between SpeechT5Model and SpeechT5ForConditionalGeneration, i.e. always assigning the encoder as self.encoder (rather than self.wrapped_encoder). And then try to make sure the structures follow as closely as possible. 
Not loading the decoder weights for CTC is fine!\r\n\r\nThe problem here is that `SpeechT5ForConditionalGeneration` will have `encoder.wrapped_encoder` in the checkpoint while SpeechT5Model only has `encoder`. I could fix this by making a \"fake\" wrapper that just calls the encoder without applying a pre-net, so that SpeechT5Model also has the `encoder.wrapped_encoder` path. (BartForCausalLM does something similar so it's not unprecedented.) EDIT: implemented this. Now the models load as expected.",
"> Aside from pre-training stuff, I did bring the code in line with Speech2Text. It's not exactly the same but a bit of a hybrid between Wav2Vec2 and Speech2Text\r\n\r\nThat sounds great - this is a new model that sits somewhere between the two (acoustic encoder is more Wav2Vec2-like, but the transformer decoder is similar to Speech2Text), so taking elements from each is no issue!\r\n\r\n> Basically: do we round up or down? SpeechT5 simply makes a different choice here than `_get_feature_vector_attention_mask` but either one works fine.\r\n\r\nI see, the numerical differences are tiny as you say. Feel free to pick the one you think is more appropriate! I'd opt to bring ours in-line with the 'official' implementation, but you know more about it!\r\n\r\n> EDIT: implemented this. Now the models load as expected.\r\n\r\nAmazing! With similar logic to `BartForCausalLM` in the end?",
"## Design issues & questions\r\n\r\nPhilosophical question for the Transformers team:\r\n\r\n*TL;DR: SpeechT5 is different from the other models in Transformers and doesn't quite fit in with the design of the library. Is the current approach OK, or should we split it up into multiple different, completely independent models?*\r\n\r\nSome background on the model: SpeechT5 is a speech-to-text (or ASR) model, but also a text-to-speech (TTS) model, as well as a speech-to-speech model, and even text-to-text. These are four different model types but they all share the same encoder-decoder structure. The only difference is that they have different so-called pre-nets and post-nets.\r\n\r\nFor example, in the ASR model the encoder pre-net is basically the first set of layers from Wav2Vec2, and the decoder pre- and post-nets are essentially the first and last layers of BART. By swapping in different pre & post-nets, and fine-tuning the model, the same pretrained architecture can handle different tasks.\r\n\r\nSo far I've implemented only the ASR and TTS model, but there are also checkpoints for voice conversion (speech-to-speech) and pretraining that we might want to add.\r\n\r\nSpecifically, these are the issues I ran into:\r\n\r\n- The current design of Transformers assumes that a model always has one kind of input and one kind of output. This is not true for SpeechT5: some versions of the model have text as input, others speech. Likewise for the output.\r\n\r\n- In other seq2seq models, there is a `ForConditionalGeneration` class that does the predictions. Here, we have at least two such classes, so I named them `ForSpeechToText` (ASR) and `ForTextToSpeech` (TTS) instead.\r\n\r\n- Normally, we'd have an `Encoder` and a `Decoder` class. In SpeechT5, the encoder and decoder classes also need to run a pre-net. 
This is why there are wrapper classes such as:\r\n - SpeechT5EncoderWithSpeechPrenet\r\n - SpeechT5EncoderWithTextPrenet\r\n - SpeechT5EncoderWithoutPrenet\r\n - SpeechT5DecoderWithSpeechPrenet\r\n - SpeechT5DecoderWithTextPrenet\r\n - SpeechT5DecoderWithoutPrenet\r\n\r\n The `SpeechT5ForSpeechToText` and `SpeechT5ForTextToSpeech` models will instantiate the appropriate encoder and decoder wrapper classes (and also run the post-net). \r\n\r\n The base `SpeechT5Model` class needs to have special logic to handle these different wrappers. It shouldn't be used with the \"naked\" `SpeechT5Encoder` / `SpeechT5Decoder` classes, since they don't have any pre-nets.\r\n\r\n This approach works, but it's also trying to shoehorn a model that doesn't quite fit into the design of Transformers.\r\n\r\n- One side-effect of having these different pre- and post-nets, is that `SpeechT5Model` cannot know in advance what sort of data it gets as input. The input could be tokens (`input_ids`) or raw speech (`input_values`) or spectrograms (`input_features`). \r\n\r\n To allow for this ambiguity, I named the input argument `input_values` everywhere. However, that's the same term that is used for raw audio input. None of the other terms (`input_ids` or `input_features` or `input_embeds`) is really suitable either. Suggestions for a better generic input name that covers the three different modalities are welcome. 😄 \r\n\r\n- Our seq2seq models combine the preprocessing for the encoder and for the decoder into a `Processor` class. The SpeechT5 ASR model needs a different Processor than the TTS model. So I made `SpeechT5ProcessorForSpeechToText` and `SpeechT5ProcessorForTextToSpeech`. These also use different `FeatureExtractor` objects as they process the audio in different ways. 
\r\n\r\n   In the design of Transformers it is assumed each model only has one processor / feature extractor, but here we have two, and we might need a third one (\`SpeechT5ProcessorForSpeechToSpeech\`) for the voice conversion checkpoint.\r\n\r\n   Having multiple processors / feature extractors for the same model type doesn't work very well with the \`Auto\` classes, as this assumes there is always only one.\r\n\r\n- The TTS model applies a vocoder to the output of the encoder-decoder model. The weights for this vocoder model are kept separate from the main model and it has its own \`Config\` object, but the implementation lives in \`modeling_speecht5.py\`. Currently there is no way to share vocoders between audio models, but they probably should live in their own separate world. (Also affects the WIP SpeechToSpeech and FastSpeech2 models.)\r\n\r\n- The \`model.generate()\` logic works fine for the ASR model but not for the TTS model. It would be nice if the \`GenerationMixin\` could handle the TTS generation logic as well.\r\n\r\n- There is a version of the ASR model that only uses the encoder, which outputs CTC tokens. This uses its own tokenizer, \`SpeechT5CTCTokenizer\`, that derives from \`SpeechT5Tokenizer\`. I haven't seen that pattern for any of the other models in the library.\r\n\r\n- The \`SpeechT5ProcessorForSpeechToSpeech\` doesn't really fit in with the design of \`ProcessorMixin\`. It has two feature extractors and no tokenizer. In principle this works, except saving two feature extractors is not supported, as they overwrite each other's properties. (Could fix this by overriding the save/load_pretrained logic to add namespacing to the JSON file.)\r\n\r\n- Pipelines don't work. When you do the following, it always tries to instantiate the \`ForCTC\` model. 
This happens because we have both a CTC and a Seq2Seq model for ASR, while the pipeline logic assumes there's only one of these.\r\n\r\n```python\r\ngenerator = pipeline(task=\"automatic-speech-recognition\", model=\"Matthijs/speecht5_asr\")\r\n```\r\n\r\nExcept for fixing some small issues, the implementation of SpeechT5 is mostly complete, so you can look at the source in this PR in case the above sounds a bit vague. 😃 \r\n\r\nWhat I'd like to know is: How do you feel about the approach I've taken to make this model fit into Transformers?\r\n\r\nObviously, I wouldn't expect a complete redesign of Transformers just to accomodate SpeechT5, but I would like some feedback on whether you think the above decisions are acceptable. It works but it also kind of breaks some of the conventions that users of the library might expect. \r\n\r\nAn alternative would be to create completely different models in different folders, such as `speecht5_asr` and `speecht5_tts` and to treat these as unrelated. One of these would largely be a copy of the other, but with different pre- and post-nets. (We could simply ignore the `ForPreTraining` model, as it's unlikely to be in high demand.)\r\n",
"@hollance Re `.generate()` not supporting TTS -- `transformers` doesn't have any TTS model, and in fact `.generate()` only supports text (or other sets of integers) output. I'm not sure whether expanding `.generate()` is the way to go, I would have to think about it, but I'd be happy to support in whatever is needed from the conditional generation angle!\r\n\r\n@sanchit-gandhi you folks are working on the generation of audio, correct? Do you have plans for `generate()` or anything related to conditional generation?",
"_The documentation is not available anymore as the PR was closed or merged._",
"## Vocoders\r\n\r\nThe TTS and voice conversion models use a vocoder to convert the predicted mel spectrogram into an audio waveform. Currently this is implemented as `SpeechT5HiFiGAN` inside the SpeechT5 modeling file. \r\n\r\nThe vocoder is treated as a separate model (on the Hub under [Matthijs/speecht5_hifigan](https://huggingface.co/Matthijs/speecht5_hifigan)). It has its own weights and config that are separate from the SpeechT5 model. \r\n\r\nTo generate speech, you optionally pass the vocoder object to `model.generate_speech()`. Without it, this method outputs the spectrogram. With the vocoder, it outputs speech. \r\n\r\nThis allows the user to provide their own vocoder instead of the pretrained one.\r\n\r\n(Note that automapping of `SpeechT5HiFiGANConfig` is not working in this implementation because it has its own config file.)\r\n\r\nMy suggestion is that we treat vocoders as separate model types, just like feature extractors and tokenizers, and that they are owned by the `processor`, which calls the vocoder in the postprocessing / decoding step.\r\n\r\nNote that the [original checkpoint](https://huggingface.co/mechanicalsea/speecht5-vc) for the voice conversion model comes with trained vocoders for the different voices, that do not use the Hi-Fi GAN architecture but the one from Parallel WaveGAN. I did not implement this, since the Hi-Fi GAN vocoder works fine here too.\r\n",
"@sanchit-gandhi \r\n\r\n> * Is `SpeechT5ProcessorForSpeechToSpeech` working or are the feature extractors still overriding each other?\r\n\r\nThey are still overriding each other. I think the only way to fix this is to override `save_pretrained` and `from_pretrained` that are inherited from `ProcessorMixin`.\r\n\r\nEven though what gets saved into `preprocessor_config.json` is wrong, the processor actually does process the data OK, so we could get away with it — but this is mostly due to both feature extractors using the same property names. And that would be asking for bugs when someone uses different configuration values.\r\n",
"## SpeechT5ProcessorForSpeechToSpeech\r\n\r\n(Writing this in case we want to fix this issue properly at some point.)\r\n\r\nThe problem: processor objects are assumed to have a tokenizer and a feature extractor. The config for the feature extractor is saved to `preprocessor_config.json`. However, `SpeechT5ProcessorForSpeechToSpeech` has no tokenizer and two feature extractors. As a result, the second feature extractor overwrites the JSON from the first.\r\n\r\nIn my opinion, the correct approach here would be to not hardcode the filename for feature extractors. Rather than using the `FEATURE_EXTRACTOR_NAME` constant, each feature extractor would get a class variable `feature_extractor_config_name = FEATURE_EXTRACTOR_NAME`. By default this is `preprocessor_config.json` but a class can override it if necessary. For the S2S model, we'd have `preprocessor_encoder_config.json` and `preprocessor_decoder_config.json`, for example.\r\n\r\nHowever, the above solution would affect all of the models in Transformers, and it may still not work due to certain assumptions being made (i.e. you need to know the class name of the feature extractor so you can look up what its filename should be, which is a chicken-and-egg problem). So making this change just for SpeechT5 seems excessive at this point.\r\n\r\nHacks I've tried to work around this:\r\n\r\n* Override `save_pretrained` in `SpeechT5ProcessorForSpeechToSpeech` to save each feature extractor's config in a subdir. This works OK for saving. (It requires some changes to `_upload_modified_files` so that it would pick up the config files inside these subdirs.) \r\n\r\n However, it does not work for loading, since the feature extractor's `from_pretrained` does not know that it's supposed to look inside a subdir, and there's no way to tell it to do so. To fix this would require duplicating a lot of code from `FeatureExtractionMixIn`. 
And even then, it doesn't work with \`AutoProcessor\`.\r\n\r\n* Create a \`FeatureExtractionMixinHack\` class that extends \`FeatureExtractionMixin\`. This duplicates the loading and saving code in order to save using different filenames for each feature extractor. The SpeechT5 feature extractors now extend from this. Very messy and brittle. Not even sure if it works OK in all situations.\r\n\r\n* Save the properties of both feature extractors in the same file, as nested dictionaries. This requires massive changes as the code is currently set up to save one file per object.\r\n\r\nFor now, the \"solution\" is to not use \`save_pretrained\` and \`from_pretrained\` with \`SpeechT5ProcessorForSpeechToSpeech\` and pretend everything is fine. 😅 ",
"To the reviewer: This PR has been reviewed by the audio team several times already, and this is (hopefully 😄) the final review before merging.\r\n\r\nThe only remaining thing is replacing the checkpoints with official ones. But I'd rather wait with creating these until the PR has been approved.\r\n\r\n(We decided not to implement fine-tuning right now.)",
"Hi @sgugger, I made the fixes you asked for. There is now just one feature extractor / processor and the auto classes have been removed again.\r\n\r\n(There are two tests that seem to fail but they're not related to this PR.)",
"@sgugger Hi Sylvain, I made the changes to the feature extractor you asked for. Also, the checkpoints have been updated to point to the microsoft organization on the hub. If all is good with you, feel free to merge this PR at your leisure. Thanks! 😄 \r\n\r\n(Again there is a failing test but this seems unrelated to this PR.)\r\n\r\nEDIT: Unless @sanchit-gandhi wants to give this the final once-over too.",
"Good to merge for me too!",
"@sgugger I don't have rights to merge this myself. So someone else needs to press that button. 😅 "
] | 1,662
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Add the SpeechT5 model to Transformers. See also https://github.com/huggingface/transformers/issues/17569
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Current status of this PR
### To-do list
We decided not to implement fine-tuning for now. But when we do:
- verify that the ASR model (`SpeechT5ForSpeechToText`) can be fine-tuned on new data
- verify that the TTS model (`SpeechT5ForTextToSpeech`) can be fine-tuned on new data (loss still needs to be implemented)
- verify that the voice conversion model (`SpeechT5ForSpeechToSpeech`) can be fine-tuned on new data (loss still needs to be implemented)
We decided not to implement `SpeechT5ForPreTraining` for now.
### Notes
- When `attention_mask` is not all ones, the output is slightly different from the original model at the point where the mask goes from 1 to 0. This is due to a difference in how both models subsample the attention mask (`_get_feature_vector_attention_mask` in `SpeechT5SpeechEncoderPrenet`). This is not a big deal but may cause tiny ripple differences in the outputs elsewhere.
- The original model sets the `attention_mask` to 0 for padding tokens in the decoder `input_ids`. I disabled this because it does not play nice with `model.generate`. So the predictions are slightly different for the timesteps following the padding token (which really only happens when the sequence is complete but other sequences in the same batch have not completed yet).
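The rounding choice described in the first note can be illustrated with a small sketch (my own illustration, not the actual `_get_feature_vector_attention_mask` implementation): when the 1-to-0 boundary of the attention mask falls inside a block of samples, one convention keeps the partially padded frame (round up) and the other drops it (round down).

```python
# Sketch of the floor-vs-ceil choice when subsampling an attention mask.
def subsampled_valid_length(n_valid_samples, frame_size, round_up=False):
    """Number of output frames counted as non-padding when
    `n_valid_samples` raw samples are subsampled in blocks of
    `frame_size`. round_up=True keeps a partially padded frame;
    round_up=False drops it."""
    if round_up:
        return -(-n_valid_samples // frame_size)  # ceiling division
    return n_valid_samples // frame_size          # floor division

# A padding boundary that falls mid-frame (330 valid samples, frame
# size 320): the two conventions differ by exactly one frame, which is
# the "tiny ripple" mentioned above.
print(subsampled_valid_length(330, 320, round_up=True))   # 2
print(subsampled_valid_length(330, 320, round_up=False))  # 1
```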
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18922/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18922/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18922",
"html_url": "https://github.com/huggingface/transformers/pull/18922",
"diff_url": "https://github.com/huggingface/transformers/pull/18922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18922.patch",
"merged_at": 1675446226000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18921/events
|
https://github.com/huggingface/transformers/pull/18921
| 1,364,682,133
|
PR_kwDOCUB6oc4-gx-o
| 18,921
|
Fixed typo
|
{
"login": "tnusser",
"id": 28186947,
"node_id": "MDQ6VXNlcjI4MTg2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/28186947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tnusser",
"html_url": "https://github.com/tnusser",
"followers_url": "https://api.github.com/users/tnusser/followers",
"following_url": "https://api.github.com/users/tnusser/following{/other_user}",
"gists_url": "https://api.github.com/users/tnusser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tnusser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tnusser/subscriptions",
"organizations_url": "https://api.github.com/users/tnusser/orgs",
"repos_url": "https://api.github.com/users/tnusser/repos",
"events_url": "https://api.github.com/users/tnusser/events{/privacy}",
"received_events_url": "https://api.github.com/users/tnusser/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
Fixed typo itmes --> items
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18921/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18921",
"html_url": "https://github.com/huggingface/transformers/pull/18921",
"diff_url": "https://github.com/huggingface/transformers/pull/18921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18921.patch",
"merged_at": 1663009429000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18920/events
|
https://github.com/huggingface/transformers/pull/18920
| 1,364,623,968
|
PR_kwDOCUB6oc4-glUQ
| 18,920
|
Add Table Transformer
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm very much not in favor of adding a new config parameter that controls where the layernorm is applied. I'm not surprised the original code has it, as Facebook AI usually codes models in a modular way, but not Transformers. We had the same thing with BART and friends, and they are coded as distinct models in the library.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this PR in favor of #19614"
] | 1,662
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds [Table Transformer](https://github.com/microsoft/table-transformer) by Microsoft, which are DETR-compatible models for table detection and table structure recognition tasks in unstructured documents.
Note: I'm making some updates to the original DETR implementation; however, these are justified by the fact that the original DETR implementation by Facebook AI also includes them, and I didn't add them when first porting DETR. Hence, our DETR implementation is now more aligned with the original one.
To do:
- [ ] transfer checkpoints to the Microsoft organization
- [ ] add link to notebook
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18920/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/18920/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18920",
"html_url": "https://github.com/huggingface/transformers/pull/18920",
"diff_url": "https://github.com/huggingface/transformers/pull/18920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18920.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18919/events
|
https://github.com/huggingface/transformers/pull/18919
| 1,364,387,429
|
PR_kwDOCUB6oc4-fx_7
| 18,919
|
[VideoMAE] Improve code examples
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR simplifies the code examples of VideoMAE, and adds a seed to make sure the video classifier always predicts "eating spaghetti" on the video (as, due to the sampling of frames, the model may otherwise predict another class, like "eating ice cream"):
```
>>> inputs = feature_extractor(list(video), return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     logits = outputs.logits

>>> # model predicts one of the 400 Kinetics-400 classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
Expected:
eating spaghetti
Got:
eating ice cream
```
Weirdly, this wasn't caught by the doc test CI. It could have to do with the addition of `import numpy as np` to the code snippet.
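For reference, the determinism fix can be sketched as follows (the function name and sampling logic here are illustrative, not the exact VideoMAE helper):

```python
import numpy as np

def sample_frame_indices(clip_len, total_frames, seed=0):
    # Seeding the generator pins down which frames are sampled, so the
    # doc test always sees the same clip and predicts the same label.
    rng = np.random.default_rng(seed)
    start = int(rng.integers(0, max(total_frames - clip_len, 1)))
    return np.arange(start, start + clip_len)

# Two runs with the same seed pick identical frames.
a = sample_frame_indices(16, 300, seed=0)
b = sample_frame_indices(16, 300, seed=0)
```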
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18919/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18919",
"html_url": "https://github.com/huggingface/transformers/pull/18919",
"diff_url": "https://github.com/huggingface/transformers/pull/18919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18919.patch",
"merged_at": 1662546252000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18918/events
|
https://github.com/huggingface/transformers/pull/18918
| 1,364,348,544
|
PR_kwDOCUB6oc4-fpvG
| 18,918
|
update the train_batch_size in case HPO changes batch_size_per_device
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
" @sgugger @yao-matrix please help review it. The bug appears when HPO is enabled in the examples: \"Total optimization steps\" is incorrect since train_batch_size is not updated accordingly. This PR adds the update of this parameter in trainer.train",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,666
| 1,662
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Fixes the incorrect "total optimization steps" reported during HPO, since train_batch_size is not updated accordingly.
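A minimal sketch of why the reported step count goes stale (the helper name and formula are illustrative of the Trainer's bookkeeping, not its exact code):

```python
import math

def total_optimization_steps(num_examples, per_device_batch_size, n_devices,
                             gradient_accumulation_steps, num_train_epochs):
    # train_batch_size must be recomputed after HPO overrides
    # per_device_batch_size; using a stale value miscounts the steps.
    train_batch_size = per_device_batch_size * n_devices
    steps_per_epoch = math.ceil(
        num_examples / (train_batch_size * gradient_accumulation_steps)
    )
    return steps_per_epoch * num_train_epochs

# If HPO shrinks the per-device batch size from 32 to 8, the true step
# count roughly quadruples, so the stale number is badly misleading.
old = total_optimization_steps(10_000, 32, 1, 1, 3)
new = total_optimization_steps(10_000, 8, 1, 1, 3)
```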
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18918/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18918",
"html_url": "https://github.com/huggingface/transformers/pull/18918",
"diff_url": "https://github.com/huggingface/transformers/pull/18918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18918.patch",
"merged_at": 1662552091000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18917/events
|
https://github.com/huggingface/transformers/pull/18917
| 1,364,340,630
|
PR_kwDOCUB6oc4-foFe
| 18,917
|
TF: unpin maximum TF version
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Why why why would we merge a PR with red ticks? Every contributor making a PR this weekend and until this is resolved will wonder what they did wrong.",
"I was thinking of the self-hosted scheduled CIs only when reviewing this PR. You are right - we should keep CircleCI / push CI green.",
"For the scheduled CI, we can unpin so we know what to fix. But let's do it on Monday if you are ok."
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
Unpins TF maximum version.
As seen in the scheduled run, a few onnx+tf tests broke. I'd say we merge this PR and put the newly broken tests on our todo list.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18917/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/transformers/issues/18917/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18917",
"html_url": "https://github.com/huggingface/transformers/pull/18917",
"diff_url": "https://github.com/huggingface/transformers/pull/18917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18917.patch",
"merged_at": 1662813182000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18916/events
|
https://github.com/huggingface/transformers/issues/18916
| 1,364,294,692
|
I_kwDOCUB6oc5RUXwk
| 18,916
|
facebook/wav2vec2-xls-r-300m-21-to-en TypeError: expected str, bytes or os.PathLike object, not NoneType
|
{
"login": "Shiro-LK",
"id": 26505641,
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiro-LK",
"html_url": "https://github.com/Shiro-LK",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @Shiro-LK!\r\n\r\nGood catch, we're loading a Wav2Vec2 processor here so need to instantiate the corresponding class accordingly:\r\n\r\n```python\r\nfrom transformers import Wav2Vec2Processor\r\n\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-xls-r-300m-21-to-en\")\r\n```\r\n\r\nI've opened a PR to update the example on the model card with these steps: https://huggingface.co/facebook/wav2vec2-xls-r-300m-21-to-en/discussions/3"
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
### System Info
transformers == 4.21.3
python == 3.9.2
ubuntu 18
### Who can help?
@patrickvonplaten @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. from transformers import Speech2Text2Processor
2. processor = Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-21-to-en")
### Expected behavior
the processor should be loaded but got this error instead :
`TypeError: expected str, bytes or os.PathLike object, not NoneType`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18916/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18915/events
|
https://github.com/huggingface/transformers/pull/18915
| 1,364,254,624
|
PR_kwDOCUB6oc4-fVyP
| 18,915
|
Add image height and width to ONNX dynamic axes
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
This PR enables dynamic axes for the image height / width of ONNX vision models. This allows users to change the height and width of their inputs at runtime with values different from those used to trace the model during the export (usually 224 x 224 pixels).
Here's an example with ResNet and `optimum`:
```python
import requests
from PIL import Image
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoFeatureExtractor
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# Raw image size 480 x 640 pixels
image = Image.open(requests.get(url, stream=True).raw)
# Resize image to 40 x 40 pixels
preprocessor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50", do_resize=True, size=40)
model = ORTModelForImageClassification.from_pretrained("onnx")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
logits.shape
```
I've also checked the slow tests pass:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "beit or clip or convnext or data2vec-vision or deit or detr or layoutlmv3 or levit or mobilevit or resnet or vit" -s
```
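The shape of the dynamic-axes mapping enabled here can be sketched as follows (the input name and axis labels are illustrative; the actual configuration lives in each model's ONNX config):

```python
def vision_dynamic_axes(input_name="pixel_values"):
    # Axes 0 (batch), 2 (height) and 3 (width) of an NCHW image tensor
    # are marked dynamic, so runtime inputs need not match the 224x224
    # size used when tracing the model for export.
    return {input_name: {0: "batch", 2: "height", 3: "width"}}

axes = vision_dynamic_axes()
```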
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18915/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18915",
"html_url": "https://github.com/huggingface/transformers/pull/18915",
"diff_url": "https://github.com/huggingface/transformers/pull/18915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18915.patch",
"merged_at": 1662583366000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18914/events
|
https://github.com/huggingface/transformers/issues/18914
| 1,364,236,988
|
I_kwDOCUB6oc5RUJq8
| 18,914
|
Cannot Import BigBirdModel
|
{
"login": "jaideep11061982",
"id": 38164196,
"node_id": "MDQ6VXNlcjM4MTY0MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/38164196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaideep11061982",
"html_url": "https://github.com/jaideep11061982",
"followers_url": "https://api.github.com/users/jaideep11061982/followers",
"following_url": "https://api.github.com/users/jaideep11061982/following{/other_user}",
"gists_url": "https://api.github.com/users/jaideep11061982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaideep11061982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaideep11061982/subscriptions",
"organizations_url": "https://api.github.com/users/jaideep11061982/orgs",
"repos_url": "https://api.github.com/users/jaideep11061982/repos",
"events_url": "https://api.github.com/users/jaideep11061982/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaideep11061982/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I don't think transformers version 3.0.2 contains the BigBird model. I think updating your version of the transformers package should solve the issue.\r\nI also see that you are using Python version 3.6.6. The latest version of the transformers package requires Python >=3.7.0, so I guess you would also need to update your installed Python version. \r\nHope this helps!",
"Which version does\n\n\nOn Wed, 7 Sep, 2022, 1:43 pm Manish Sridhar, ***@***.***>\nwrote:\n\n> I don't think transformers version 3.0.2 contains the BigBird model. I\n> think updating your version of the transformers package should solve the\n> issue.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/18914#issuecomment-1239059464>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AJDFNZADFM4WMFVCK4LEDSLV5BFD5ANCNFSM6AAAAAAQGPXTJY>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"@jaideep11061982 I believe the model was added in v4.5.0, but I think either @sgugger or @NielsRogge will be able to better comment on the exact version that you could use.",
"Yes, it was introduced in v4.5.0.\r\n\r\nIn general, please do not open issues without updating to some recent version of Transformers, v3.0.2 is more than two years old and bugs are fixed as continuous development of the new versions, so you need to upgrade to the latest releases to see the fixes anyway.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.0.2
- Platform: Linux-5.10.133+-x86_64-with-debian-9.9
- Python version: 3.6.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
----> 2 from transformers import BigBirdTokenizer,BigBirdModel
ImportError: cannot import name 'BigBirdTokenizer'
```
```
----> 4 from transformers import (AlbertModel, AlbertTokenizer, BartModel, BigBirdModel, BigBirdTokenizer,
5 BartTokenizer, BertModel, BertTokenizer,
6 CamembertModel, CamembertTokenizer, CTRLModel,
ImportError: cannot import name 'BigBirdModel'
```
### Expected behavior
Model should get imported without the error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18914/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18913/events
|
https://github.com/huggingface/transformers/pull/18913
| 1,364,023,155
|
PR_kwDOCUB6oc4-ekvt
| 18,913
|
Fix XLA fp16 and bf16 error checking
|
{
"login": "ymwangg",
"id": 19481308,
"node_id": "MDQ6VXNlcjE5NDgxMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/19481308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ymwangg",
"html_url": "https://github.com/ymwangg",
"followers_url": "https://api.github.com/users/ymwangg/followers",
"following_url": "https://api.github.com/users/ymwangg/following{/other_user}",
"gists_url": "https://api.github.com/users/ymwangg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ymwangg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ymwangg/subscriptions",
"organizations_url": "https://api.github.com/users/ymwangg/orgs",
"repos_url": "https://api.github.com/users/ymwangg/repos",
"events_url": "https://api.github.com/users/ymwangg/events{/privacy}",
"received_events_url": "https://api.github.com/users/ymwangg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"XLA already identifies the device type and publishes it in the environment variable for distributed training:\r\n```\r\nXRT_MULTI_PROCESSING_DEVICE=\"device:ordinal\"\r\n```\r\n\r\nEg: XRT_MULTI_PROCESSING_DEVICE=GPU:0\r\nEg: XRT_MULTI_PROCESSING_DEVICE=TPU:0\r\n\r\nRefer to relevant device specific setup in PT-XLA: https://github.com/pytorch/xla/blob/master/torch_xla/distributed/xla_multiprocessing.py#L219-L276\r\n\r\nLooking into single worker training now.",
"There might be a much easier solution:\r\nThe presence of environment variables of [TPU_NUM_DEVICES](https://github.com/pytorch/xla/blob/6e42e7cb3af01d9f8909e112a3be0148a87acad0/torch_xla/distributed/xla_multiprocessing.py#L79-L88) or [XRT_TPU_CONFIG](https://github.com/pytorch/xla/blob/e7e7fe406c7f276469d3e47ecd23e8c9423ab1b5/TROUBLESHOOTING.md) indicates a TPU environment.\r\nThe presence of environment variable GPU_NUM_DEVICES indicates a GPU environment.\r\n",
"The most systematic logic should be like:\r\n\r\n```\r\nand not (self.device.type == \"xla\" and is_torch_tpu_available() and xm.xla_device() == gpu)\r\n```\r\n\r\nInspired by this logic, it might be better to have an API to return the current torch_xla device so that we could use it here:\r\n\r\n```\r\nand not(self.device_type \"xla\" and get_torch_xla_device() != gpu)\r\n```",
"I'm fine with both solutions :-)",
"I just realized torch_xla already has the API to distinguish different backends.\r\n```\r\ntorch_xla._XLAC._xla_real_devices([str(device)])\r\n```\r\nFor GPU, it returns\r\n```\r\n['GPU:0']\r\n```\r\nFor TPU, it returns\r\n```\r\n['TPU:0']\r\n```\r\nI'll try to implement it with this API.",
"It looks like \"torch.device\" as a type hint can cause CI failure if pytorch is not installed. I've removed it."
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
This PR fixes a bug introduced in https://github.com/huggingface/transformers/pull/15022 that wrongly throws an error when training with an XLA device + fp16. `GPU_NUM_DEVICES` is unset by torch_xla in distributed training [here](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/xla_multiprocessing.py#L229).
Tested using the following scripts:
```sh
GPU_NUM_DEVICES=8 python -m torch_xla.distributed.xla_spawn --num_gpus 8 language-modeling/run_mlm.py \
--model_name_or_path bert-base-uncased \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--overwrite_output_dir true \
--output_dir /tmp/test-mlm \
--per_device_train_batch_size 10 \
--do_eval \
--fp16 true \
--do_train \
--num_train_epochs 3 \
--optim adamw_torch_xla
```
Thanks to @Lokiiiiii @comaniac for reporting this issue.
cc @sgugger
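Per the review discussion, `torch_xla._XLAC._xla_real_devices` reports strings like `"GPU:0"` or `"TPU:0"`. A hedged sketch of the resulting check (the helper name is hypothetical):

```python
def xla_device_is_gpu(real_devices):
    # fp16 is valid on XLA:GPU but not on XLA:TPU, so the fp16/bf16
    # error check should only fire when the real device is not a GPU.
    return bool(real_devices) and real_devices[0].upper().startswith("GPU")
```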
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18913/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18913/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18913",
"html_url": "https://github.com/huggingface/transformers/pull/18913",
"diff_url": "https://github.com/huggingface/transformers/pull/18913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18913.patch",
"merged_at": 1662579917000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18912/events
|
https://github.com/huggingface/transformers/issues/18912
| 1,363,972,186
|
I_kwDOCUB6oc5RTJBa
| 18,912
|
Failed to import transformers.models.bart.modeling_tf_bart because no module named 'keras'
|
{
"login": "jybsuper",
"id": 7698145,
"node_id": "MDQ6VXNlcjc2OTgxNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7698145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jybsuper",
"html_url": "https://github.com/jybsuper",
"followers_url": "https://api.github.com/users/jybsuper/followers",
"following_url": "https://api.github.com/users/jybsuper/following{/other_user}",
"gists_url": "https://api.github.com/users/jybsuper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jybsuper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jybsuper/subscriptions",
"organizations_url": "https://api.github.com/users/jybsuper/orgs",
"repos_url": "https://api.github.com/users/jybsuper/repos",
"events_url": "https://api.github.com/users/jybsuper/events{/privacy}",
"received_events_url": "https://api.github.com/users/jybsuper/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Just had this exact same issue. Thank you for the fix!",
"Shouldn't this be open until gets fixed?",
"Confirming the issue still exists when using latest TF (2.11). Downgrading TF to 2.9 fix the issue.",
"https://stackoverflow.com/questions/74586892/no-module-named-keras-saving-hdf5-format\r\n\r\nworking for me on TF==2.9"
] | 1,662
| 1,685
| 1,662
|
NONE
| null |
The following line of code causes an error: `ModuleNotFoundError: No module named 'keras'` whenever I try to initialize a Bart model:
https://github.com/huggingface/transformers/blob/0a632f076d6b275690176b79c64c5559e1240b05/src/transformers/modeling_tf_utils.py#L39
Replacing it with `from tensorflow.python.keras.saving.hdf5_format import save_attributes_to_hdf5_group` made the error go away.
Is this a bug, or did I miss installing a necessary package?
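A version-tolerant import shim along these lines (a sketch, not the official fix; the two module paths are the candidates mentioned above, and which one exists depends on the installed TF/Keras versions) sidesteps the hard-coded path:

```python
import importlib


def resolve_save_attributes():
    """Return the first importable save_attributes_to_hdf5_group, else None."""
    candidates = [
        "keras.saving.hdf5_format",                    # standalone Keras
        "tensorflow.python.keras.saving.hdf5_format",  # Keras bundled in TF
    ]
    for name in candidates:
        try:
            module = importlib.import_module(name)
            return getattr(module, "save_attributes_to_hdf5_group")
        except (ImportError, AttributeError):
            continue
    return None
```

If neither path resolves, the function returns `None` instead of raising at import time, which makes the failure mode explicit.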
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18912/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18912/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18911
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18911/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18911/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18911/events
|
https://github.com/huggingface/transformers/pull/18911
| 1,363,913,391
|
PR_kwDOCUB6oc4-eOQn
| 18,911
|
[DeepSpeed ZeRO3] Fix performance degradation in sharded models
|
{
"login": "tjruwase",
"id": 4271600,
"node_id": "MDQ6VXNlcjQyNzE2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4271600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjruwase",
"html_url": "https://github.com/tjruwase",
"followers_url": "https://api.github.com/users/tjruwase/followers",
"following_url": "https://api.github.com/users/tjruwase/following{/other_user}",
"gists_url": "https://api.github.com/users/tjruwase/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tjruwase/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tjruwase/subscriptions",
"organizations_url": "https://api.github.com/users/tjruwase/orgs",
"repos_url": "https://api.github.com/users/tjruwase/repos",
"events_url": "https://api.github.com/users/tjruwase/events{/privacy}",
"received_events_url": "https://api.github.com/users/tjruwase/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger, I was thinking about this discovery and my suggestion is that we do this not just for deepspeed.\r\n\r\nThe thing is - with sharded models like bloom-176 (72 shards) - 90% of the time this code:\r\n\r\nhttps://github.com/tjruwase/transformers/blob/81dbd0ba6c5422dec33dd31cf076e44d96d2d968/src/transformers/modeling_utils.py#L436\r\n\r\nis doing nothing since its `state_dict`'s \"payload\" doesn't match the params of the submodule it's called for, as most of the time they are in another shard.\r\n\r\nNot sure of the cost or the actual promised saving, but it will save many unnecessary `module._load_from_state_dict(*args)` calls.\r\n\r\nEspecially since the code is already there anyway.\r\n\r\nThoughts?",
"Yes, we could probably ignore the load when there is no parameter to load indeed. Will make a PR this morning.",
"Took a stab at it in #18937. Thinking more of it, I'm not sure if we'll get a sensible gain as I expect the calls to `module._load_from_state_dict` to be mostly noops but we can certainly train and measure the difference!",
"I followed up here: https://github.com/huggingface/transformers/pull/18937#pullrequestreview-1100945027\r\n"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
When sharded models were added, the deepspeed/zero3 branch of pretrained-weight loading
https://github.com/huggingface/transformers/blob/7d5fde991d598370d961be8cb7add6541e2b59ce/src/transformers/modeling_utils.py#L427-L429
became ~N-shards-times slower, since it wastefully gathered weights that weren't in the `state_dict`:
**So, for example, with BLOOM-176, which has 72 shards, loading was ~70x slower under deepspeed zero3 and nvme offload!**
This fix handles sharded models by computing the intersection of the `state_dict` keys with the parameter keys of the submodule currently being loaded, so that only the weights that actually get updated are gathered; when the intersection is empty, the rest of the branch is skipped altogether.
The 1-shard use case still works, since the intersection is then 100%.
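The core idea can be sketched as a key intersection (a simplified illustration, not the PR's actual code; all names here are made up):

```python
def params_to_load(state_dict_keys, module_param_names, prefix=""):
    """Keys this submodule should load from the current shard (may be empty)."""
    wanted = {prefix + name for name in module_param_names}
    return sorted(wanted & set(state_dict_keys))


# Suppose shard 1 holds only the embedding weights: the embedding submodule
# matches one key, while the decoder submodule matches nothing and can skip
# the expensive gather entirely.
shard = {"embed.weight"}
print(params_to_load(shard, ["weight", "bias"], prefix="embed."))    # ['embed.weight']
print(params_to_load(shard, ["weight", "bias"], prefix="decoder."))  # []
```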
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18911/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18911",
"html_url": "https://github.com/huggingface/transformers/pull/18911",
"diff_url": "https://github.com/huggingface/transformers/pull/18911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18911.patch",
"merged_at": 1662561860000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18910
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18910/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18910/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18910/events
|
https://github.com/huggingface/transformers/pull/18910
| 1,363,903,843
|
PR_kwDOCUB6oc4-eMN1
| 18,910
|
Accelerator end training
|
{
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
Add `accelerator.end_training()` to the ends of the example scripts. This ensures that trackers call their ending/finishing functions.
@sgugger @muellerzr
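The pattern being added is essentially a finalization hook at the end of each script (an illustrative sketch with stand-in classes so it runs without accelerate installed; only the `end_training()` name comes from this PR, everything else is made up):

```python
class FakeTracker:
    """Stand-in for an experiment tracker (e.g. a wandb/tensorboard logger)."""
    def __init__(self):
        self.finished = False

    def finish(self):
        self.finished = True


class FakeAccelerator:
    """Stand-in for accelerate's Accelerator, holding registered trackers."""
    def __init__(self, trackers):
        self.trackers = trackers

    def end_training(self):
        # end_training() gives each tracker a chance to flush/close its run
        for tracker in self.trackers:
            tracker.finish()


tracker = FakeTracker()
accelerator = FakeAccelerator([tracker])
try:
    pass  # training steps would go here
finally:
    accelerator.end_training()  # without this, trackers may never finalize
```

Without the final call, trackers that buffer results can end the process without ever writing their closing state, which is exactly what this PR guards against.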
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18910/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18910",
"html_url": "https://github.com/huggingface/transformers/pull/18910",
"diff_url": "https://github.com/huggingface/transformers/pull/18910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18910.patch",
"merged_at": 1662551186000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18909
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18909/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18909/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18909/events
|
https://github.com/huggingface/transformers/pull/18909
| 1,363,831,128
|
PR_kwDOCUB6oc4-d9OH
| 18,909
|
Fixed typos in comments of OPTDecoderLayer
|
{
"login": "nickypro",
"id": 52249105,
"node_id": "MDQ6VXNlcjUyMjQ5MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/52249105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickypro",
"html_url": "https://github.com/nickypro",
"followers_url": "https://api.github.com/users/nickypro/followers",
"following_url": "https://api.github.com/users/nickypro/following{/other_user}",
"gists_url": "https://api.github.com/users/nickypro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickypro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickypro/subscriptions",
"organizations_url": "https://api.github.com/users/nickypro/orgs",
"repos_url": "https://api.github.com/users/nickypro/repos",
"events_url": "https://api.github.com/users/nickypro/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickypro/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18909). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
# What does this PR do?
Fixed typos in comments of OPTDecoderLayer (used in Meta OPT models)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18909/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18909",
"html_url": "https://github.com/huggingface/transformers/pull/18909",
"diff_url": "https://github.com/huggingface/transformers/pull/18909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18909.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18908
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18908/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18908/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18908/events
|
https://github.com/huggingface/transformers/pull/18908
| 1,363,733,430
|
PR_kwDOCUB6oc4-dodf
| 18,908
|
[New Model] Add TimeSformer model
|
{
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@NielsRogge I have added some tests, variable names require some work but maybe I can update names during PR review?",
"Hi @NielsRogge @fcakyon do you need help with finishing this and merging it? I'm happy to add the finishing touches",
"Currently I am a bit busy with my phd qualification exam, barely finding any free time. If I cannot find any time in the following weeks you may continue @Darktex ",
"@NielsRogge can you review it again, please? Tried to address all your concerns 👍 ",
"> Thanks for your work, looks great to me.\r\n\r\nThanks for all the constructive feedback!\r\n",
"Hello @NielsRogge, thanks a lot for all the help you have provided. I have opened multiple PRs for the config and model files of all timesformer variants:\r\nhttps://huggingface.co/facebook/timesformer-base-finetuned-k400/discussions/1\r\nhttps://huggingface.co/facebook/timesformer-hr-finetuned-ssv2/discussions/1\r\nhttps://huggingface.co/facebook/timesformer-hr-finetuned-k600/discussions/1\r\nhttps://huggingface.co/facebook/timesformer-hr-finetuned-k400/discussions/1\r\nhttps://huggingface.co/facebook/timesformer-base-finetuned-k600/discussions/2\r\nhttps://huggingface.co/facebook/timesformer-base-finetuned-ssv2/discussions/2\r\n\r\nIs there anything I should do about this PR?",
"Looks great on my side and ready to merge! Will let @NielsRogge double-check on last time and merge if he's happy :-)",
"Thanks for all your work!\r\n\r\nFeel free to share on social media, we'll amplify ;)",
"Thanks a lot @NielsRogge, will share it after preparing a space :)",
"> Thanks for all your work!\r\n> \r\n> Feel free to share on social media, we'll amplify ;)\r\n\r\nI have shared it on [Twitter](https://twitter.com/fcakyon/status/1599305469017067521) and [Linkedin](https://www.linkedin.com/posts/fcakyon_timesformer-is-the-first-transformer-based-activity-7005071269373095936-Sb9-?utm_source=share&utm_medium=member_desktop) with the space demo link 🚀 "
] | 1,662
| 1,670
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/18724
- [x] Create a working environment for successful inference with the original source code
- [x] Create a debugging script for the original source code
- [x] Separate original model from original preprocessing pipeline
- [x] Test the original model with transformers/VideoMAEFeatureExtractor preprocessing pipeline
- [x] Port TimeSformer to HuggingFace/transformers
- [x] Adds tests for transformers/TimeSformer implementation
- [x] Update variable names to be more explicit
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18908/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18908/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18908",
"html_url": "https://github.com/huggingface/transformers/pull/18908",
"diff_url": "https://github.com/huggingface/transformers/pull/18908.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18908.patch",
"merged_at": 1669968805000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18907
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18907/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18907/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18907/events
|
https://github.com/huggingface/transformers/pull/18907
| 1,363,669,617
|
PR_kwDOCUB6oc4-da9m
| 18,907
|
Fix tflongformer int dtype
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"But it doesn't look to good to CI 😅 ",
"Working on that bit!",
"Quick update: I did a lot of dtype casting which should resolve the remaining issues. Because TFLED has some sections copied from TFLongFormer, TFLED got updated as well. However, the TFLongformerEmbeddings were copied from TFRobertaEmbeddings, but I broke this connection because we have to do some extra casting in TFLongformer due to things like the `global_attention_mask`, and I don't really want to mess with TFRoberta when it doesn't have issues, because it's a heavily-used model.",
"Looks good! 👍 \r\n\r\nThanks for taking care of this one"
] | 1,662
| 1,663
| 1,663
|
MEMBER
| null |
TFLongformer had a lot of `int32` dtypes in its code, caused by `tf.convert_to_tensor()` defaulting to int32 when passed a list of ints, as well as some explicit `int32` lines. We prefer `int64` across our models, so I've converted everything to use that.
Fixes #13632
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18907/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18907",
"html_url": "https://github.com/huggingface/transformers/pull/18907",
"diff_url": "https://github.com/huggingface/transformers/pull/18907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18907.patch",
"merged_at": 1663001470000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18906
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18906/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18906/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18906/events
|
https://github.com/huggingface/transformers/pull/18906
| 1,363,610,060
|
PR_kwDOCUB6oc4-dOPv
| 18,906
|
Fix incorrect size of input for 1st strided window length in `Perplexity of fixed-length models`
|
{
"login": "ekagra-ranjan",
"id": 3116519,
"node_id": "MDQ6VXNlcjMxMTY1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekagra-ranjan",
"html_url": "https://github.com/ekagra-ranjan",
"followers_url": "https://api.github.com/users/ekagra-ranjan/followers",
"following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}",
"gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions",
"organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs",
"repos_url": "https://api.github.com/users/ekagra-ranjan/repos",
"events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #18887
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18906/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18906",
"html_url": "https://github.com/huggingface/transformers/pull/18906",
"diff_url": "https://github.com/huggingface/transformers/pull/18906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18906.patch",
"merged_at": 1662492012000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18905
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18905/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18905/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18905/events
|
https://github.com/huggingface/transformers/pull/18905
| 1,363,550,072
|
PR_kwDOCUB6oc4-dCNM
| 18,905
|
Add checks for more workflow jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
Apply the same change as in #18583, so that we get a Slack message when something goes very wrong in the CIs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18905/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18905",
"html_url": "https://github.com/huggingface/transformers/pull/18905",
"diff_url": "https://github.com/huggingface/transformers/pull/18905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18905.patch",
"merged_at": 1662547898000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18904
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18904/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18904/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18904/events
|
https://github.com/huggingface/transformers/pull/18904
| 1,363,507,170
|
PR_kwDOCUB6oc4-c5Hw
| 18,904
|
Add BART DLM PyTorch pretraining example
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18904). All of your documentation changes will be reflected on that endpoint.",
"Hey @BramVanroy! Thanks for making a start on this PR. In general, we aim to mirror the original repo's functionality as closely as possible. In this case, porting from fairseq is the way to go! So great to see your comments regarding consistency with fariseq, and yes to all of them! If indeed these changes are required, we'll need to update the Flax example accordingly.\r\n\r\nWe can batch samples with datasets.map by passing the `num_workers` arg. To pre-process samples on a specified number of CPU workers concurrently:\r\n```python\r\ndataset = dataset.map(map_fn, num_workers=data_args.preprocessing_num_workers)\r\n```\r\nThis I think is the way to go for processing the dataset being the closest to fariseq. \r\n\r\nAdding auxiliary scripts for config/tokenizer creation is a great idea - all for it! Makes it far easier to reproduce and run the example :-)"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
Implements a pretraining example for BART (denoising language modeling), with a big focus on matching the original fairseq data denoising as closely as possible, implemented at the dataloader level instead of the dataset level.
Heavily inspired by the fairseq implementation and the FLAX implementation. (See `HF (Flax), fairseq, and current implementation`.) Looking for some feedback. Please see `Questions/Uncertainties`.
# Some notes
## Default values
The defaults are set to the [given BART args](https://github.com/facebookresearch/fairseq/issues/1899#issuecomment-1069429320). This differs from the Flax defaults in one respect, namely `poisson_lambda`, which is now set to `3.5` instead of `3.0`.
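For intuition about what `poisson_lambda` controls (this is a sketch, not the example's actual sampling code): text-infilling span lengths are drawn from a Poisson distribution with the given λ, which can be emulated in pure Python with Knuth's algorithm. The helper name `sample_poisson` is hypothetical.

```python
import math
import random

def sample_poisson(lam: float, rng: random.Random) -> int:
    """Sample one value from Poisson(lam) using Knuth's algorithm."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
# span lengths for text infilling; their mean approaches lambda
lengths = [sample_poisson(3.5, rng) for _ in range(10_000)]
mean = sum(lengths) / len(lengths)
```

With `poisson_lambda=3.5` the average masked span is 3.5 tokens, slightly longer than with the Flax default of 3.0.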
## HF (Flax), fairseq, and current implementation
There are some differences in implementation between fairseq, the HF FLAX example, and this PyTorch implementation.
- `argwhere` in the Flax example
[in this position](https://github.com/huggingface/transformers/blob/65fb71bc762c46bb067306c1fd083b1cba87a095/examples/flax/language-modeling/run_bart_dlm_flax.py#L319)
is not the same as what is happening in fairseq. [In fairseq](https://github.com/facebookresearch/fairseq/blob/a6a63279422f846a3c2f6c45b9c96d6951cc4b82/fairseq/data/denoising_dataset.py#L230)
we check explicitly that the previous token was not a "full stop" (padding token) but in HF we just check whether the
current token is a full stop. In the current example I also explicitly check that the next token is not a full stop,
in case of padding. (However, in practice that should be a non-issue since all batches/samples should have the
same sequence length and there should not be any padding.)
- I found that the result of sentence permutation was not consistent in terms of where the separating pad token ended
up ([bug report](https://github.com/facebookresearch/fairseq/issues/4695)), so I have reimplemented that method so
that sentences in a sequence are still separated by a padding token, even after permutation.
- In HF FLAX, the token_mask is restricted to [non-special and non-padding tokens](https://github.com/huggingface/transformers/blob/65fb71bc762c46bb067306c1fd083b1cba87a095/examples/flax/language-modeling/run_bart_dlm_flax.py#L361).
In Fairseq, by default, only the first and last tokens are excluded and [all others](https://github.com/facebookresearch/fairseq/blob/1bba712622b8ae4efb3eb793a8a40da386fe11d0/fairseq/data/denoising_dataset.py#L241)
are prone to masking. The HF implementation seems sensible so I follow that. `get_special_tokens_mask` includes the
padding token, though, so no need to add that separately.
- The Flax example does not include methods to add more noise. I have ported those as well.
- However, I did not adapt `add_insertion_noise` to work well with padded sequences. So the inserted noise may occur
ANYWHERE. It is unclear whether this is intended behavior.
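As a toy illustration of the boundary checks discussed in the first bullet above (hypothetical token ids, not the PR's actual code), the fairseq-style previous-token check can be combined with the additional next-token check like this:

```python
FULL_STOP = 1  # hypothetical id of the padding/"full stop" separator token

def sentence_ends(token_ids):
    """Return indices of full-stop tokens that count as sentence boundaries.

    A full stop counts only if the previous token is not itself a full stop
    (fairseq-style check) and the next token is not one either (the extra
    check added here), so runs of consecutive full stops, e.g. trailing
    padding, do not produce spurious boundaries.
    """
    ends = []
    for i, tok in enumerate(token_ids):
        if tok != FULL_STOP:
            continue
        prev_ok = i == 0 or token_ids[i - 1] != FULL_STOP
        next_ok = i + 1 >= len(token_ids) or token_ids[i + 1] != FULL_STOP
        if prev_ok and next_ok:
            ends.append(i)
    return ends
```

Checking only the current token (the Flax approach) would instead report every full stop in a padded run as a boundary.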
Alternatively, we could implement all this processing on the dataset level and use `Dataset.map`. This has some
advantages:
- more true to fairseq implementation (sample level rather than batch level);
- cached.
... and disadvantages:
- potentially slower (not batched), although we can integrate a batched approach. But as discussed above, this will be
less true to the original fairseq implementation in `add_insertion_noise`
- every sample is always processed the same. So in small datasets which are seen multiple times by the model, the
same sample will always be processed the same. In a dataloader, that will not be the case because the processing
occurs on every iteration rather than once before training.
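The determinism trade-off above can be sketched in a few lines (a toy noising function, not the real denoising transforms):

```python
import random

def add_noise(sample, rng):
    # stand-in for the denoising transforms: randomly mask one token
    i = rng.randrange(len(sample))
    return sample[:i] + ["<mask>"] + sample[i + 1 :]

sample = ["a", "b", "c", "d"]

# Dataset.map style: processed once, cached, identical on every epoch
rng = random.Random(0)
cached = add_noise(sample, rng)
epochs_map = [cached for _ in range(3)]

# dataloader/collate style: re-noised on every iteration, so the
# three epoch samples are independent draws and will typically differ
rng = random.Random(0)
epochs_loader = [add_noise(sample, rng) for _ in range(3)]
```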
## Questions/Uncertainties
- Do the padding tokens still serve a purpose after permutation? (Teaching the model to learn to detect sentence boundaries?) They _can_ get masked and noised.
- It seems that `add_insertion_noise` can insert noise _anywhere_ (also in fairseq), which means that it can also overwrite special
tokens and that sequences don't necessarily end with an EOS token. Is that a problem?
- I have now added auxiliary scripts for config/tokenizer creation when pre-training. Should I remove those? In the FLAX example, these steps are [described inline](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#bart-denoising-language-modeling) but without a given script. So we could also just do that.
- I have explicitly added fingerprints (hashed) because in the past I have run into issues when using spaCy with `Dataset.map` (every time you load a spaCy model it has a different hash, so the processing happens every time). I don't see a better way, but feel free to share ideas. Maybe someone from the `datasets` team can chime in, too.
# Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/5096#issuecomment-1237227809
- [x] Did you make sure to update the documentation with your changes?
# Who can review?
- bart: @patrickvonplaten @patil-suraj
- maintained examples (not research project or legacy): @patil-suraj
- flax implementation authors: @sanchit-gandhi @duongna21
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18904/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18904",
"html_url": "https://github.com/huggingface/transformers/pull/18904",
"diff_url": "https://github.com/huggingface/transformers/pull/18904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18904.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18903
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18903/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18903/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18903/events
|
https://github.com/huggingface/transformers/pull/18903
| 1,363,437,776
|
PR_kwDOCUB6oc4-cqOi
| 18,903
|
TF: final bias as a layer in seq2seq models (replicate TFMarian fix)
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
This PR replicates the exact same change as in https://github.com/huggingface/transformers/pull/18833 (applied to TFMarian) to the other seq2seq TF models. **_The change is exactly the same for all models._**
In essence, weights that are not in layers are not stored/loaded with `.save_weights()` and `.load_weights()`, the functions we use to store to/load from the hub. These changes move `final_logits_bias` to a layer. Many models do NOT use this bias, but some do.
⚠️ Prior to this change, existing TF models from `Helsinki-NLP` (`TFMarian`) were wrong (and new conversions failed the automatic checks). I will revisit the canonical models using these architectures to ensure they are okay, and open PRs with weights if not.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18903/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18903/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18903",
"html_url": "https://github.com/huggingface/transformers/pull/18903",
"diff_url": "https://github.com/huggingface/transformers/pull/18903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18903.patch",
"merged_at": 1662555783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18902
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18902/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18902/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18902/events
|
https://github.com/huggingface/transformers/pull/18902
| 1,363,157,416
|
PR_kwDOCUB6oc4-bt76
| 18,902
|
Generate: add model class validation
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger @patrickvonplaten I've requested a re-review of this PR. As per @patrickvonplaten's suggestion, the PR was upgraded to contain the exact class the user should use in the exception (as opposed to pointing to all generate-compatible auto classes).\r\n\r\nIn the process of building it, I've noticed that the previous version of this PR was incorrect anyways -- PT and TF had a default `prepare_inputs_for_generation`, so we couldn't rely on its existence. Only 1 model was using this default, so I removed it and implemented it in the missing model. The default `prepare_inputs_for_generation` was a public method, but since this PR blocks the use of `generate()` with classes that are not intended to be used with it anyways, removing the public method should have little impact. Nevertheless, it is a point to consider in the review!\r\n\r\n_______________________________\r\n\r\nHere's an example with the current version of the PR:\r\n```\r\n>>> from transformers import AutoModel\r\n>>> model = AutoModel.from_pretrained(\"distilgpt2\")\r\n>>> model.generate(\"foo\")\r\nTypeError: The current model class (GPT2Model) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'GPT2LMHeadModel'}\r\n```"
] | 1,662
| 1,663
| 1,663
|
MEMBER
| null |
# What does this PR do?
Fixes #18210
This PR adds model class validation at the start of generate (all model classes inherit `GenerationMixin`, but few can use `generate()`). It also adds an exception that attempts to redirect the users to the right classes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18902/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18902",
"html_url": "https://github.com/huggingface/transformers/pull/18902",
"diff_url": "https://github.com/huggingface/transformers/pull/18902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18902.patch",
"merged_at": 1663057184000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18901
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18901/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18901/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18901/events
|
https://github.com/huggingface/transformers/pull/18901
| 1,362,942,265
|
PR_kwDOCUB6oc4-a_yD
| 18,901
|
unpin slack_sdk version
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
The issue in Slack SDK 3.18.2 was fixed in 3.18.3, so we no longer need to pin to 3.18.1.
For more details, see
https://github.com/slackapi/python-slack-sdk/pull/1259#issuecomment-1237731450
https://github.com/slackapi/python-slack-sdk/issues/1261
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18901/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18901",
"html_url": "https://github.com/huggingface/transformers/pull/18901",
"diff_url": "https://github.com/huggingface/transformers/pull/18901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18901.patch",
"merged_at": 1662482521000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18900
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18900/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18900/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18900/events
|
https://github.com/huggingface/transformers/issues/18900
| 1,361,862,147
|
I_kwDOCUB6oc5RLF4D
| 18,900
|
Converting ruGPT3 model (based on GPT2) to ONNX format
|
{
"login": "Gooogr",
"id": 32438284,
"node_id": "MDQ6VXNlcjMyNDM4Mjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/32438284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gooogr",
"html_url": "https://github.com/Gooogr",
"followers_url": "https://api.github.com/users/Gooogr/followers",
"following_url": "https://api.github.com/users/Gooogr/following{/other_user}",
"gists_url": "https://api.github.com/users/Gooogr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gooogr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gooogr/subscriptions",
"organizations_url": "https://api.github.com/users/Gooogr/orgs",
"repos_url": "https://api.github.com/users/Gooogr/repos",
"events_url": "https://api.github.com/users/Gooogr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gooogr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello, I meet the same situation, have you solved it? @Gooogr "
] | 1,662
| 1,676
| 1,665
|
NONE
| null |
### System Info
## Environment info
Using Google Colab with GPU enabled
- `transformers` version: 4.21.3
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no (at least not call it directly)
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
@lewtun
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to export the ruGPT3 model to ONNX format in a Google Colab notebook
Code based on:
https://huggingface.co/docs/transformers/main/en/serialization#exporting-a-model-to-onnx
```
! pip install transformers[onnx] datasets sentencepiece >> pip_log.txt
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GPT2LMHeadModel
from onnxruntime import InferenceSession
# Load model and tokenizer
name = 'sberbank-ai/rugpt3medium_based_on_gpt2'
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.sep_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained(name)
# Save to disk
tokenizer.save_pretrained("local-pt-checkpoint")
model.save_pretrained("local-pt-checkpoint")
# Run ONNX converter
! python -m transformers.onnx --model=local-pt-checkpoint onnx/ --atol=2e-5
```
I'm getting error
> Some weights of the model checkpoint at local-pt-checkpoint were not used when initializing GPT2Model: ['lm_head.weight']
> - This IS expected if you are initializing GPT2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
> - This IS NOT expected if you are initializing GPT2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
> Using framework PyTorch: 1.12.1+cu113
> Overriding 1 configuration item(s)
> - use_cache -> False
> /usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py:808: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> if batch_size <= 0:
> Traceback (most recent call last):
> File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> "__main__", mod_spec)
> File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module>
> main()
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 94, in main
> args.output,
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 336, in export
> return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
> File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 199, in export_pytorch
> opset_version=opset,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py", line 365, in export
> export_modules_as_functions,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 178, in export
> export_modules_as_functions=export_modules_as_functions,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 1084, in _export
> dynamic_axes=dynamic_axes,
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 727, in _model_to_graph
> graph, params, torch_out, module = _create_jit_graph(model, args)
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 602, in _create_jit_graph
> graph, torch_out = _trace_and_get_graph_from_model(model, args)
> File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 518, in _trace_and_get_graph_from_model
> model, args, strict=False, _force_outplace=False, _return_inputs_states=True
> File "/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py", line 1175, in _get_trace_graph
> outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
> return forward_call(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py", line 132, in forward
> self._force_outplace,
> File "/usr/local/lib/python3.7/dist-packages/torch/jit/_trace.py", line 118, in wrapper
> outs.append(self.inner(*trace_inputs))
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
> return forward_call(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
> result = self.forward(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 844, in forward
> inputs_embeds = self.wte(input_ids)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
> return forward_call(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
> result = self.forward(*input, **kwargs)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py", line 160, in forward
> self.norm_type, self.scale_grad_by_freq, self.sparse)
> File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2199, in embedding
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
> IndexError: index out of range in self
### Expected behavior
Link to the model
[ruGPT3 medium based on GPT2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2)
Given that the model architecture is based on GPT2, I assumed that the script would be able to convert it automatically.
Unfortunately, the error logs are not very detailed, and I don't know if this can even be solved at the Hugging Face level. Can I convert the model by defining a custom ONNX configuration? Or is there another workaround?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18900/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18899
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18899/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18899/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18899/events
|
https://github.com/huggingface/transformers/issues/18899
| 1,361,772,485
|
I_kwDOCUB6oc5RKv_F
| 18,899
|
NaN when training t5-large with bf16 on multiple GPUs
|
{
"login": "harshil-shah",
"id": 12370376,
"node_id": "MDQ6VXNlcjEyMzcwMzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12370376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshil-shah",
"html_url": "https://github.com/harshil-shah",
"followers_url": "https://api.github.com/users/harshil-shah/followers",
"following_url": "https://api.github.com/users/harshil-shah/following{/other_user}",
"gists_url": "https://api.github.com/users/harshil-shah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harshil-shah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshil-shah/subscriptions",
"organizations_url": "https://api.github.com/users/harshil-shah/orgs",
"repos_url": "https://api.github.com/users/harshil-shah/repos",
"events_url": "https://api.github.com/users/harshil-shah/events{/privacy}",
"received_events_url": "https://api.github.com/users/harshil-shah/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@LysandreJik perhaps you could suggest someone who can help with this please?",
"I believe @stas00 has some experience around bfloat16 and nans and may have an idea of where the issue may be coming from",
"I have tried t5-large, tested your script to work fine with t5-small - need to find a box with a few large gpus to test t5-large.\r\n\r\nMeanwhile, we should revisit the scaling. \r\n\r\nthe main benefit of using bf16 over fp16 is that there is very little risk of overflow - since bf16's numerical range is the same as of fp32, so no down scaling is needed here. \r\n\r\nBut perhaps we are hitting underflow here. There is a special tool we have for that - you can try to plug it in and observe where (most likely) underflow is happening \r\nhttps://huggingface.co/docs/transformers/debugging#underflow-and-overflow-detection\r\n\r\nBut then underflow would just lead to no learning and not really nan I think.\r\n\r\nI will try to experiment more with it once I'm able to run t5-large.\r\n\r\n\r\n\r\n ",
"Thanks @stas00 - I had a go at using the underflow/overflow detection tool but actually when I switched from `DataParallel` to `DistributedDataParallel` I didn't get nans with this toy example! I'll try to do some experiments with some real data next week and let you know if this solves it.",
"oh, great, then I don't need to look for a set of large GPUs :) Thank you for this update, @harshil-shah! \r\n\r\nIndeed please do let us know when you get a chance to experiment",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,666
| 1,666
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.15.0-1017-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): 2.4.4 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm getting `nan` immediately when training `t5-large` using `bfloat16` on multiple GPUs, but when I run the same script on a single GPU it's fine. I've made a small example below, which I'm running on a machine with 2 A100s. If I do `CUDA_VISIBLE_DEVICES=0 python script.py` the loss is fine, but if I just do `python script.py` I get `nan` from the first iteration.
```python
from typing import List, Tuple

import torch
from torch.utils.data import Dataset, DataLoader

import transformers


class MyDataset(Dataset):
    def __init__(
        self,
        data: List[List[str]],
        tokenizer: transformers.PreTrainedTokenizerFast,
    ) -> None:
        super().__init__()
        self._data = data
        self._tokenizer = tokenizer

    def __len__(
        self,
    ) -> int:
        return len(self._data)

    def __getitem__(
        self,
        index: int
    ) -> List[str]:
        return self._data[index]

    def collate_fn(
        self,
        batch: List[List[str]],
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
        prompts = [b[0] for b in batch]
        targets = [b[1] for b in batch]
        prompts_tokenized = self._tokenizer(
            text=prompts,
            padding=True,
            return_tensors="pt",
            return_attention_mask=True,
        )
        prompts_input_ids = prompts_tokenized["input_ids"]
        prompts_attention_mask = prompts_tokenized["attention_mask"]
        targets_tokenized = self._tokenizer(
            text=targets,
            padding=True,
            return_tensors="pt",
            return_attention_mask=True,
        )
        targets_input_ids = targets_tokenized["input_ids"]
        targets_attention_mask = targets_tokenized["attention_mask"]
        return (
            prompts_input_ids,
            prompts_attention_mask,
            targets_input_ids,
            targets_attention_mask,
        )


if __name__ == "__main__":
    model = transformers.T5ForConditionalGeneration.from_pretrained(
        "t5-large",
    )
    tokenizer = transformers.T5TokenizerFast.from_pretrained(
        "t5-large",
    )
    device = (
        torch.device("cuda:0")
        if torch.cuda.is_available()
        else torch.device("cpu")
    )
    multi_gpu = torch.cuda.device_count() > 1
    if multi_gpu:
        model = torch.nn.DataParallel(model)
    model = model.to(device)
    optimizer = transformers.Adafactor(
        params=model.parameters(),
        lr=1e-4,
        scale_parameter=False,
        relative_step=False,
    )
    grad_scaler = torch.cuda.amp.GradScaler(
        enabled=True,
    )
    my_data = [
        [f"This is sentence {i}.", f"This is sentence {i + 1}."]
        for i in range(1000000)
    ]
    dataset = MyDataset(
        data=my_data,
        tokenizer=tokenizer,
    )
    dataloader = DataLoader(
        dataset=dataset,
        batch_size=8,
        shuffle=True,
        collate_fn=dataset.collate_fn,
    )
    for batch in dataloader:
        with torch.autocast(
            enabled=True,
            device_type=device.type,
            dtype=torch.bfloat16,
        ):
            batch = [b.to(device) for b in batch]
            (
                prompts_input_ids,
                prompts_attention_mask,
                targets_input_ids,
                targets_attention_mask,
            ) = batch
            loss = model(
                input_ids=prompts_input_ids,
                attention_mask=prompts_attention_mask,
                labels=targets_input_ids,
            ).loss
        if multi_gpu:
            loss = loss.mean()
        grad_scaler.scale(loss).backward()
        grad_scaler.step(optimizer)
        grad_scaler.update()
        optimizer.zero_grad()
        print(f"Loss = {loss.item()}")
```
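One side note on the reproduction above (this is an editorial observation, not something the original report states): `GradScaler` exists to compensate for float16's narrow dynamic range, while `bfloat16` shares float32's exponent range, so a plain `backward()`/`step()` is typically sufficient under a bf16 autocast. A minimal CPU sketch of that pattern, with toy model and data standing in for T5:

```python
import torch

# Toy model and data; only the autocast/step pattern matters here.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(8, 4)
y = torch.randn(8, 2)

# bfloat16 autocast on CPU; no GradScaler involved.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), y)

loss.backward()
optimizer.step()
optimizer.zero_grad()
print(torch.isfinite(loss).item())
```

Whether dropping the scaler also avoids the multi-GPU `nan` reported here is untested; it only simplifies the loop.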
### Expected behavior
No `nan`s when training `t5-large` using `bfloat16` on multiple GPUs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18899/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18899/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18898
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18898/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18898/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18898/events
|
https://github.com/huggingface/transformers/pull/18898
| 1,361,678,823
|
PR_kwDOCUB6oc4-WuYb
| 18,898
|
Fix `test_tf_encode_plus_sent_to_model` for `LayoutLMv3`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"A question -- PT has an equivalent tokenizer test, yet I don't see this test being overwritten in PT's `layoutlmv3`. Do you know why that happens? 🤔 ",
"> A question -- PT has an equivalent tokenizer test, yet I don't see this test being overwritten in PT's layoutlmv3. Do you know why that happens?\r\n\r\nHi @gante I guess you are talking about \r\nhttps://github.com/huggingface/transformers/blob/dae0bfc525dd9867a2c9f5917cbf551fb9cc1732/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L1154\r\n(I didn't realized there is a PT version for this test until now)\r\n\r\nThat test method uses `boxes` (and overwrites the common one), see\r\nhttps://github.com/huggingface/transformers/blob/dae0bfc525dd9867a2c9f5917cbf551fb9cc1732/tests/models/layoutlmv3/test_tokenization_layoutlmv3.py#L1185\r\n\r\nI can change this PR to be more like that PT test.",
"Oh, right! I was looking for the test in the wrong file :)"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
# What does this PR do?
The recently added `TFLayoutLMv3Model` triggered the test `test_tf_encode_plus_sent_to_model`, which needs to prepare an extra argument `boxes` when calling tokenizer methods, otherwise we get an error
```text
> for word, box in zip(text, boxes):
E TypeError: 'NoneType' object is not iterable
```
[Currently failed test](https://github.com/huggingface/transformers/runs/8173669915?check_suite_focus=true)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18898/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18898",
"html_url": "https://github.com/huggingface/transformers/pull/18898",
"diff_url": "https://github.com/huggingface/transformers/pull/18898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18898.patch",
"merged_at": 1662468664000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18897
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18897/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18897/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18897/events
|
https://github.com/huggingface/transformers/pull/18897
| 1,361,652,699
|
PR_kwDOCUB6oc4-Wo8N
| 18,897
|
fixes bugs to handle non-dict output
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @alaradirik Thank you for the fix.\r\n\r\nHowever, I am wondering if we can let `self.owlvit` returns `OwlViTOutput` instead of `tuple` here. As you can see (I believe), it's not easy to understand the code when the tuple indices are used, and it also becomes more difficult for debugging in general. Let me know your thought on this 🙏 , thanks.",
"_The documentation is not available anymore as the PR was closed or merged._",
"H\r\n\r\n> Hi, @alaradirik Thank you for the fix.\r\n> \r\n> However, I am wondering if we can let `self.owlvit` returns `OwlViTOutput` instead of `tuple` here. As you can see (I believe), it's not easy to understand the code when the tuple indices are used, and it also becomes more difficult for debugging in general. Let me know your thought on this 🙏 , thanks.\r\n\r\nHi @ydshieh, self.owlvit already returns `OwlViTOutput`, which has a `return_dict` argument, the failing tests set `return_dict=False`. I think it'd be better to keep it as it is for consistency as OwlViTModel is almost identical to CLIPModel.",
"This line\r\nhttps://github.com/huggingface/transformers/blob/44471422502a4f8cb606f6a8f8d9ae41207f2c2a/src/transformers/models/owlvit/modeling_owlvit.py#L1270\r\ncould pass `return_dict=True`, and we can keep using the named outputs in the code.\r\n\r\nThis doesn't change the method's input and output, but make things easier to read/understand.\r\n\r\nOf course, the method `image_text_embedder` itself returns a tuple, and `OwlViTForObjectDetection.forward` will need to handle the tuple as it calls `image_text_embedder`. I am totally fine with this.\r\n\r\nThis is merely a suggestion (for the readability/debugging in the future). Let's see if @sgugger has any comment, and I let you make the final call :-) \r\n",
"> This line\r\n> \r\n> https://github.com/huggingface/transformers/blob/44471422502a4f8cb606f6a8f8d9ae41207f2c2a/src/transformers/models/owlvit/modeling_owlvit.py#L1270\r\n> \r\n> \r\n> could pass `return_dict=True`, and we can keep using the named outputs in the code.\r\n> This doesn't change the method's input and output, but make things easier to read/understand.\r\n> \r\nThat makes sense, and it's only a single line of code. I updated the PR, could you take a second look @ydshieh ?\r\n\r\n",
"LGTM! Thanks @alaradirik for the fix :-)",
"Thanks. Sorry about that, @alaradirik, we have to change it back to tuple sadly due to the limitation of `torchscript`.",
"> You can't force `return_dict=True` in any part of the model as this mode is not compatible with torchscript (see [here](https://github.com/huggingface/transformers/blob/f85acb4d73a84fe9bee5279068b0430fc391fb36/src/transformers/configuration_utils.py#L385)).\r\n> \r\n> So this change will make Owl-ViT irremediably incompatible with torchscript I believe.\r\n\r\nThank you @sgugger, I'm reverting to my previous commit then",
"I don't mean to bother here (i.e. not saying we should change again): but @sgugger I tried the commit that with `return_dict=True`, and `test_torch_fx_xxx` and `test_torchscrip_xxx` all pass under torch 1.12.1 and 1.11.0.\r\n\r\nHowever it I changed `configs_no_init.return_dict = False` to `True` in the tests, it will fail.\r\nIt looks like the trace will fail only if `dict` is used at the **final** outputs, but not in the intermediate computation.\r\n\r\n(just FYI only)",
"Oh in that case, feel free to use the dict outputs!",
"@alaradirik Let's not change again (back to `dict`), and feel free to merge as it is.\r\n\r\nI can open a separate PR to use `dict` after a more thorough verification.",
"> @alaradirik Let's not change again (back to `dict`), and feel free to merge as it is.\r\n> \r\n> I can open a separate PR to use `dict` after a more thorough verification.\r\n\r\nI'm merging this for now but yes, `return_dict`would only be set to True for the intermediate computation in this case"
] | 1,662
| 1,665
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes OWL-ViT's failing slow tests: `test_torchscript_simple`, `test_torchscript_output_attentions`, `test_torchscript_output_hidden_state`.
The failures were due to accessing outputs by attribute name, which breaks when `return_dict=False` makes the model return a plain tuple. The bugs were introduced in this [PR](https://github.com/huggingface/transformers/pull/18734). Switching to indexing fixes the issue: `output.last_hidden_state` -> `output[0]`
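The tuple-vs-dict distinction can be sketched without `transformers` installed. The class below is a toy stand-in for a HF-style model output (not the real `ModelOutput` implementation): integer indexing works on both the dict-like output and the plain tuple, while attribute access only works on the former, which is why `output[0]` is the safe spelling:

```python
from collections import OrderedDict

class FakeModelOutput(OrderedDict):
    """Toy output type: supports key, attribute, and integer-index access."""
    def __getitem__(self, key):
        if isinstance(key, int):
            # Integer indexing falls back to positional order, like a tuple.
            return list(self.values())[key]
        return super().__getitem__(key)

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

out_dict = FakeModelOutput(last_hidden_state="h", pooler_output="p")
out_tuple = tuple(out_dict.values())  # what return_dict=False would yield

print(out_dict[0])                 # positional access works on the dict-like output
print(out_tuple[0])                # ...and on the plain tuple
print(out_dict.last_hidden_state)  # attribute access works only on the dict-like output
# out_tuple.last_hidden_state would raise AttributeError
```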
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18897/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18897",
"html_url": "https://github.com/huggingface/transformers/pull/18897",
"diff_url": "https://github.com/huggingface/transformers/pull/18897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18897.patch",
"merged_at": 1662470015000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18896
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18896/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18896/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18896/events
|
https://github.com/huggingface/transformers/pull/18896
| 1,361,632,954
|
PR_kwDOCUB6oc4-Wk2m
| 18,896
|
Correct naming pegasus x
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18896/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18896",
"html_url": "https://github.com/huggingface/transformers/pull/18896",
"diff_url": "https://github.com/huggingface/transformers/pull/18896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18896.patch",
"merged_at": 1662369900000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18895
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18895/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18895/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18895/events
|
https://github.com/huggingface/transformers/pull/18895
| 1,361,625,802
|
PR_kwDOCUB6oc4-WjVt
| 18,895
|
[wip: testing doc raises]
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
testing: https://github.com/huggingface/doc-builder/pull/141/
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18895/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18895",
"html_url": "https://github.com/huggingface/transformers/pull/18895",
"diff_url": "https://github.com/huggingface/transformers/pull/18895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18895.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18894
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18894/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18894/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18894/events
|
https://github.com/huggingface/transformers/pull/18894
| 1,361,567,231
|
PR_kwDOCUB6oc4-WXFb
| 18,894
|
Mention TF and Flax checkpoints in the Auto model tutorial
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
MEMBER
| null |
Loading TF and Flax checkpoints within the PyTorch architecture circumvents the security risk.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18894/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18894",
"html_url": "https://github.com/huggingface/transformers/pull/18894",
"diff_url": "https://github.com/huggingface/transformers/pull/18894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18894.patch",
"merged_at": 1662368979000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18893
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18893/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18893/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18893/events
|
https://github.com/huggingface/transformers/pull/18893
| 1,361,372,268
|
PR_kwDOCUB6oc4-Vufi
| 18,893
|
update docs word error
|
{
"login": "zkep",
"id": 36965534,
"node_id": "MDQ6VXNlcjM2OTY1NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/36965534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zkep",
"html_url": "https://github.com/zkep",
"followers_url": "https://api.github.com/users/zkep/followers",
"following_url": "https://api.github.com/users/zkep/following{/other_user}",
"gists_url": "https://api.github.com/users/zkep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zkep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zkep/subscriptions",
"organizations_url": "https://api.github.com/users/zkep/orgs",
"repos_url": "https://api.github.com/users/zkep/repos",
"events_url": "https://api.github.com/users/zkep/events{/privacy}",
"received_events_url": "https://api.github.com/users/zkep/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
This PR fixes a word error in the documentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18893/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18893",
"html_url": "https://github.com/huggingface/transformers/pull/18893",
"diff_url": "https://github.com/huggingface/transformers/pull/18893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18893.patch",
"merged_at": 1662400573000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18892
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18892/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18892/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18892/events
|
https://github.com/huggingface/transformers/pull/18892
| 1,361,336,333
|
PR_kwDOCUB6oc4-VnVB
| 18,892
|
README_zh-hans.md Document Correction
|
{
"login": "zkep",
"id": 36965534,
"node_id": "MDQ6VXNlcjM2OTY1NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/36965534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zkep",
"html_url": "https://github.com/zkep",
"followers_url": "https://api.github.com/users/zkep/followers",
"following_url": "https://api.github.com/users/zkep/following{/other_user}",
"gists_url": "https://api.github.com/users/zkep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zkep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zkep/subscriptions",
"organizations_url": "https://api.github.com/users/zkep/orgs",
"repos_url": "https://api.github.com/users/zkep/repos",
"events_url": "https://api.github.com/users/zkep/events{/privacy}",
"received_events_url": "https://api.github.com/users/zkep/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
This should be 预训练 (pretrained), not 亿训练.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18892/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18892",
"html_url": "https://github.com/huggingface/transformers/pull/18892",
"diff_url": "https://github.com/huggingface/transformers/pull/18892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18892.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18891
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18891/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18891/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18891/events
|
https://github.com/huggingface/transformers/pull/18891
| 1,361,257,732
|
PR_kwDOCUB6oc4-VYCm
| 18,891
|
Adds image-guided object detection support to OWL-ViT
|
{
"login": "unography",
"id": 5240449,
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unography",
"html_url": "https://github.com/unography",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"organizations_url": "https://api.github.com/users/unography/orgs",
"repos_url": "https://api.github.com/users/unography/repos",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"received_events_url": "https://api.github.com/users/unography/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @alaradirik I added an initial version for the image-guided obj detection. I still have to add tests and some other cleanup, however, I've some doubts right now\r\n\r\n1. Is the handling of query_embedding correct, while doing the mean and finding out the most dissimilar embedding?\r\n2. How should the postprocessor handle this case, when there are no labels as such for this?\r\n3. Any other implementation details I may have missed",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18891). All of your documentation changes will be reflected on that endpoint.",
"Hi @alaradirik, I made the changes as per the review comments, could you check if they're fine?\r\n\r\nI'm working on test cases currently. In the file [here](https://github.com/huggingface/transformers/blob/main/tests/models/owlvit/test_modeling_owlvit.py#L530), is it okay if I reuse `pixel_values` itself for `query_pixel_values`?\r\n\r\nSo the above line would return \r\n```\r\nreturn config, pixel_values, input_ids, attention_mask, pixel_values\r\n```\r\nand be re-used as\r\n```\r\nconfig_and_inputs = self.prepare_config_and_inputs()\r\nconfig, pixel_values, input_ids, attention_mask, query_pixel_values = config_and_inputs\r\n```\r\n\r\nAnd apart from the test cases, are there any other changes that I need to make?\r\n",
"> Hi @alaradirik, I made the changes as per the review comments, could you check if they're fine?\r\n> \r\n> I'm working on test cases currently. In the file [here](https://github.com/huggingface/transformers/blob/main/tests/models/owlvit/test_modeling_owlvit.py#L530), is it okay if I reuse `pixel_values` itself for `query_pixel_values`?\r\n> \r\n> So the above line would return\r\n> \r\n> ```\r\n> return config, pixel_values, input_ids, attention_mask, pixel_values\r\n> ```\r\n> \r\n> and be re-used as\r\n> \r\n> ```\r\n> config_and_inputs = self.prepare_config_and_inputs()\r\n> config, pixel_values, input_ids, attention_mask, query_pixel_values = config_and_inputs\r\n> ```\r\n> \r\n> And apart from the test cases, are there any other changes that I need to make?\r\n\r\nHi @unography, thank you for the contribution once again! \r\n\r\nAs for your question regarding the tests, yes, it'd make sense to return `config, pixel_values, input_ids, attention_mask, pixel_values` with `OwlViTForObjectDetectionTester.prepare_config_and_inputs()`.\r\n\r\nWe can add a line to this function to create `query_pixel_values` as follows:\r\n`query_pixel_values = floats_tensor([self.batch_size, self.num_channels, self.query_image_size, self.query_image_size])`",
"> Thank you for the contribution once again! The code seems to be in great shape and I just left a couple of comments regarding minor style corrections and docstrings.\r\n> \r\n> The only issue is the following tests fail:\r\n> \r\n> * OwlViTVisionModelTest.test_model\r\n> * OwlViTVisionModelTest.test_model_outputs_equivalence\r\n> * OwlViTModelTest.test_model_outputs_equivalence\r\n> \r\n> I believe this is due to making `pixel_values` the main argument in `OwlViTForObjectDetection.forward()` but I couldn't pinpoint the exact issue. @ydshieh could you take a look at the test scripts when you have time?\r\n\r\nHi, I couldn't see any test being run by CI. Could you share the error messges?\r\n\r\n@unography Could you follow the instruction below to refresh your CircleCI permission\r\nhttps://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-\r\nso that the CI could be triggered. Thanks.",
"Sure @alaradirik , I'll go through the review comments and make the changes. And actually, on my local, I'm able to get the test cases passed, on running \r\n```\r\nRUN_SLOW=1 pytest tests/models/owlvit/test_modeling_owlvit.py\r\n```\r\nI'll check once more\r\n\r\n\r\nHi @ydshieh , I'm not able to refresh the permission for some reason, I get an error `Something Unexpected Happened` on going to `https://app.circleci.com/settings/user`\r\nI don't have a CircleCI account linked to my Github actually, not sure how to reset the token and run the tests",
"> Hi, I couldn't see any test being run by CI. Could you share the error messges?\r\n\r\n@ydshieh, of course, here is the full error log. `return_dict` argument is causing the errors but there hasn't been any changes in the modeling or test files to cause this error.\r\n\r\n```\r\n―――――――――――――――――――――――――――――――――――――――――――――― OwlViTVisionModelTest.test_model ――――――――――――――――――――――――――――――――――――――――――――――\r\n\r\nself = <tests.models.owlvit.test_modeling_owlvit.OwlViTVisionModelTest testMethod=test_model>\r\n\r\n def test_model(self):\r\n config_and_inputs = self.model_tester.prepare_config_and_inputs()\r\n> self.model_tester.create_and_check_model(*config_and_inputs)\r\n\r\ntests/models/owlvit/test_modeling_owlvit.py:181: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/models/owlvit/test_modeling_owlvit.py:123: in create_and_check_model\r\n self.parent.assertEqual(result.pooler_output.shape, (self.batch_size, num_patches + 1, self.hidden_size))\r\nE AssertionError: torch.Size([12, 32]) != (12, 257, 32)\r\n\r\n tests/models/owlvit/test_modeling_owlvit.py ⨯✓s✓ 13% █▍ \r\n\r\n―――――――――――――――――――――――――――――――――――― OwlViTVisionModelTest.test_model_outputs_equivalence ――――――――――――――――――――――――――――――――――――\r\n\r\nself = <tests.models.owlvit.test_modeling_owlvit.OwlViTVisionModelTest testMethod=test_model_outputs_equivalence>\r\n\r\n def test_model_outputs_equivalence(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n def set_nan_tensor_to_zero(t):\r\n t[t != t] = 0\r\n return t\r\n \r\n def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}):\r\n with torch.no_grad():\r\n tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)\r\n dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple()\r\n \r\n def recursive_check(tuple_object, dict_object):\r\n if 
isinstance(tuple_object, (List, Tuple)):\r\n for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object):\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\n elif isinstance(tuple_object, Dict):\r\n for tuple_iterable_value, dict_iterable_value in zip(\r\n tuple_object.values(), dict_object.values()\r\n ):\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\n elif tuple_object is None:\r\n return\r\n else:\r\n self.assertTrue(\r\n torch.allclose(\r\n set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5\r\n ),\r\n msg=(\r\n \"Tuple and dict output are not equal. Difference:\"\r\n f\" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:\"\r\n f\" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has\"\r\n f\" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}.\"\r\n ),\r\n )\r\n \r\n recursive_check(tuple_output, dict_output)\r\n \r\n for model_class in self.all_model_classes:\r\n model = model_class(config)\r\n model.to(torch_device)\r\n model.eval()\r\n \r\n tuple_inputs = self._prepare_for_class(inputs_dict, model_class)\r\n dict_inputs = self._prepare_for_class(inputs_dict, model_class)\r\n> check_equivalence(model, tuple_inputs, dict_inputs)\r\n\r\ntests/test_modeling_common.py:1548: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_common.py:1512: in check_equivalence\r\n tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)\r\n/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/owlvit/modeling_owlvit.py:950: in forward\r\n return self.vision_model(\r\n/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl\r\n return 
forward_call(*input, **kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = OwlViTVisionTransformer(\r\n (embeddings): OwlViTVisionEmbeddings(\r\n (patch_embedding): Conv2d(3, 32, kernel_size=(2, ..., elementwise_affine=True)\r\n )\r\n )\r\n )\r\n (post_layernorm): LayerNorm((32,), eps=1e-05, elementwise_affine=True)\r\n)\r\npixel_values = tensor([[[[0.6554, 0.4061, 0.0338, ..., 0.4825, 0.8356, 0.8248],\r\n [0.3508, 0.3514, 0.2522, ..., 0.1101, 0.8...07, 0.7844, 0.0197, ..., 0.9217, 0.2872, 0.7545],\r\n [0.6380, 0.8504, 0.1550, ..., 0.4501, 0.0423, 0.5167]]]])\r\noutput_attentions = False, output_hidden_states = False, return_dict = False\r\n\r\n @add_start_docstrings_to_model_forward(OWLVIT_VISION_INPUTS_DOCSTRING)\r\n @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=OwlViTVisionConfig)\r\n def forward(\r\n self,\r\n pixel_values: torch.FloatTensor,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[Tuple, BaseModelOutputWithPooling]:\r\n r\"\"\"\r\n Returns:\r\n \r\n \"\"\"\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n \r\n hidden_states = self.embeddings(pixel_values)\r\n hidden_states = self.pre_layernorm(hidden_states)\r\n encoder_outputs = self.encoder(\r\n inputs_embeds=hidden_states,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n \r\n last_hidden_state = encoder_outputs[0]\r\n pooled_output = last_hidden_state[:, 0, :]\r\n pooled_output = 
self.post_layernorm(pooled_output)\r\n \r\n return BaseModelOutputWithPooling(\r\n last_hidden_state=last_hidden_state,\r\n pooler_output=pooled_output,\r\n> hidden_states=encoder_outputs.hidden_states,\r\n attentions=encoder_outputs.attentions,\r\n )\r\nE AttributeError: 'tuple' object has no attribute 'hidden_states'\r\n\r\nsrc/transformers/models/owlvit/modeling_owlvit.py:903: AttributeError\r\n\r\n tests/models/owlvit/test_modeling_owlvit.py ⨯sssss✓s✓✓✓✓✓ss✓✓✓✓sssss✓✓✓s✓sss✓✓✓✓✓✓✓✓✓✓✓s✓✓✓s✓✓sssss✓s✓✓✓✓✓ss✓✓ 47% ████▋ \r\n ✓✓sssss✓s✓sss✓✓✓✓✓✓✓✓✓s✓s✓✓✓ss✓ 63% ██████▍ \r\n\r\n――――――――――――――――――――――――――――――――――――――― OwlViTModelTest.test_model_outputs_equivalence ―――――――――――――――――――――――――――――――――――――――\r\n\r\nself = <tests.models.owlvit.test_modeling_owlvit.OwlViTModelTest testMethod=test_model_outputs_equivalence>\r\n\r\n def test_model_outputs_equivalence(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n def set_nan_tensor_to_zero(t):\r\n t[t != t] = 0\r\n return t\r\n \r\n def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}):\r\n with torch.no_grad():\r\n tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)\r\n dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple()\r\n \r\n def recursive_check(tuple_object, dict_object):\r\n if isinstance(tuple_object, (List, Tuple)):\r\n for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object):\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\n elif isinstance(tuple_object, Dict):\r\n for tuple_iterable_value, dict_iterable_value in zip(\r\n tuple_object.values(), dict_object.values()\r\n ):\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\n elif tuple_object is None:\r\n return\r\n else:\r\n self.assertTrue(\r\n torch.allclose(\r\n set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5\r\n ),\r\n msg=(\r\n \"Tuple and 
dict output are not equal. Difference:\"\r\n f\" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:\"\r\n f\" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has\"\r\n f\" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}.\"\r\n ),\r\n )\r\n \r\n recursive_check(tuple_output, dict_output)\r\n \r\n for model_class in self.all_model_classes:\r\n model = model_class(config)\r\n model.to(torch_device)\r\n model.eval()\r\n \r\n tuple_inputs = self._prepare_for_class(inputs_dict, model_class)\r\n dict_inputs = self._prepare_for_class(inputs_dict, model_class)\r\n> check_equivalence(model, tuple_inputs, dict_inputs)\r\n\r\ntests/test_modeling_common.py:1548: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_common.py:1512: in check_equivalence\r\n tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)\r\n/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/owlvit/modeling_owlvit.py:1132: in forward\r\n vision_outputs = self.vision_model(\r\n/opt/miniconda3/envs/hf/lib/python3.8/site-packages/torch/nn/modules/module.py:1110: in _call_impl\r\n return forward_call(*input, **kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = OwlViTVisionTransformer(\r\n (embeddings): OwlViTVisionEmbeddings(\r\n (patch_embedding): Conv2d(3, 32, kernel_size=(2, ..., elementwise_affine=True)\r\n )\r\n )\r\n )\r\n (post_layernorm): LayerNorm((32,), eps=1e-05, elementwise_affine=True)\r\n)\r\npixel_values = tensor([[[[0.4672, 0.5573, 0.4972, ..., 0.3060, 0.1213, 0.4710],\r\n [0.1233, 0.0373, 0.8195, ..., 0.5669, 0.8...20, 0.2224, 0.6059, ..., 0.2634, 0.5912, 0.3576],\r\n [0.1761, 
0.1272, 0.9066, ..., 0.9368, 0.1087, 0.4829]]]])\r\noutput_attentions = False, output_hidden_states = False, return_dict = False\r\n\r\n @add_start_docstrings_to_model_forward(OWLVIT_VISION_INPUTS_DOCSTRING)\r\n @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=OwlViTVisionConfig)\r\n def forward(\r\n self,\r\n pixel_values: torch.FloatTensor,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[Tuple, BaseModelOutputWithPooling]:\r\n r\"\"\"\r\n Returns:\r\n \r\n \"\"\"\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n \r\n hidden_states = self.embeddings(pixel_values)\r\n hidden_states = self.pre_layernorm(hidden_states)\r\n encoder_outputs = self.encoder(\r\n inputs_embeds=hidden_states,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n \r\n last_hidden_state = encoder_outputs[0]\r\n pooled_output = last_hidden_state[:, 0, :]\r\n pooled_output = self.post_layernorm(pooled_output)\r\n \r\n return BaseModelOutputWithPooling(\r\n last_hidden_state=last_hidden_state,\r\n pooler_output=pooled_output,\r\n> hidden_states=encoder_outputs.hidden_states,\r\n attentions=encoder_outputs.attentions,\r\n )\r\nE AttributeError: 'tuple' object has no attribute 'hidden_states'\r\n\r\nsrc/transformers/models/owlvit/modeling_owlvit.py:903: AttributeError\r\n```",
"@alaradirik \r\n\r\nhttps://github.com/huggingface/transformers/blob/bb61e30962c0a6cf866e7e8e5a75b7d86d8589c2/src/transformers/models/owlvit/modeling_owlvit.py\r\n\r\nFrom the file, it looks like the latest version in this PR is different from the version that produced the error you provided above.\r\n\r\nSee \r\nhttps://github.com/huggingface/transformers/blob/bb61e30962c0a6cf866e7e8e5a75b7d86d8589c2/src/transformers/models/owlvit/modeling_owlvit.py#L893-L902\r\nwhere there is \r\n```\r\n if not return_dict:\r\n return (last_hidden_state, pooled_output) + encoder_outputs[1:]\r\n```\r\nbut not in your error message.",
"> Sure @alaradirik , I'll go through the review comments and make the changes. And actually, on my local, I'm able to get the test cases passed, on running\r\n> \r\n> ```\r\n> RUN_SLOW=1 pytest tests/models/owlvit/test_modeling_owlvit.py\r\n> ```\r\n> \r\n> I'll check once more\r\n> \r\n> Hi @ydshieh , I'm not able to refresh the permission for some reason, I get an error `Something Unexpected Happened` on going to `https://app.circleci.com/settings/user` I don't have a CircleCI account linked to my Github actually, not sure how to reset the token and run the tests\r\n\r\nI triggered it :) ",
"@ydshieh great, thank you! I hadn't pulled the latest changes on this branch.\r\n\r\n@unography we can merge this PR once the remaining minor issues are addressed, thank you again for the clean implementation :)",
"Hi @unography! Just wanted to ask if you'll have time to work on this PR this week? \r\n\r\nThe OWL-ViT paper will be presented at ECCV in less than 2 weeks and I can work on the remaining issues / code clean-ups if you don't have the time.",
"Hi @alaradirik and @unography , thanks for working on this. \r\n\r\nI hope this helps development, I was playing with this branch using the following code\r\n\r\n```python\r\nfrom PIL import Image\r\nimport torch\r\n\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\n\r\nimage = Image.open(\"./images/image.jpeg\").convert(\"RGB\")\r\nquery = Image.open(\"./images/query.png\").convert(\"RGB\")\r\n\r\ninputs = processor(query_image=query, images=image, return_tensors=\"pt\")\r\n\r\noutputs = model(**inputs)\r\n\r\n```\r\n\r\nUnfortunately, it gives the following error\r\n\r\n```\r\n File \"/workspace/demo-hf.py\", line 17, in <module>\r\n outputs = model(**inputs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1186, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/owlvit/modeling_owlvit.py\", line 1578, in forward\r\n query_embeds = self.embed_image_query(query_image_feats, query_feature_map)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/owlvit/modeling_owlvit.py\", line 1404, in embed_image_query\r\n mean_sim = torch.einsum(\"d,id->i\", mean_embeds, selected_embeddings)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/functional.py\", line 360, in einsum\r\n return _VF.einsum(equation, operands) # type: ignore[attr-defined]\r\nRuntimeError: einsum(): the number of subscripts in the equation (2) does not match the number of dimensions (1) for operand 1 and no ellipsis was given\r\n```\r\n\r\nThis is quite surprising since tests are green. 
The code fails inside (see stack trace above) `OwlViTForObjectDetection.embed_image_query` I am lacking the deep knowledge of the model/original code base, but to embed an image query should we just take the first token of the last layer? \r\n\r\nThe code fails at line `1404` since (with my example)` both inputs are 1D tensors\r\n\r\n```python\r\nmean_sim = torch.einsum(\"d,id->i\", mean_embeds, selected_embeddings) # both are 1D tensors\r\n```\r\n\r\nI am also curios to ask what is exactly going on in this function, maybe I am able to help somehow.\r\n\r\nThanks!\r\n",
"> Hi @unography! Just wanted to ask if you'll have time to work on this PR this week?\r\n> \r\n> The OWL-ViT paper will be presented at ECCV in less than 2 weeks and I can work on the remaining issues / code clean-ups if you don't have the time.\r\n\r\nHi @alaradirik, I'll work on this today, but if I'm unable to continue I'll ping you and let you know. My apologies for the delay with this.",
"Hi @alaradirik, I've added a few fixes, but unfortunately, I'm unable to find time to contribute more right now. I hope my current changes are clear enough that you can update with remaining changes ",
"> ```python\r\n> from PIL import Image\r\n> import torch\r\n> \r\n> from transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n> \r\n> processor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\n> model = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\n> \r\n> image = Image.open(\"./images/image.jpeg\").convert(\"RGB\")\r\n> query = Image.open(\"./images/query.png\").convert(\"RGB\")\r\n> \r\n> inputs = processor(query_image=query, images=image, return_tensors=\"pt\")\r\n> \r\n> outputs = model(**inputs)\r\n> ```\r\n\r\nHi @FrancescoSaverioZuppichini, sorry for my late reply and thanks for your input! Could you be running a previous version of the code? If not, could you add your system info? I can't replicate this error on my local.",
"> Hi @alaradirik, I've added a few fixes, but unfortunately, I'm unable to find time to contribute more right now. I hope my current changes are clear enough that you can update with remaining changes\r\n\r\nHi @unography, sorry for the delay! And of course, I'd be happy to finish this up, could you add me to your transformers repo as a collaborator?",
"I think the PR is good to go, @NielsRogge @sgugger could you do a final review when you have the time?\r\n\r\n\r\nI will update the model card and the OWL-ViT notebooks demo with this [PR](https://github.com/huggingface/notebooks/pull/256) after this PR is merged.",
"> > ```python\r\n> > from PIL import Image\r\n> > import torch\r\n> > \r\n> > from transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n> > \r\n> > processor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\n> > model = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\n> > \r\n> > image = Image.open(\"./images/image.jpeg\").convert(\"RGB\")\r\n> > query = Image.open(\"./images/query.png\").convert(\"RGB\")\r\n> > \r\n> > inputs = processor(query_image=query, images=image, return_tensors=\"pt\")\r\n> > \r\n> > outputs = model(**inputs)\r\n> > ```\r\n> \r\n> Hi @FrancescoSaverioZuppichini, sorry for my late reply and thanks for your input! Could you be running a previous version of the code? If not, could you add your system info? I can't replicate this error on my local.\r\n\r\nYes :) That was fixed by @unography with a later commit",
"So I've tested the current branch using the same image and (more or less) the same query from the [official notebook (that uses `ViT-B/16`)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#inference-playground).\r\n\r\nParams:\r\n`min_confidence = 0.6`\r\n`nms_threshold = 0.3` \r\n\r\n<img width=\"1512\" alt=\"Screenshot 2022-10-21 at 15 55 29\" src=\"https://user-images.githubusercontent.com/15908060/197212918-2170543d-6cdc-41f0-a426-f80e0212e14e.png\">\r\n\r\nI am not sure how to replicate the parameters with your implementation, but I am not able to get anything close to it\r\n\r\n```python\r\nfrom PIL import Image\r\nimport torch\r\n\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch16\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch16\")\r\n\r\nimage = Image.open(\"./images/image.jpeg\").convert(\"RGB\")\r\nquery = Image.open(\"./images/query.png\").convert(\"RGB\")\r\n\r\ninputs = processor(query_image=query, images=image, return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\n\r\nw, h = image.size\r\noutputs = processor.post_process(outputs, torch.tensor([[h, w]]))\r\nprint(outputs[0]['scores'].mean()) # 9.3531e-08\r\nprint(torch.where(outputs[0]['scores'] > .3)) # empty\r\n```\r\nPasted below `image` and `query` \r\n\r\n\r\n<img width=\"209\" alt=\"query\" src=\"https://user-images.githubusercontent.com/15908060/197228062-33044dd2-17aa-4419-aee6-a811935ea7ce.png\">\r\n\r\nHope it helps :) \r\n\r\n",
"> So I've tested the current branch using the same image and (more or less) the same query from the [official notebook (that uses `ViT-B/16`)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#inference-playground).\r\n> \r\n> Params: `min_confidence = 0.6` `nms_threshold = 0.3`\r\n> \r\n> <img alt=\"Screenshot 2022-10-21 at 15 55 29\" width=\"1512\" src=\"https://user-images.githubusercontent.com/15908060/197212918-2170543d-6cdc-41f0-a426-f80e0212e14e.png\">\r\n> \r\n> I am not sure how to replicate the parameters with your implementation, but I am not able to get anything close to it\r\n> \r\n> ```python\r\n> from PIL import Image\r\n> import torch\r\n> \r\n> from transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n> \r\n> processor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch16\")\r\n> model = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch16\")\r\n> \r\n> image = Image.open(\"./images/image.jpeg\").convert(\"RGB\")\r\n> query = Image.open(\"./images/query.png\").convert(\"RGB\")\r\n> \r\n> inputs = processor(query_image=query, images=image, return_tensors=\"pt\")\r\n> outputs = model(**inputs)\r\n> \r\n> w, h = image.size\r\n> outputs = processor.post_process(outputs, torch.tensor([[h, w]]))\r\n> print(outputs[0]['scores'].mean()) # 9.3531e-08\r\n> print(torch.where(outputs[0]['scores'] > .3)) # empty\r\n> ```\r\n> \r\n> Pasted below `image` and `query`\r\n> \r\n>  <img alt=\"query\" width=\"209\" src=\"https://user-images.githubusercontent.com/15908060/197228062-33044dd2-17aa-4419-aee6-a811935ea7ce.png\">\r\n> \r\n> Hope it helps :)\r\n\r\nSame problem. A working demo would be helpful to users.",
"Hey @NielsRogge @sgugger, I added an `OwlViTForImageGuidedObjectDetection` head class in order to keep the forward signature of `OwlViTForObjectDetection` the same but I'm guessing I need to create separate repos on the hub for this class. Is there a way around this? Or shall I add image guided object detection as a method of the `OwlViTForObjectDetection` class?",
"> Hey @NielsRogge @sgugger, I added an `OwlViTForImageGuidedObjectDetection` head class in order to keep the forward signature of `OwlViTForObjectDetection` the same but I'm guessing I need to create separate repos on the hub for this class. Is there a way around this?\r\n\r\nWhy? If the model is structured the same way, the weights should be loaded from the checkpoint with no issue.\r\n\r\n",
"> > Hey @NielsRogge @sgugger, I added an `OwlViTForImageGuidedObjectDetection` head class in order to keep the forward signature of `OwlViTForObjectDetection` the same but I'm guessing I need to create separate repos on the hub for this class. Is there a way around this?\r\n> \r\n> Why? If the model is structured the same way, the weights should be loaded from the checkpoint with no issue.\r\n\r\nI meant creating separate repos for the same checkpoint to load the `OwlViTForImageGuidedObjectDetection` head but yes, it seems doable.\r\n\r\nWith that said, I ended up restructuring image guided detection as a method of the `OwlViTForObjectDetection` class since this is more consistent with the original work and `OwlViTForImageGuidedObjectDetection` would not be a trainable class as it just repurposes the pretrained zero-shot detection model.\r\n\r\nI fixed the detection issues (redundant normalization of visual embeddings + lack of postprocessing) and created a separate [PR](https://github.com/huggingface/notebooks/pull/256) to update the [demo notebook](https://github.com/alaradirik/notebooks/blob/update-owlvit-demo/examples/zeroshot_object_detection_with_owlvit.ipynb). \r\n\r\n@NielsRogge @sgugger could you re-review when you have the time?",
"Hi @alaradirik, sorry my notifications got messed up, I was able to go through the comments only now. Do I need to change anything for merging? Upstream url or anything else?",
"> Hi @alaradirik, sorry my notifications got messed up, I was able to go through the comments only now. Do I need to change anything for merging? Upstream url or anything else?\r\n\r\nHey @unography no problem at all! I'm about to merge a clean [PR](https://github.com/huggingface/transformers/pull/20136) with the correct upstream. Could you give me your email address so that I can add you as the co-author to my commits? ",
"@alaradirik sure, this is my email - k4r4n.dhruv@gmail.com",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
This adds support for doing object detection with OWL-ViT using query image(s)
For https://github.com/huggingface/transformers/issues/18748
cc: @alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18891/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18891",
"html_url": "https://github.com/huggingface/transformers/pull/18891",
"diff_url": "https://github.com/huggingface/transformers/pull/18891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18891.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18890
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18890/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18890/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18890/events
|
https://github.com/huggingface/transformers/issues/18890
| 1,361,163,788
|
I_kwDOCUB6oc5RIbYM
| 18,890
|
BART example does not produce expected masks
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Seems that it was a combination of: too short sequence length and not the best way of calculating the actual coverage (not accounting for span masks). Revised example that yields the expected results:\r\n\r\n```python\r\nimport math\r\nfrom itertools import chain\r\nfrom typing import Dict, List\r\n\r\nimport numpy as np\r\nfrom transformers import AutoTokenizer, BatchEncoding, PreTrainedTokenizerBase\r\n\r\n\r\ndef shift_tokens_right(input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:\r\n \"\"\"\r\n Shift input ids one token to the right.\r\n \"\"\"\r\n shifted_input_ids = np.zeros_like(input_ids)\r\n shifted_input_ids[:, 1:] = input_ids[:, :-1]\r\n shifted_input_ids[:, 0] = decoder_start_token_id\r\n\r\n shifted_input_ids = np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids)\r\n return shifted_input_ids\r\n\r\n\r\ndef collate(examples: List[Dict[str, List[int]]], tokenizer: PreTrainedTokenizerBase, decoder_start_token_id=2, permute_sentence_ratio=1.0,\r\n mask_ratio=0.3, poisson_lambda=3.5) -> BatchEncoding:\r\n # convert list to dict and tensorize input\r\n batch = BatchEncoding(\r\n {k: np.array([examples[i][k] for i in range(len(examples))], dtype=int) for k, v in examples[0].items()}\r\n )\r\n\r\n batch[\"labels\"] = batch[\"input_ids\"].copy()\r\n batch[\"decoder_input_ids\"] = shift_tokens_right(\r\n batch[\"labels\"], tokenizer.pad_token_id, decoder_start_token_id\r\n )\r\n # permuting sentences\r\n do_permute = False\r\n if permute_sentence_ratio > 0.0:\r\n batch[\"input_ids\"] = permute_sentences(batch[\"input_ids\"], tokenizer, permute_sentence_ratio=permute_sentence_ratio)\r\n do_permute = True\r\n\r\n # masking span of tokens (text infilling in the paper)\r\n if mask_ratio:\r\n batch[\"input_ids\"], batch[\"labels\"] = span_mask_tokens(\r\n batch[\"input_ids\"], batch[\"labels\"], tokenizer, do_permute=do_permute, poisson_lambda=poisson_lambda, mask_ratio=mask_ratio\r\n )\r\n\r\n # ignore pad 
tokens\r\n batch[\"attention_mask\"] = (batch[\"input_ids\"] != tokenizer.pad_token_id).astype(int)\r\n batch[\"decoder_attention_mask\"] = (batch[\"decoder_input_ids\"] != tokenizer.pad_token_id).astype(int)\r\n return batch\r\n\r\ndef permute_sentences(input_ids, tokenizer, permute_sentence_ratio=1.0):\r\n \"\"\"\r\n Shuffle sentences in each document.\r\n \"\"\"\r\n results = input_ids.copy()\r\n\r\n # find end locations of sentences\r\n end_sentence_mask = input_ids == tokenizer.pad_token_id\r\n sentence_ends = np.argwhere(end_sentence_mask)\r\n sentence_ends[:, 1] += 1\r\n example_has_multiple_sentences, num_sentences = np.unique(sentence_ends[:, 0], return_counts=True)\r\n num_sentences_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_sentences)}\r\n\r\n num_to_permute = np.ceil(num_sentences * permute_sentence_ratio).astype(int)\r\n num_to_permute_map = {\r\n sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_to_permute)\r\n }\r\n\r\n sentence_ends = np.split(sentence_ends[:, 1], np.unique(sentence_ends[:, 0], return_index=True)[1][1:])\r\n sentence_ends_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, sentence_ends)}\r\n\r\n for i in range(input_ids.shape[0]):\r\n if i not in example_has_multiple_sentences:\r\n continue\r\n substitutions = np.random.permutation(num_sentences_map[i])[: num_to_permute_map[i]]\r\n ordering = np.arange(0, num_sentences_map[i])\r\n ordering[substitutions] = substitutions[np.random.permutation(num_to_permute_map[i])]\r\n\r\n # write shuffled sentences into results\r\n index = 0\r\n for j in ordering:\r\n sentence = input_ids[i, (sentence_ends_map[i][j - 1] if j > 0 else 0) : sentence_ends_map[i][j]]\r\n results[i, index : index + sentence.shape[0]] = sentence\r\n index += sentence.shape[0]\r\n return results\r\n\r\n\r\ndef span_mask_tokens(input_ids, labels, tokenizer, do_permute=True, poisson_lambda=3.5, mask_ratio=0.3):\r\n 
\"\"\"\r\n Sampling text spans with span lengths drawn from a Poisson distribution and masking them.\r\n \"\"\"\r\n special_tokens_mask_labels = [\r\n tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()\r\n ]\r\n special_tokens_mask_inputs = [\r\n tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in input_ids.tolist()\r\n ]\r\n special_tokens_mask_labels = np.array(special_tokens_mask_labels, dtype=bool)\r\n special_tokens_mask_inputs = np.array(special_tokens_mask_inputs, dtype=bool)\r\n\r\n # determine how many tokens we need to mask in total\r\n is_token_mask = ~(input_ids == tokenizer.pad_token_id) & ~special_tokens_mask_inputs\r\n num_tokens_to_mask = int(math.ceil(is_token_mask.astype(float).sum() * mask_ratio))\r\n if num_tokens_to_mask == 0:\r\n return input_ids, labels\r\n\r\n # generate a sufficient number of span lengths\r\n span_lengths = np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))\r\n while np.cumsum(span_lengths, 0)[-1] < num_tokens_to_mask:\r\n span_lengths = np.concatenate(\r\n [span_lengths, np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))]\r\n )\r\n\r\n # remove all spans of length 0\r\n # note that BART inserts additional mask tokens where length == 0,\r\n # which we do not implement for now as it adds additional complexity\r\n span_lengths = span_lengths[span_lengths > 0]\r\n\r\n # trim to about num_tokens_to_mask tokens\r\n cutoff_idx = np.argmin(np.abs(np.cumsum(span_lengths, 0) - num_tokens_to_mask)) + 1\r\n span_lengths = span_lengths[:cutoff_idx]\r\n\r\n # randomly choose starting positions for masking\r\n token_indices = np.argwhere(is_token_mask == 1)\r\n span_starts = np.random.permutation(token_indices.shape[0])[: span_lengths.shape[0]]\r\n # prepare mask\r\n masked_indices = np.array(token_indices[span_starts])\r\n mask = np.full_like(input_ids, fill_value=False)\r\n\r\n # mask starting positions\r\n for mi in 
masked_indices:\r\n mask[tuple(mi)] = True\r\n span_lengths -= 1\r\n\r\n # fill up spans\r\n max_index = input_ids.shape[1] - 1\r\n remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)\r\n while np.any(remaining):\r\n masked_indices[remaining, 1] += 1\r\n for mi in masked_indices:\r\n mask[tuple(mi)] = True\r\n span_lengths -= 1\r\n remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)\r\n\r\n # place the mask tokens\r\n mask[np.where(special_tokens_mask_inputs)] = False\r\n input_ids[np.where(mask)] = tokenizer.mask_token_id\r\n if not do_permute:\r\n labels[np.where(mask == 0)] = -100\r\n else:\r\n labels[np.where(special_tokens_mask_labels)] = -100\r\n\r\n # remove mask tokens that are not starts of spans\r\n to_remove = (mask == 1) & np.roll((mask == 1), 1, 1)\r\n new_input_ids = np.full_like(input_ids, fill_value=tokenizer.pad_token_id)\r\n for i, example in enumerate(input_ids):\r\n new_example = example[~to_remove[i]]\r\n new_input_ids[i, : new_example.shape[0]] = new_example\r\n\r\n return new_input_ids, labels\r\n\r\n\r\ndef group_texts(examples, max_seq_length):\r\n # Concatenate all texts.\r\n concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can\r\n # customize this part to your needs.\r\n if total_length >= max_seq_length:\r\n total_length = (total_length // max_seq_length) * max_seq_length\r\n # Split by chunks of max_len.\r\n result = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n return result\r\n\r\n\r\ndef main():\r\n pass\r\n\r\ndef get_n_mask_tokens(tokens, mask_token_id):\r\n unique, counts = np.unique(tokens, return_counts=True)\r\n counter = dict(zip(unique, counts))\r\n return counter[mask_token_id]\r\n\r\n\r\ndef 
get_n_nonspecial_tokens(tokens, all_special_ids):\r\n return len([t for t in tokens if t not in all_special_ids])\r\n\r\n\r\nif __name__ == '__main__':\r\n tokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\n text = \"On september 2nd the Group of Seven (G7) countries launched a new attempt to regain the advantage in the\" \\\r\n \" West’s energy confrontation with Russia: imposing a price cap on purchases of Russian oil and oil\" \\\r\n \" products, probably to take effect on December 5th. \"\r\n encoded = tokenizer(text)\r\n input_ids = encoded[\"input_ids\"]\r\n n_input_toks = get_n_nonspecial_tokens(input_ids, tokenizer.all_special_ids)\r\n print(\"DECODED INPUT\", tokenizer.decode(input_ids))\r\n processed = collate([encoded], tokenizer)\r\n input_ids_out = processed[\"input_ids\"].squeeze().tolist()\r\n n_output_toks = get_n_nonspecial_tokens(input_ids_out, tokenizer.all_special_ids)\r\n print(\"DECODED OUTPUT\", tokenizer.decode(input_ids_out))\r\n\r\n n_masks_out = get_n_mask_tokens(input_ids_out, tokenizer.mask_token_id) + (n_input_toks-n_output_toks)\r\n print(f\"MASK RATIO ({n_masks_out}/{n_input_toks})\", n_masks_out/n_input_toks)\r\n```\r\n\r\nOutput:\r\n\r\n```\r\nDECODED INPUT <s>On september 2nd the Group of Seven (G7) countries launched a new attempt to regain the advantage in the West’s energy confrontation with Russia: imposing a price cap on purchases of Russian oil and oil products, probably to take effect on December 5th. </s>\r\nDECODED OUTPUT <s>On september 2nd the<mask>) countries launched a new attempt to regain the advantage in the West’s<mask> price cap on purchases of Russian oil and oil products, probably to take effect on December<mask>th. </s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>\r\nMASK RATIO (17/57) 0.2982456140350877\r\n```",
"Hey @BramVanroy! Thanks for posting this issue with such clear and concise code-snippets! In your opinion, would it be worth implementing these changes in the example script? Or are they specific to your use case? Perhaps you could summarise quickly the changes you made to computing the actual coverage! Thanks!",
"Hey @sanchit-gandhi. The issue was with how I calculated the coverage for reporting, so there was a mistake on my end not on yours!"
] | 1,662
| 1,662
| 1,662
|
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.21.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
### Who can help?
@sgugger @patil-suraj
The example was added by @duongna21 and @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is a minimal example that I extracted from [the example](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_bart_dlm_flax.py) (turned into functions, added example data).
```python
import math
from itertools import chain
from typing import Dict, List
import numpy as np
from transformers import AutoTokenizer, BatchEncoding, PreTrainedTokenizerBase
def shift_tokens_right(input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
"""
Shift input ids one token to the right.
"""
shifted_input_ids = np.zeros_like(input_ids)
shifted_input_ids[:, 1:] = input_ids[:, :-1]
shifted_input_ids[:, 0] = decoder_start_token_id
shifted_input_ids = np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids)
return shifted_input_ids
def collate(examples: List[Dict[str, List[int]]], tokenizer: PreTrainedTokenizerBase, decoder_start_token_id=2, permute_sentence_ratio=1.0,
mask_ratio=0.3, poisson_lambda=3.5) -> BatchEncoding:
# convert list to dict and tensorize input
batch = BatchEncoding(
{k: np.array([examples[i][k] for i in range(len(examples))], dtype=int) for k, v in examples[0].items()}
)
batch["labels"] = batch["input_ids"].copy()
batch["decoder_input_ids"] = shift_tokens_right(
batch["labels"], tokenizer.pad_token_id, decoder_start_token_id
)
# permuting sentences
do_permute = False
if permute_sentence_ratio > 0.0:
batch["input_ids"] = permute_sentences(batch["input_ids"], tokenizer, permute_sentence_ratio=permute_sentence_ratio)
do_permute = True
# masking span of tokens (text infilling in the paper)
if mask_ratio:
batch["input_ids"], batch["labels"] = span_mask_tokens(
batch["input_ids"], batch["labels"], tokenizer, do_permute=do_permute, poisson_lambda=poisson_lambda, mask_ratio=mask_ratio
)
# ignore pad tokens
batch["attention_mask"] = (batch["input_ids"] != tokenizer.pad_token_id).astype(int)
batch["decoder_attention_mask"] = (batch["decoder_input_ids"] != tokenizer.pad_token_id).astype(int)
return batch
def permute_sentences(input_ids, tokenizer, permute_sentence_ratio=1.0):
"""
Shuffle sentences in each document.
"""
results = input_ids.copy()
# find end locations of sentences
end_sentence_mask = input_ids == tokenizer.pad_token_id
sentence_ends = np.argwhere(end_sentence_mask)
sentence_ends[:, 1] += 1
example_has_multiple_sentences, num_sentences = np.unique(sentence_ends[:, 0], return_counts=True)
num_sentences_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_sentences)}
num_to_permute = np.ceil(num_sentences * permute_sentence_ratio).astype(int)
num_to_permute_map = {
sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, num_to_permute)
}
sentence_ends = np.split(sentence_ends[:, 1], np.unique(sentence_ends[:, 0], return_index=True)[1][1:])
sentence_ends_map = {sent_idx: count for sent_idx, count in zip(example_has_multiple_sentences, sentence_ends)}
for i in range(input_ids.shape[0]):
if i not in example_has_multiple_sentences:
continue
substitutions = np.random.permutation(num_sentences_map[i])[: num_to_permute_map[i]]
ordering = np.arange(0, num_sentences_map[i])
ordering[substitutions] = substitutions[np.random.permutation(num_to_permute_map[i])]
# write shuffled sentences into results
index = 0
for j in ordering:
sentence = input_ids[i, (sentence_ends_map[i][j - 1] if j > 0 else 0) : sentence_ends_map[i][j]]
results[i, index : index + sentence.shape[0]] = sentence
index += sentence.shape[0]
return results
def span_mask_tokens(input_ids, labels, tokenizer, do_permute=True, poisson_lambda=3.5, mask_ratio=0.3):
"""
Sampling text spans with span lengths drawn from a Poisson distribution and masking them.
"""
special_tokens_mask_labels = [
tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()
]
special_tokens_mask_inputs = [
tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in input_ids.tolist()
]
special_tokens_mask_labels = np.array(special_tokens_mask_labels, dtype=bool)
special_tokens_mask_inputs = np.array(special_tokens_mask_inputs, dtype=bool)
# determine how many tokens we need to mask in total
is_token_mask = ~(input_ids == tokenizer.pad_token_id) & ~special_tokens_mask_inputs
num_tokens_to_mask = int(math.ceil(is_token_mask.astype(float).sum() * mask_ratio))
if num_tokens_to_mask == 0:
return input_ids, labels
# generate a sufficient number of span lengths
span_lengths = np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))
while np.cumsum(span_lengths, 0)[-1] < num_tokens_to_mask:
span_lengths = np.concatenate(
[span_lengths, np.random.poisson(lam=poisson_lambda, size=(num_tokens_to_mask,))]
)
# remove all spans of length 0
# note that BART inserts additional mask tokens where length == 0,
# which we do not implement for now as it adds additional complexity
span_lengths = span_lengths[span_lengths > 0]
# trim to about num_tokens_to_mask tokens
cutoff_idx = np.argmin(np.abs(np.cumsum(span_lengths, 0) - num_tokens_to_mask)) + 1
span_lengths = span_lengths[:cutoff_idx]
# randomly choose starting positions for masking
token_indices = np.argwhere(is_token_mask == 1)
span_starts = np.random.permutation(token_indices.shape[0])[: span_lengths.shape[0]]
# prepare mask
masked_indices = np.array(token_indices[span_starts])
mask = np.full_like(input_ids, fill_value=False)
# mask starting positions
for mi in masked_indices:
mask[tuple(mi)] = True
span_lengths -= 1
# fill up spans
max_index = input_ids.shape[1] - 1
remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)
while np.any(remaining):
masked_indices[remaining, 1] += 1
for mi in masked_indices:
mask[tuple(mi)] = True
span_lengths -= 1
remaining = (span_lengths > 0) & (masked_indices[:, 1] < max_index)
# place the mask tokens
mask[np.where(special_tokens_mask_inputs)] = False
input_ids[np.where(mask)] = tokenizer.mask_token_id
if not do_permute:
labels[np.where(mask == 0)] = -100
else:
labels[np.where(special_tokens_mask_labels)] = -100
# remove mask tokens that are not starts of spans
to_remove = (mask == 1) & np.roll((mask == 1), 1, 1)
new_input_ids = np.full_like(input_ids, fill_value=tokenizer.pad_token_id)
for i, example in enumerate(input_ids):
new_example = example[~to_remove[i]]
new_input_ids[i, : new_example.shape[0]] = new_example
return new_input_ids, labels
def group_texts(examples, max_seq_length):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= max_seq_length:
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
def main():
pass
if __name__ == '__main__':
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
text = [f"I have never seen a man eating such a large cookie in one sitting.{tokenizer.pad_token}Wow!",
"In the evening he often breaks into the bakery to much on cookies and milk"]
# For the example we first need to work with a batch (because group_texts is batched)
    # and then convert it back to a list of samples (to test the collator)
encoded = tokenizer(text, padding=True, return_attention_mask=False)
max_len = max(len(e) for e in encoded["input_ids"])
encoded = group_texts(encoded, max_len//2)
print(f"input_ids after group (len: {max_len//2})", tokenizer.batch_decode(encoded["input_ids"]))
# Convert batch back to list of samples and collate
pre_collate = [{k: seq} for k, v in encoded.items() for seq in v]
expected_mask_ratio = 0.3
batch = collate(pre_collate, tokenizer, mask_ratio=expected_mask_ratio)
print("input_ids after collate", tokenizer.batch_decode(batch["input_ids"]))
# Calculate mask coverage
unique, counts = np.unique(batch["input_ids"], return_counts=True)
maskcounts = dict(zip(unique, counts))[tokenizer.mask_token_id]
# Count the number of tokens but exclude special tokens
ntokens = sum([len([t for t in e if t not in tokenizer.all_special_ids]) for e in encoded["input_ids"]])
print(f"produced masks coverage (expected {expected_mask_ratio})",
maskcounts / ntokens, f"({maskcounts}/{ntokens})")
ntokens_after_collate = sum([len([t for t in e if t not in tokenizer.all_special_ids]) for e in batch["input_ids"]])
print(f"no. tokens before {ntokens}, no. tokens after collate {ntokens_after_collate}")
```
The output will be something like this:
```
input_ids after group (len: 10) ['<s>I have never seen a man eating such a', ' large cookie in one sitting.<pad>Wow!</s>', '<s>In the evening he often breaks into the bakery', ' to much on cookies and milk</s><pad><pad><pad>']
input_ids after collate ['<s>I have never seen a<mask> eating<mask><pad>', ' large cookie<mask> one sitting.<pad>Wow!</s>', '<s>In the evening he<mask> the bakery<pad><pad>', '<pad><pad> to much on cookies and milk</s><pad>']
produced masks coverage (expected 0.3) 0.125 (4/32)
no. tokens before 32, no. tokens after collate 25
```
### Expected behavior
Am I wrong in expecting a masking coverage that is closer to 0.3 (here it is only 0.125)?
In addition, I find it very odd that the last sequence suddenly has _prepended_ padding tokens. I found that this does not always happen (there is some random sampling involved, so results vary), but I don't think it should ever happen.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18890/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18889
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18889/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18889/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18889/events
|
https://github.com/huggingface/transformers/issues/18889
| 1,361,105,813
|
I_kwDOCUB6oc5RINOV
| 18,889
|
Larger Logits != Larger Probability
|
{
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @Hannibal046 👋 \r\n\r\nThe sentences are sorted by the sum of the scores (i.e. logits with potential modifiers on top), as you wrote. However, if you want to map it back to probabilities, the operator to apply is the product, not the sum :) The sum of logarithms corresponds to the logarithm of the product. Applying the product instead, you'll see the examples are correctly sorted.\r\n\r\nFor more info, see [our blog post](https://huggingface.co/blog/how-to-generate)\r\n\r\n___________________________\r\n\r\nSince the original question is solved, I'm closing the issue. Feel free to reopen if you find related bugs!"
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
### System Info
transformers version: 4.22.0.dev0
Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10
Python version: 3.8.13
Huggingface_hub version: 0.8.1
PyTorch version (GPU?): 1.12.1+cu113 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:
### Who can help?
@gante @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BartTokenizer,BartForConditionalGeneration
model_path = "/data/pretrained_model/bart_base"
toker = BartTokenizer.from_pretrained(model_path)
model = BartForConditionalGeneration.from_pretrained(model_path)
input_tokens = ["what do you think it ? huggingface is a great library. And I enjoy it very much",
"transformers is so good"]
batch_size = 2
num_beams = 10
max_length = 5
num_return_sequences = 5
input_ids = toker(input_tokens,return_tensors='pt',padding=True).input_ids
output = model.generate(input_ids,max_length=max_length,num_beams=num_beams,num_return_sequences=num_return_sequences,
return_dict_in_generate=True,output_scores=True)
def get_logits_and_probs(output,num_return_sequence,batch_size,eos_token_id):
"""
using for-loop to get positional-wise logits and probability
"""
import torch
total = num_return_sequence * batch_size
token_logits = [[] for _ in range(total)]
token_probs = [[] for _ in range(total)]
continue_or_not = [True for _ in range(total)]
for time_step in range(len(output.scores)):
cur_scores = output.scores[time_step] ## num_beam,vocab_size
for idx in range(total):
cur_beam = output.beam_indices[idx][time_step]
cur_token = output.sequences[idx][time_step+1] ## decoder_start_token_id
if continue_or_not[idx]:
token_probs[idx].append(torch.softmax(cur_scores[cur_beam],dim=-1)[cur_token].item())
token_logits[idx].append(cur_scores[cur_beam][cur_token].item())
if cur_token==eos_token_id:
continue_or_not[idx]=False
return token_logits,token_probs
token_logits,token_probs = get_logits_and_probs(output,num_return_sequences,batch_size,toker.eos_token_id)
def avg(ls):
return sum(ls)/len(ls)
## check if my get_logits_and_probs function is correct by compare it with output.sequences_scores
for idx in range(num_return_sequences*batch_size):
if idx == num_return_sequences:
print("*"*20)
print(avg(token_logits[idx]),output.sequences_scores[idx].item())
print("probability")
for idx in range(num_return_sequences*batch_size):
if idx == num_return_sequences:
print("*"*20)
print(avg(token_probs[idx]))
```


### Expected behavior
I find that the order given by beam search is determined by `sum(logits)` rather than `sum(probability)`. I am not sure if this is correct; intuitively, the probability is a relative value that is comparable between tokens generated at different time steps, but logits are not.
The example above shows that the 5th sequence given by beam search actually has a higher probability than the 2nd sequence.
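For reference, the sum of log-probabilities equals the log of the product of probabilities, so ranking candidates by summed log scores agrees with ranking by the product — but not with ranking by the arithmetic mean of probabilities. A minimal sketch with hypothetical per-token probabilities:

```python
import math

# Two candidate sequences with hypothetical per-token probabilities.
seq_a = [0.9, 0.1, 0.9]  # product = 0.081
seq_b = [0.5, 0.5, 0.5]  # product = 0.125

def sum_log_prob(probs):
    # Sum of logs == log of the product.
    return sum(math.log(p) for p in probs)

def product(probs):
    out = 1.0
    for p in probs:
        out *= p
    return out

# Ranking by summed log-probabilities agrees with ranking by the product...
print(sum_log_prob(seq_a) < sum_log_prob(seq_b))  # True
print(product(seq_a) < product(seq_b))            # True
# ...but the arithmetic mean of probabilities ranks them the other way round.
print(sum(seq_a) / 3 > sum(seq_b) / 3)            # True
```

This is why comparing `avg(token_probs)` across beams can disagree with the ordering returned by `generate`.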
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18889/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18888
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18888/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18888/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18888/events
|
https://github.com/huggingface/transformers/issues/18888
| 1,361,056,567
|
I_kwDOCUB6oc5RIBM3
| 18,888
|
Longformer TF int32 vs int64
|
{
"login": "ichenjia",
"id": 3719451,
"node_id": "MDQ6VXNlcjM3MTk0NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3719451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ichenjia",
"html_url": "https://github.com/ichenjia",
"followers_url": "https://api.github.com/users/ichenjia/followers",
"following_url": "https://api.github.com/users/ichenjia/following{/other_user}",
"gists_url": "https://api.github.com/users/ichenjia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ichenjia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ichenjia/subscriptions",
"organizations_url": "https://api.github.com/users/ichenjia/orgs",
"repos_url": "https://api.github.com/users/ichenjia/repos",
"events_url": "https://api.github.com/users/ichenjia/events{/privacy}",
"received_events_url": "https://api.github.com/users/ichenjia/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I believe you'll find the solution in [#13632](https://github.com/huggingface/transformers/issues/13632)\n\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
### System Info
Transformers Version: 4.20.0.dev0
Ubuntu 20
Python 3.8
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
@ibeltagy
I am trying an example of fine-tuning Longformer and got the following error:
`TypeError: Input 'updates' of 'TensorScatterUpdate' Op has type int32 that does not match type int64 of argument 'tensor'.`
Not sure what's going on. Here is my code example. Any help would be great:
```python
from transformers import LongformerTokenizer, LongformerTokenizerFast, TFLongformerForSequenceClassification
from datasets import Dataset
import tensorflow as tf
import pickle
import numpy as np

tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')
model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=2, gradient_checkpointing=True)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=tf.metrics.SparseCategoricalAccuracy(),
)

my_dict = {'text': ["random text 1", "random text 2", "random text 3"],
           'label': np.array([0, 0, 1], dtype=np.int64)}
dataset = Dataset.from_dict(my_dict)
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets.shuffle(seed=42)

from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")

tf_train_dataset = small_train_dataset.to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=["labels"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)

model.fit(tf_train_dataset, batch_size=1)
```
### Expected behavior
Not giving the error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18888/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18887
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18887/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18887/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18887/events
|
https://github.com/huggingface/transformers/issues/18887
| 1,361,056,098
|
I_kwDOCUB6oc5RIBFi
| 18,887
|
Incorrect size of input for 1st strided window length in `Perplexity of fixed-length models`
|
{
"login": "ekagra-ranjan",
"id": 3116519,
"node_id": "MDQ6VXNlcjMxMTY1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3116519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekagra-ranjan",
"html_url": "https://github.com/ekagra-ranjan",
"followers_url": "https://api.github.com/users/ekagra-ranjan/followers",
"following_url": "https://api.github.com/users/ekagra-ranjan/following{/other_user}",
"gists_url": "https://api.github.com/users/ekagra-ranjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekagra-ranjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekagra-ranjan/subscriptions",
"organizations_url": "https://api.github.com/users/ekagra-ranjan/orgs",
"repos_url": "https://api.github.com/users/ekagra-ranjan/repos",
"events_url": "https://api.github.com/users/ekagra-ranjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekagra-ranjan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Yes, it looks like the PPL is not 19.64 as advertised. Would you like to make a PR with the suggested changes? It all looks good to me.",
"Sure @sgugger. On it."
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cpu (False)
- Tensorflow version (GPU?): 2.6.4 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
### Who can help?
@sgugger, @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. [The example script for finding the perplexity of a fixed-length model](https://huggingface.co/docs/transformers/perplexity) using strided windows takes a shorter window size for the 1st set of inputs.
For a demo, let us take a subset of the test data in the script and print `begin_loc`, `end_loc`, and `trg_len`. The example script picks a window of [0:512] for the `input_ids` in the 1st pass and then picks [0:1024] in the 2nd pass. The size of `input_ids` in the 1st pass is shorter than expected: it should have been [0:1024] in the 1st pass itself, since the model's max length is 1024.
This leads to a higher overall PPL, because the outputs in the 2nd pass get a smaller context window. This can be seen in the shared notebook, which prints these stats for the **example script** and the **proposed script** for comparison: https://www.kaggle.com/code/ekagra/perplexity-contrib-small?scriptVersionId=104864785
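The window-index computation with the proposed fix can be sketched as follows (a minimal illustration with no model involved; `strided_windows` is a hypothetical helper name, assuming a model max length of 1024 and a stride of 512):

```python
def strided_windows(seq_len, max_length=1024, stride=512):
    """Compute (begin_loc, end_loc, trg_len) for each strided pass."""
    windows = []
    prev_end = 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        trg_len = end - prev_end  # only the new tokens contribute to the loss
        windows.append((begin, end, trg_len))
        prev_end = end
        if end == seq_len:
            break
    return windows

# The first window already spans the model's full context, [0:1024].
print(strided_windows(2048))
```

With this indexing, every pass after the first still scores only `stride` new tokens, but conditions them on the model's full context window.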
2. The PPL for GPT2-large when running the example script comes out to 16.44 instead of ~~19.64~~ 16.53. Maybe an improvement was made in the tokenizer or in the GPT2 model definition after the example script was written, which improves the PPL?
Sharing notebooks with the PPL computed on the entire test data for the [example script](https://www.kaggle.com/code/ekagra/perplexity-contrib/notebook?scriptVersionId=104846069) and the [proposed script](https://www.kaggle.com/code/ekagra/perplexity-contrib?scriptVersionId=104846058). Both approaches obtain the same PPL of 16.44 because we are averaging over many numbers, so the small difference in NLLs in the first two windows vanishes. The difference, however, can be seen in the previous notebook, which computes the PPL over a smaller test set.
If this makes sense then I could raise a PR for this.
EDIT 1: replaced 19.64 with 16.53, which is the right metric to look at for a stride size of 512.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18887/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18886
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18886/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18886/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18886/events
|
https://github.com/huggingface/transformers/issues/18886
| 1,361,031,641
|
I_kwDOCUB6oc5RH7HZ
| 18,886
|
While trying to train seq2seq distillation model i am getting an error message that __init__() got an unexpected keyword argument 'weights_summary'.
|
{
"login": "techthiyanes",
"id": 25921035,
"node_id": "MDQ6VXNlcjI1OTIxMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25921035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techthiyanes",
"html_url": "https://github.com/techthiyanes",
"followers_url": "https://api.github.com/users/techthiyanes/followers",
"following_url": "https://api.github.com/users/techthiyanes/following{/other_user}",
"gists_url": "https://api.github.com/users/techthiyanes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techthiyanes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techthiyanes/subscriptions",
"organizations_url": "https://api.github.com/users/techthiyanes/orgs",
"repos_url": "https://api.github.com/users/techthiyanes/repos",
"events_url": "https://api.github.com/users/techthiyanes/events{/privacy}",
"received_events_url": "https://api.github.com/users/techthiyanes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This example is not maintained anymore. It was written for an older version of PyTorch Lightning, so you probably need to downgrade to what was the last release at the time around its release (roughly 2 years ago).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
### System Info
Transformer version : 4.21.2
pytorch-lightning 1.7.4
Used colab to test
Please find below colab link
https://colab.research.google.com/drive/1INGpr5nV2qnb8wKf2FAc0VqgPQWSPFdZ#scrollTo=DC8Tj2qtlfiZ
### Who can help?
@sgugger
_No response_
### Information
- https://github.com/huggingface/transformers/blob/main/examples/research_projects/seq2seq-distillation/distillation.py
### Tasks
- XSUM dataset with seq2seq distillation
### Reproduction
Kindly try to train the below snippet.
!python /content/transformers/examples/research_projects/seq2seq-distillation/distillation.py \
--teacher facebook/bart-large-xsum \
--data_dir /content/transformers/examples/research_projects/seq2seq-distillation/xsum \
--tokenizer_name facebook/bart-large-xsum \
--student_decoder_layers 6 --student_encoder_layers 12 \
--freeze_encoder --freeze_embeds \
--learning_rate=3e-4 \
--do_train \
--do_predict \
--fp16 --fp16_opt_level=O1 \
--val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 \
--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \
--model_name_or_path IGNORED \
--alpha_hid=3. \
--train_batch_size=16 --eval_batch_size=16 --gradient_accumulation_steps=2 \
--sortish_sampler \
--num_train_epochs=6 \
--warmup_steps 500 \
--output_dir distilbart_xsum_12_6 \
--weights_summary None \
"$@"
### Expected behavior
Is it something related to the PyTorch Lightning version that I installed?
Error stacktrace:
Global seed set to 42
Traceback (most recent call last):
File "/content/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 454, in <module>
main(args)
File "/content/transformers/examples/research_projects/seq2seq-distillation/finetune.py", line 429, in main
logger=logger,
File "/content/transformers/examples/research_projects/seq2seq-distillation/lightning_base.py", line 387, in generic_train
**train_params,
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 2449, in from_argparse_args
return from_argparse_args(cls, args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/argparse.py", line 72, in from_argparse_args
return cls(**trainer_kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
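This research example is unmaintained and was written against an older PyTorch Lightning, whose `Trainer` still accepted `weights_summary`; the argument has since been removed. A hedged sketch of pinning an older release before rerunning the script (the exact version bound is an assumption, not confirmed by this thread):

```shell
# Assumption: a pre-1.7 PyTorch Lightning release still accepts
# Trainer(weights_summary=...); pin one before rerunning distillation.py.
pip install "pytorch-lightning<1.7"
```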
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18886/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18885
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18885/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18885/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18885/events
|
https://github.com/huggingface/transformers/issues/18885
| 1,360,982,650
|
I_kwDOCUB6oc5RHvJ6
| 18,885
|
Can't find ‘romanian_postprocessing.md’ file
|
{
"login": "sriatz",
"id": 66962833,
"node_id": "MDQ6VXNlcjY2OTYyODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/66962833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sriatz",
"html_url": "https://github.com/sriatz",
"followers_url": "https://api.github.com/users/sriatz/followers",
"following_url": "https://api.github.com/users/sriatz/following{/other_user}",
"gists_url": "https://api.github.com/users/sriatz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sriatz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sriatz/subscriptions",
"organizations_url": "https://api.github.com/users/sriatz/orgs",
"repos_url": "https://api.github.com/users/sriatz/repos",
"events_url": "https://api.github.com/users/sriatz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sriatz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
### System Info
In this model card: https://huggingface.co/facebook/mbart-large-en-ro, it says ' Instructions in romanian_postprocessing.md'. But I cannot find romanian_postprocessing.md.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18885/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18884
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18884/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18884/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18884/events
|
https://github.com/huggingface/transformers/issues/18884
| 1,360,965,058
|
I_kwDOCUB6oc5RHq3C
| 18,884
|
Generating with Flax fails when using Causal Language models
|
{
"login": "SamKG",
"id": 15336495,
"node_id": "MDQ6VXNlcjE1MzM2NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/15336495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamKG",
"html_url": "https://github.com/SamKG",
"followers_url": "https://api.github.com/users/SamKG/followers",
"following_url": "https://api.github.com/users/SamKG/following{/other_user}",
"gists_url": "https://api.github.com/users/SamKG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamKG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamKG/subscriptions",
"organizations_url": "https://api.github.com/users/SamKG/orgs",
"repos_url": "https://api.github.com/users/SamKG/repos",
"events_url": "https://api.github.com/users/SamKG/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamKG/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey, I just ran into the same issue.\r\nFor me it does not occur when I explicitly specify the `pad_token_id` as in this example:\r\n```\r\nmodel.generate(\r\n prompt_tokenized,\r\n params=model.params,\r\n pad_token_id=50256,\r\n)\r\n```\r\nI'm also using the `\"EleutherAI/gpt-neo-1.3B\"` model, one might need to adjust the `pad_token_id` for different models / tokenizers.\r\n\r\n\r\nYou can also use the workaround as in the test here: https://github.com/huggingface/transformers/blob/a541d97477a8901e37e5f850f2cd707ffc82445b/tests/generation/test_generation_flax_utils.py#L83-L85\r\n\r\n-> just set `model.config.pad_token_id = model.config.eos_token_id` before generating.",
"Hey @SamKG!\r\n\r\nAs @maxidl has kindly pointed out, the `pad_token_id` needs to be specified for autoregressive generation.\r\n\r\nYou can set this to `tokenizer.pad_token_id` **if** the tokenizer has a `pad_token_id` defined:\r\n```python\r\nif tokenizer.pad_token_id is not None:\r\n model.config.pad_token_id = tokenizer.pad_token_id\r\n```\r\n\r\nOtherwise, setting it to the `eos_token_id` is possible:\r\n```python\r\nmodel.config.pad_token_id = model.config.eos_token_id\r\n```\r\n\r\nThe `pad_token_id` should always either be passed to the `.generate()` method or specified in the `config`. \r\n\r\n@patrickvonplaten IMO it's worth a warning message when the `pad_token_id` is omitted! Would prevent hidden errors such as this one.\r\n\r\nAlso cc'ing @patil-suraj who I believe has come across this issue before with GPT-J models",
"@sanchit-gandhi added to the `generate` to do list 👍 ",
"Thanks @gante ",
"#21009 Fixes it -- Flax now assumes the value of `pad_token_id` when it is `None` and `eos_token_id` is not `None`, like TF and PT do. This should also be the case in the examples above.\r\n\r\n@SamKG @maxidl I'm closing this issue as it seems to be solved, but feel free to reopen it with further queries :)"
] | 1,662
| 1,672
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.17
- JaxLib version: 0.3.15
- Using GPU in script?: Yes (Nvidia A100)
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten
@Narsil
@patil-suraj
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code snippet:
```python
from jax import numpy as jnp
import transformers
model = transformers.FlaxAutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
sentence = "Paris is one of the densest populated areas in Europe."
input_ids = tokenizer(sentence, return_tensors="jax")["input_ids"]
model.generate(input_ids)
```
### Expected behavior
Expected behavior is that the model generates completions for the given input id.
Observed behavior is that the following error is thrown:
```
File ~/.conda/envs/lm-extraction/lib/python3.10/site-packages/jax/_src/lax/lax.py:4577, in _check_same_dtypes(name, ignore_fp_precision, *ttypes)
4575 equiv = _JNP_FUNCTION_EQUIVALENTS[name]
4576 msg += f" (Tip: jnp.{equiv} is a similar function that does automatic type promotion on inputs)."
-> 4577 raise TypeError(msg.format(name, ", ".join(map(str, types))))
TypeError: lax.dynamic_update_slice requires arguments to have the same dtypes, got int32, float32.
```
This seems to be a type mismatch error
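The dtype clash goes away once a `pad_token_id` is available to generation (e.g. by setting `model.config.pad_token_id = model.config.eos_token_id`). A minimal pure-Python sketch of that fallback logic; `resolve_pad_token_id` is a hypothetical helper name for illustration, not a transformers API:

```python
# Hypothetical helper mirroring the workaround: when no pad token is
# configured, reuse the EOS token id so generation fills integer token
# buffers with an int instead of a float placeholder.
def resolve_pad_token_id(pad_token_id, eos_token_id):
    if pad_token_id is None and eos_token_id is not None:
        return eos_token_id
    return pad_token_id

print(resolve_pad_token_id(None, 50256))  # 50256 (falls back to EOS)
print(resolve_pad_token_id(1, 50256))     # 1 (explicit pad id wins)
```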
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18884/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18883
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18883/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18883/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18883/events
|
https://github.com/huggingface/transformers/issues/18883
| 1,360,941,199
|
I_kwDOCUB6oc5RHlCP
| 18,883
|
BART decoder output length changes
|
{
"login": "elangovana",
"id": 5715658,
"node_id": "MDQ6VXNlcjU3MTU2NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5715658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elangovana",
"html_url": "https://github.com/elangovana",
"followers_url": "https://api.github.com/users/elangovana/followers",
"following_url": "https://api.github.com/users/elangovana/following{/other_user}",
"gists_url": "https://api.github.com/users/elangovana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elangovana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elangovana/subscriptions",
"organizations_url": "https://api.github.com/users/elangovana/orgs",
"repos_url": "https://api.github.com/users/elangovana/repos",
"events_url": "https://api.github.com/users/elangovana/events{/privacy}",
"received_events_url": "https://api.github.com/users/elangovana/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @elangovana , \r\n\r\n> The sequence length 5 seems to vary based on the input size. Doesn't this mean that the output can never be longer than the output?\r\n\r\nThe output can be longer than the input because Bart is an encoder-decoder model so the decoder can output variable length of tokens independent of the len of input tokens. This is because the decoder works in an auto-regressive manner.\r\n\r\nBart is an [encoder-decoder model](https://huggingface.co/blog/encoder-decoder) so the length of 5 for your example is something which will be consumed by the encoder [art of Bart. The length of target is something which the decoder part of Bart will be concerned about. Bart like any other model accepts `input_ids` and `labels`. The `input_ids` are input to the encoder whereas the `labels` are the expected output for the decoder. \r\n\r\nNow how do we get the `input_ids` for the decoder of Bart?\r\n* During training: you would provide the model with labels which will be shifted internally by HF and will be fed as input to the decoder (for teacher forcing style of decoding)\r\n* During Inference/test: you would generate the decoder output **auto-regressively** wherein the output of the decoder in previous time step becomes the decoder input to the current time step \r\n\r\nHow long can the decoder output be? This is decided by your choice of stopping criteria of the auto-regressive decoding, e.g., :\r\n* stop when the number of tokens in decoded sequence reach a max limit\r\n* stop when all the decoded sequences in a batch have outputted a terminating token like EOS\r\n\r\n[Check this out ](https://huggingface.co/blog/how-to-generate) for more info on `generate` function in HF and auto-regressive decoding.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
### System Info
Hi, for BART conditional generation, why does the output sequence length change?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
m = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
t = AutoTokenizer.from_pretrained("facebook/bart-base")
b = t(["This is test"] ,
padding='longest',
truncation=True,
is_split_into_words=False,
max_length=512,
return_tensors='pt')
print("input data: ", b)
print("model logit output shape: ", m(**b.data).logits.shape)
```
```text
input data: {'input_ids': tensor([[ 0, 713, 16, 1296, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])}
model logit output shape: torch.Size([1, 5, 50265])
```
The sequence length 5 seems to vary based on the input size. Doesn't this mean that the output can never be longer than the input?
In addition, it affects how the `y` target sequence length is determined. The `y` target tokenization cannot be set to `max_length`, nor to `longest`, because then when the loss function is computed the predicted sequence length doesn't match the target.
```
batch_y = t(
text=label_texts,
padding='max_length',
truncation=True,
is_split_into_words=False,
max_length=self._max_length,
return_tensors='pt',
)
```
### Expected behavior
I am not sure how varying output length works with having to compute the loss function
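For training, the decoder's output length always equals the label length: the labels themselves, shifted right with a start token prepended, become the decoder inputs (teacher forcing), so logits and labels align for the loss. A minimal pure-Python sketch of that shift (illustrative only; the real transformers implementation works on tensors and also replaces `-100` padding in labels):

```python
# Illustrative "shift right": decoder input at step t is the label
# token from step t-1, with a start token prepended.
def shift_tokens_right(labels, decoder_start_token_id):
    return [[decoder_start_token_id] + row[:-1] for row in labels]

labels = [[0, 713, 16, 1296, 2]]          # target token ids
decoder_inputs = shift_tokens_right(labels, 2)
print(decoder_inputs)  # [[2, 0, 713, 16, 1296]]
# len(decoder_inputs[0]) == len(labels[0]), so logits align with labels.
```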
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18883/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18882
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18882/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18882/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18882/events
|
https://github.com/huggingface/transformers/issues/18882
| 1,360,932,476
|
I_kwDOCUB6oc5RHi58
| 18,882
|
Tried running Stable Diffusion GRisk GUI.exe and no go.
|
{
"login": "nymb",
"id": 10681246,
"node_id": "MDQ6VXNlcjEwNjgxMjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/10681246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nymb",
"html_url": "https://github.com/nymb",
"followers_url": "https://api.github.com/users/nymb/followers",
"following_url": "https://api.github.com/users/nymb/following{/other_user}",
"gists_url": "https://api.github.com/users/nymb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nymb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nymb/subscriptions",
"organizations_url": "https://api.github.com/users/nymb/orgs",
"repos_url": "https://api.github.com/users/nymb/repos",
"events_url": "https://api.github.com/users/nymb/events{/privacy}",
"received_events_url": "https://api.github.com/users/nymb/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Sorry the GUI did open on another monitor I had turned off but the terminal did mention some issues."
] | 1,662
| 1,662
| 1,662
|
NONE
| null |
### System Info
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001A24A9D15E0>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
torch\_jit_internal.py:751: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001A24A9E4940>.
warnings.warn(f"Unable to retrieve source for @torch.jit._overload function: {func}.")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 10 files to the new cache system
0%| | 0/10 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "transformers\utils\hub.py", line 1077, in <module>
File "transformers\utils\hub.py", line 1040, in move_cache
File "transformers\utils\hub.py", line 997, in move_to_new_cache
File "huggingface_hub\file_download.py", line 841, in _create_relative_symlink
Not sure how to proceed..
@LysandreJik
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Tried running Stable Diffusion GRisk GUI.exe and no go.
### Expected behavior
Expected the GUI to load..
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18882/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18881
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18881/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18881/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18881/events
|
https://github.com/huggingface/transformers/issues/18881
| 1,360,924,104
|
I_kwDOCUB6oc5RHg3I
| 18,881
|
Flax BART training fails when evaluating
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This seems to originate when this basic collate function is unsuccessful in creating valid numpy arrays, i.e., with one unspecified dimension, like `(2,)` instead of `(2, 16)` (bsz x seq_len).\r\n\r\nhttps://github.com/huggingface/transformers/blob/65fb71bc762c46bb067306c1fd083b1cba87a095/examples/flax/language-modeling/run_bart_dlm_flax.py#L287-L289\r\n\r\nI haven't dug further in why this occurs here, but presumably because the tokenizer is not explicitly padding to max length so in edge cases the sequences might not be of the same length? (The error mentioned above happened in the last batch (as you can see in the progress bar, 77/78)) so probably the last batch contained the last sequence which was not of the expected size.",
"Hey @BramVanroy! Thank you for posting this issue; the Flax BART training example is fresh off the press, so there might well be some small issues to fix.\r\n\r\nBased on the traceback, I would agree that this is likely an issue related to tokenizer padding. Have you tried padding to max length? This seems like a very sensible next step!\r\n\r\nKeep me posted with how you go on this! Happy to help if this remains a road-block.",
"I haven't figured this out yet. From reading the code, all blocks should be of size sequence length and the small remainder dropped:\r\n\r\nhttps://github.com/huggingface/transformers/blob/cfd623a859890c6d106610d3c688064eadc7bd61/examples/flax/language-modeling/run_bart_dlm_flax.py#L657-L660\r\n\r\nSo I haven't looked further how this is being caused.\r\n\r\nBy the way, also getting these UserWarnings:\r\n\r\n```\r\nSome donated buffers were not usable: ShapedArray(float32[1,50265]), ShapedArray(float32[1026,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), 
ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), 
ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[1026,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), 
ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[3072]), ShapedArray(float32[768,3072]), ShapedArray(float32[768]), ShapedArray(float32[3072,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768,768]), ShapedArray(float32[768]), ShapedArray(float32[768]), ShapedArray(float32[50265,768]).\r\nSee an 
explanation at https://jax.readthedocs.io/en/latest/faq.html#buffer-donation.\r\n warnings.warn(f\"Some donated buffers were not usable: {', '.join(unused_donations)}.\\n{msg}\")\r\n```\r\n\r\nI am working on transposing `fairseq`'s data implementation to PyTorch and adding a full training example in transformers. Would you be open to code-reviewing it when it's finished? The train script itself will probably borrow a lot from your code here if that's okay!",
"> From reading the code, all blocks should be of size sequence length and the small remainder dropped:\r\n\r\nCertainly, that should ideally be the case!\r\n\r\n> By the way, also getting these UserWarnings:\r\n\r\nAre those UserWarnings being thrown in the parameter update step? It suggests to me a mis-match between the update and parameter dtypes! \r\n\r\n> I am working on transposing fairseq's data implementation to PyTorch and adding a full training example in transformers. \r\n\r\nMore than happy to perform a code-review when finished!\r\n\r\nAlso cc'ing @duongna21 who must take all credit for implementing the Flax BART training example!",
" Hi @BramVanroy, I'm the main author of `run_bart_dlm_flax`. It's unfortunate that I cannot reproduce your bug. Ran your code and things work as expected except for the `nan` loss bug that you can handle [like this](https://github.com/huggingface/transformers/pull/18458#discussion_r976303806).\r\n\r\n\r\n\r\nAfter set `drop_last=True`, val loss is fine:\r\n\r\n\r\n\r\n",
"@duongna21 Thanks for chiming in! I am going to close this as I am not working on this directly any more. I assume that drop_last should indeed fix the issue."
] | 1,662
| 1,663
| 1,663
|
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.9.1
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.17
- JaxLib version: 0.3.15
### Who can help?
@sgugger @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Make a dir `./my_bart_model`
2. Train a tokenizer (let's use a small Dutch corpus). **Note**: the repo README uses `tokenizer.save`, but that only saves the tokenizer config and not the merges, so I think this is a second issue that should be fixed. Below I use `save_model` instead.
```python
from datasets import load_dataset
from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer
# load dataset
dataset = load_dataset("dbrd", "plain_text", split="train")
# Instantiate tokenizer
tokenizer = ByteLevelBPETokenizer()
def batch_iterator(batch_size=1000):
for i in range(0, len(dataset), batch_size):
yield dataset[i: i + batch_size]["text"]
# Customized training
tokenizer.train_from_iterator(batch_iterator(), vocab_size=50265, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
# Save files to disk
tokenizer.save_model("./my_bart_model")
```
3. Create a BART config for it
```python
from transformers import BartConfig
config = BartConfig.from_pretrained("facebook/bart-base", vocab_size=50265)
config.save_pretrained("./my_bart_model")
```
4. Train the model with a quick evaluation (command from the root of the transformers lib)
```sh
python examples/flax/language-modeling/run_bart_dlm_flax.py --output_dir ./my_bart_model --config_name ./my_bart_model --tokenizer_name ./my_bart_model --dataset_name dbrd --dataset_config_name plain_text --max_seq_length 128 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 1e-4 --warmup_steps 100 --overwrite_output_dir --logging_steps 200 --save_steps 500 --eval_steps 200
```
This leads to the following error (note also the VisibleDeprecationWarning, although that might be unrelated to the triggered error):
```
transformers/examples/flax/language-modeling/run_bart_dlm_flax.py:288: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
{k: np.array([examples[i][k] for i in range(len(examples))]) for k, v in examples[0].items()}
Evaluating ...: 99%|██████████████████████████████████████████████████████████████████▏| 77/78 [00:08<00:00, 9.03it/s]Training...: 14%|█████████▍ | 200/1429 [00:54<05:32, 3.69it/s]Epoch ... : 0%| | 0/3 [00:54<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/flax/language-modeling/run_bart_dlm_flax.py", line 964, in <module>
main()
File "transformers/examples/flax/language-modeling/run_bart_dlm_flax.py", line 896, in main
model_inputs = data_collator(samples)
File "transformers/examples/flax/language-modeling/run_bart_dlm_flax.py", line 291, in __call__
batch["decoder_input_ids"] = shift_tokens_right(
File "/home/bram/.local/share/virtualenvs/bart-tTDq1jwG/lib/python3.8/site-packages/transformers/models/bart/modeling_flax_bart.py", line 228, in shift_tokens_right
shifted_input_ids[:, 1:] = input_ids[:, :-1]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
```
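The failure is easier to see in isolation. Below is a hedged numpy sketch of `shift_tokens_right` (following the logic in `modeling_flax_bart.py`, not a verbatim copy): it assumes a 2-D `(batch, seq_len)` array, so a ragged batch that numpy collapsed into a 1-D array triggers exactly this IndexError.

```python
import numpy as np

def shift_tokens_right(input_ids: np.ndarray, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
    """Sketch of the Flax BART helper: assumes a 2-D (batch, seq_len) array."""
    shifted_input_ids = np.zeros_like(input_ids)
    shifted_input_ids[:, 1:] = input_ids[:, :-1]   # raises IndexError on a 1-D array
    shifted_input_ids[:, 0] = decoder_start_token_id
    return np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids)

# A proper 2-D batch works:
print(shift_tokens_right(np.array([[5, 6, 7, 8]]), 1, 0))  # [[0 5 6 7]]

# A 1-D array (e.g. a ragged batch that np.array could not stack into 2-D)
# reproduces the reported IndexError:
try:
    shift_tokens_right(np.array([5, 6, 7, 8]), 1, 0)
except IndexError as err:
    print(err)
```

This also hints that the deprecation warning and the crash may share a cause: once the sequences in a batch have different lengths, `np.array` cannot build a 2-D array from them.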
### Expected behavior
No errors and preferably no deprecation warnings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18881/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18880
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18880/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18880/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18880/events
|
https://github.com/huggingface/transformers/issues/18880
| 1,360,910,602
|
I_kwDOCUB6oc5RHdkK
| 18,880
|
Are there any higher version transformers compatible with transformers==3.0.2
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @CaffreyR! We would recommend you migrate your codebase from v3.0.2 to v4.x, which should then be compatible with all newer methods.\r\n\r\nWhat is your problem when trying to upgrade?",
"Hi @LysandreJik , I run the [code](https://github.com/facebookresearch/FiD) in transformers 3.0.2, it works well. But in 4.21.3, it went wrong! Many thanks!\r\n```\r\n/tokenization_t5.py:220: UserWarning: This sequence already has </s>. In future versions this behavior may lead to duplicated eos tokens being added.\r\n f\"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated\"\r\n/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5.py:220: UserWarning: This sequence already has </s>. In future versions this behavior may lead to duplicated eos tokens being added.\r\n f\"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated\"\r\nTraceback (most recent call last):\r\n File \"train_reader.py\", line 212, in <module>\r\n checkpoint_path\r\n File \"train_reader.py\", line 55, in train\r\n labels=labels.cuda()\r\n File \"/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/user/FiD/src/model.py\", line 45, in forward\r\n **kwargs\r\n File \"/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py\", line 1686, in forward\r\n encoder_last_hidden_state=encoder_outputs.last_hidden_state,\r\nAttributeError: 'tuple' object has no attribute 'last_hidden_state'\r\n```\r\n\r\n",
"You can fix the code to make it work like this on the latest version:\r\n```train_loss = model(input_ids=context_ids.cuda(), attention_mask=context_mask.cuda(), labels=labels.cuda(), return_dict=False )[0]```\r\n\r\nAs per the official docs, the forward method takes this argument which determines the format of the output:\r\n>return_dict (bool, optional) — Whether or not to return a [ModelOutput](https://huggingface.co/docs/transformers/v4.21.3/en/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.",
"Hi @saradhix , as we fixed this problems, a new problem occurred. Seems to be another version problem\r\n```\r\nTraceback (most recent call last):\r\n File \"train_reader.py\", line 213, in <module>\r\n checkpoint_path\r\n File \"train_reader.py\", line 71, in train\r\n dev_em = evaluate(model, eval_dataset, tokenizer, collator, opt)\r\n File \"train_reader.py\", line 114, in evaluate\r\n max_length=50\r\n File \"/home/user/FiD/src/model.py\", line 54, in generate\r\n max_length=max_length\r\n File \"/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/generation_utils.py\", line 1163, in generate\r\n inputs_tensor, model_input_name, model_kwargs = self._prepare_model_inputs(inputs, bos_token_id, model_kwargs)\r\n File \"/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/transformers/generation_utils.py\", line 412, in _prepare_model_inputs\r\n and self.encoder.main_input_name != self.main_input_name\r\n File \"/home/user/anaconda3/envs/fid/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1208, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'EncoderWrapper' object has no attribute 'main_input_name'\r\n```\r\n\r\n<img width=\"571\" alt=\"image\" src=\"https://user-images.githubusercontent.com/84232793/188636847-4d3b0b3d-538e-4d38-ba36-7b26a9fff938.png\">\r\n\r\n",
"The `main_input_name` attribute is something we introduced in a later version (in order to make the `generate` method work with text, vision and speech models, i.e. several modalities). It refers to the main input name of a model, like \"input_ids\" for text models, or \"pixel_values\" for vision models.\r\n\r\nEach model defines this, see for instance [here](https://github.com/huggingface/transformers/blob/09178705101b9803e7b9ea7f79a46b4c242dd4bf/src/transformers/models/resnet/modeling_resnet.py#L252) for ResNet.",
"Hi @NielsRogge , so what should I do? Should I define it myself in the model?",
"Here's the PR that introduced it: https://github.com/huggingface/transformers/pull/14803\r\n\r\nYes, it's set to \"input_ids\" by default (and overwritten by vision and speech models)",
"Hi @NielsRogge @saradhix , it is very interesting that when I do this, it says warning. \r\n```\r\n/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5.py:174: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\r\nFor now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\r\n- Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.\r\n- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\r\n- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\r\n FutureWarning,\r\n```\r\n\r\nIs there any problem with this warning? Many thanks!!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hi @NielsRogge , so what should I do? Should I define it myself in the model?\r\nI meet the same problem. And I fixed it by adding main_input_name = \"input_ids\"\r\n<img width=\"1199\" alt=\"image\" src=\"https://user-images.githubusercontent.com/74954034/236134400-01fd31cd-d156-4d41-865c-69dfd66b4425.png\">\r\n"
] | 1,662
| 1,683
| 1,667
|
NONE
| null |
### System Info
Hi. I am working with code that can only run on transformers==3.0.2; however, there are other methods that only run on a higher version. So I want to ask whether there is a higher version that is compatible with transformers==3.0.2, or compatible with little revision? Many thanks!
### Who can help?
@NielsRogge, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
Code of facebook research [FiD](https://github.com/facebookresearch/FiD)
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
NaturalQuestions | TriviaQA
### Reproduction
Run the [FiD code](https://github.com/facebookresearch/FiD) on a different version of transformers
### Expected behavior
Attribute error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18880/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18879
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18879/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18879/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18879/events
|
https://github.com/huggingface/transformers/pull/18879
| 1,360,872,900
|
PR_kwDOCUB6oc4-UOYQ
| 18,879
|
Token type ids generation func. for Bert-like models
|
{
"login": "Doohae",
"id": 80743307,
"node_id": "MDQ6VXNlcjgwNzQzMzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/80743307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Doohae",
"html_url": "https://github.com/Doohae",
"followers_url": "https://api.github.com/users/Doohae/followers",
"following_url": "https://api.github.com/users/Doohae/following{/other_user}",
"gists_url": "https://api.github.com/users/Doohae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Doohae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doohae/subscriptions",
"organizations_url": "https://api.github.com/users/Doohae/orgs",
"repos_url": "https://api.github.com/users/Doohae/repos",
"events_url": "https://api.github.com/users/Doohae/events{/privacy}",
"received_events_url": "https://api.github.com/users/Doohae/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18879). All of your documentation changes will be reflected on that endpoint."
] | 1,662
| 1,663
| 1,663
|
NONE
| null |
# What does this PR do?
For a comma-separated text sequence input, the BERT tokenizer nicely creates input_ids with the [SEP] token inside and proper token_type_ids as well. However, for input_ids that already contain the [SEP] token id, I couldn't find any function to generate proper token_type_ids.
So I made a simple function that creates proper token_type_ids with padding.
Input: list or tensor / pad_to_multiple_of (target length) / tokenizer (to recognize the sep token id and add the proper pad token id)
Output: tensor of token_type_ids with pad id
Hope this can be used by those who want to override the DataCollator class.
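A minimal sketch of what such a helper could look like (all names here are hypothetical illustrations, not the actual PR code): the segment id flips to 1 after the first [SEP], and padding positions get 0, matching BERT's token_type_ids convention.

```python
from typing import List, Optional

def make_token_type_ids(input_ids: List[int], sep_token_id: int, pad_token_id: int,
                        pad_to_multiple_of: Optional[int] = None) -> List[int]:
    """Hypothetical sketch: derive token_type_ids from input_ids that already
    contain [SEP]; pad positions and the first segment get 0, the second gets 1."""
    token_type_ids = []
    segment = 0
    for tok in input_ids:
        if tok == pad_token_id:
            token_type_ids.append(0)
            continue
        token_type_ids.append(segment)
        if tok == sep_token_id:
            segment = min(segment + 1, 1)  # BERT only distinguishes two segments
    if pad_to_multiple_of:
        remainder = len(token_type_ids) % pad_to_multiple_of
        if remainder:
            token_type_ids.extend([0] * (pad_to_multiple_of - remainder))
    return token_type_ids

# e.g. [CLS] a b [SEP] c [SEP] -> first segment 0, second segment 1
print(make_token_type_ids([101, 7, 8, 102, 9, 102], sep_token_id=102, pad_token_id=0))
```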
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [V] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- new function for Bert-like models collator : @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18879/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18879",
"html_url": "https://github.com/huggingface/transformers/pull/18879",
"diff_url": "https://github.com/huggingface/transformers/pull/18879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18879.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18878
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18878/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18878/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18878/events
|
https://github.com/huggingface/transformers/issues/18878
| 1,360,759,839
|
I_kwDOCUB6oc5RG4wf
| 18,878
|
Can't disable INTEGRATION_TO_CALLBACK
|
{
"login": "LZY-the-boys",
"id": 72137647,
"node_id": "MDQ6VXNlcjcyMTM3NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LZY-the-boys",
"html_url": "https://github.com/LZY-the-boys",
"followers_url": "https://api.github.com/users/LZY-the-boys/followers",
"following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}",
"gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions",
"organizations_url": "https://api.github.com/users/LZY-the-boys/orgs",
"repos_url": "https://api.github.com/users/LZY-the-boys/repos",
"events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}",
"received_events_url": "https://api.github.com/users/LZY-the-boys/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @sgugger",
"No, this code is not called when you set `report_to=\"none\"` or `report_to=[\"none\"]` since it is filtered out [here](https://github.com/huggingface/transformers/blob/6678350c01629b848aa9c41e169da5d6b8d9e7e9/src/transformers/training_args.py#L1155).\r\n\r\nSetting `report_to=[\"none\"]` in `TrainingArugments` works perfectly on my end, please make sure you are using the latest Transformers version and if the issue persists, please give us a code reproducer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,662
| 1,665
| 1,665
|
NONE
| null |
### System Info
I use transformers.4.20.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the Hugging Face Trainer, I set `report_to=['none']` in the training args to disable wandb logging, as the docs say, but a ValueError is raised. I noticed the problem comes from the following code in `transformers/integrations.py`:
```python
def get_reporting_integration_callbacks(report_to):
for integration in report_to:
if integration not in INTEGRATION_TO_CALLBACK:
raise ValueError(
f"{integration} is not supported, only {', '.join(INTEGRATION_TO_CALLBACK.keys())} are supported."
)
return [INTEGRATION_TO_CALLBACK[integration] for integration in report_to]
```
The 'none' value is not handled here, nor is any other way to disable reporting?
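For context, here is a minimal sketch of the normalization that is supposed to happen upstream, before this lookup ever runs (the helper name `normalize_report_to` is hypothetical; the real filtering lives in `TrainingArguments.__post_init__`):

```python
# Assumed list of supported integrations, for illustration only.
ALL_INTEGRATIONS = ["azure_ml", "comet_ml", "mlflow", "tensorboard", "wandb"]

def normalize_report_to(report_to):
    """Hypothetical mirror of the report_to normalization in TrainingArguments:
    'all' expands to every integration, 'none' empties the list entirely."""
    if report_to == "all":
        return list(ALL_INTEGRATIONS)
    if isinstance(report_to, str):
        report_to = [report_to]
    if "none" in report_to:
        return []  # nothing is ever passed to get_reporting_integration_callbacks
    return report_to

print(normalize_report_to(["none"]))  # []
print(normalize_report_to("wandb"))   # ['wandb']
```

If this normalization runs, `get_reporting_integration_callbacks` never sees `'none'`, which would explain why the lookup itself has no special case for it.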
### Expected behavior
I don't know if it's expected, but it's confusing me and causing inconvenience, so I would like to get an answer :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18878/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18877
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18877/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18877/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18877/events
|
https://github.com/huggingface/transformers/pull/18877
| 1,360,738,951
|
PR_kwDOCUB6oc4-T1Z7
| 18,877
|
Update no trainer scripts to include gather for metrics
|
{
"login": "arun99481",
"id": 8774735,
"node_id": "MDQ6VXNlcjg3NzQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8774735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arun99481",
"html_url": "https://github.com/arun99481",
"followers_url": "https://api.github.com/users/arun99481/followers",
"following_url": "https://api.github.com/users/arun99481/following{/other_user}",
"gists_url": "https://api.github.com/users/arun99481/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arun99481/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arun99481/subscriptions",
"organizations_url": "https://api.github.com/users/arun99481/orgs",
"repos_url": "https://api.github.com/users/arun99481/repos",
"events_url": "https://api.github.com/users/arun99481/events{/privacy}",
"received_events_url": "https://api.github.com/users/arun99481/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,662
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
# What does this PR do?
Update the run_wav2vec_pretraining_no_trainer example to include `accelerator.gather_for_metrics`.
Related to #18437
I ran the tests for 'wav2vec_pretraining_no_trainer' in 'test_pytorch_examples.py' locally.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr , @sgugger , @pacman100
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18877/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18877",
"html_url": "https://github.com/huggingface/transformers/pull/18877",
"diff_url": "https://github.com/huggingface/transformers/pull/18877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18877.patch",
"merged_at": 1662464197000
}
|